Bytes IT Community

One Command Keeps Causing "'tempdb' is full" Error

I keep getting the following error message when I run a series of SQL
commands:

Server: Msg 9002, Level 17, State 6, Line 15
The log file for database 'tempdb' is full.
Back up the transaction log for the database
to free up some log space.

I have tried "dump transaction tempdb with no_log" right before I run
the SQL command. But that doesn't help.
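
For what it's worth, one quick way to watch the log while reproducing the error is DBCC SQLPERF, which reports the log size and percentage used for every database, including tempdb (a sketch, run as a separate diagnostic step):

```sql
-- Report transaction log size and percentage used for each database;
-- run this just before and just after the failing INSERT to see how
-- full tempdb's log really is.
DBCC SQLPERF(LOGSPACE)
```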

The series of SQL commands that I am trying to run is the following:

create table #NewBatOp
(
BatchJournalID uniqueidentifier not null,
batch_nr varchar(5) null,
OperationNum varchar(3) null,
OperationHours real null,
EmployeeNum varchar(6) null,
OperationDate datetime null,
IsOverTime tinyint null
)
-- Commenting out the following INSERT line avoids the error:
insert into #NewBatOp
select
bj.BatchJournalID, bj.batch_nr, bo.opno,
bo.hrs, bo.bonno, bo.dat, bo.otflg
from batop bo
inner join BatchJournal bj on
bo.bat = bj.batch_nr and
bj.BatchJournalID in
(select BatchJournalID from BatchControl)
if ( @@error <> 0 )
goto OnError

drop table #NewBatOp
goto EndTest

OnError:
drop table #NewBatOp
print "Error: Failed to import new batch-operations into journal."

EndTest:

I have tried running the above statements in ISQL and in Query
Analyzer, and I get the same error.

I didn't have this problem before I moved the database from one
server to another.
- The OS on the old server is Windows NT,
and the SQL Server on the old server is the 2000 version.
- The OS on the new server is Windows 2000,
and the SQL Server on the new server is the 2000 version.

The tempdb settings on both servers are more or less the same. In
fact, the tempdb on the new server is much bigger than the one on the
old server. The initial size of the transaction log is the same on
both servers (and cannot be changed manually). Both the data and the
transaction log of tempdb can grow automatically in 10% increments
with no restriction on size.

The data and log of tempdb are both on one hard disk, which has 10 GB
of free space available. Moreover, the result set from the "select"
statement above is only 530 KB (around 3000 rows). I believe it is a
very small database operation, so I don't think the size has anything
to do with the error.

I don't think the "inner join" clause is the cause of the problem,
because I have used the same "inner join" clause in other queries
without any problem. As a matter of fact, I have run many other
queries that are far more complicated and create much bigger result
sets in tempdb, and they don't have this problem.

I am very puzzled by this error. Can someone give me a pointer?

Thanks in advance for any info.

Jay Chan
Jul 20 '05 #1
10 Replies



"Jay Chan" <ja******@hotmail.com> wrote in message
news:c7**************************@posting.google.com...
> I keep getting the following error message when I run a series of SQL
> commands:
>
> Server: Msg 9002, Level 17, State 6, Line 15
> The log file for database 'tempdb' is full.
> Back up the transaction log for the database
> to free up some log space.
>
> I have tried "dump transaction tempdb with no_log" right before I run
> the SQL command. But that doesn't help.


One thing to check is the Collation of the tables involved.

We had this problem occur on one of our systems and until we rebuilt the
database with the right collation we'd have the same problem.

Essentially I think what was happening in our case was that the ENTIRE
source tables would be copied into tempdb, "converted" to a matching
collation, and then the join performed. (You can't do an accurate join
across mismatched collations, because you have to determine, for
example, whether 'Joe' matches 'JOE'; depending on the collation, that
may or may not match.)
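
If mismatched collations really are forcing a whole-table conversion in tempdb, one workaround to sketch is pinning the join predicate to a single, explicitly named collation. The collation name below is only an example; the right one is whatever the databases actually use:

```sql
-- Hypothetical sketch: force both sides of the join to one explicit
-- collation so the server does not have to copy and convert an entire
-- table in tempdb first. The collation name is only an example.
select bo.opno
from batop bo
inner join BatchJournal bj
    on bo.bat = bj.batch_nr COLLATE SQL_Latin1_General_CP1_CI_AS
```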

Jul 20 '05 #2

> One thing to check is the Collation of the tables involved.

I will look into this.

> We had this problem occur on one of our systems and until we rebuilt the
> database with the right collation we'd have the same problem.

How do you get the right collation when you rebuild your database?

> Essentially I think what was happening in our case was the ENTIRE source
> tables would be copied into the tempdb and then "converted" to a matching
> collation, and then the join performed. (Since you can't do an accurate
> join on a mismatched collation since you have to determine for example if
> 'Joe' matches 'JOE'. Depending on the collation that may or may not match.)


If this is the case, I may need to change the query to reduce the
number of intermediate temporary tables that the query generates.

This is very strange because the same SQL statement worked fine before
I moved the database to the new server. I will see how this goes.

Thanks.

Jay Chan
Jul 20 '05 #3


"Jay Chan" <ja******@hotmail.com> wrote in message
news:c7*************************@posting.google.com...
> > One thing to check is the Collation of the tables involved.
>
> I will look into this.
>
> > We had this problem occur on one of our systems and until we rebuilt the
> > database with the right collation we'd have the same problem.
>
> How do you get the right collation when you rebuild your database?

Look up the syntax for CREATE DATABASE.

There's also a way (which I ALWAYS forget) to programmatically get the
default collation type of the server.

> > Essentially I think what was happening in our case was the ENTIRE source
> > tables would be copied into the tempdb and then "converted" to a matching
> > collation, and then the join performed. (Since you can't do an accurate
> > join on a mismatched collation since you have to determine for example if
> > 'Joe' matches 'JOE'. Depending on the collation that may or may not match.)
>
> If this is the case, I may need to change the query to reduce the
> number of intermediate temporary tables that the query generates.
>
> This is very strange because the same SQL statement worked fine before
> I moved the database to the new server. I will see how this goes.

The new server was probably installed with a different default collation.
This is what bit us.

> Thanks.
>
> Jay Chan

Jul 20 '05 #4

> The new server was probably installed with a different default collation.
> This is what bit us.


I will go look for it. Thanks for the pointer.

Jay Chan
Jul 20 '05 #5

> > How do you get the right collation when you rebuild your database?
>
> Look up the syntax for CREATE DATABASE.

I cannot find the collation in the current database. I have a feeling
that it may not have anything to do with the problem that I am having
(see below).

> > Essentially I think what was happening in our case was the ENTIRE
> > source tables would be copied into the tempdb and then "converted" to a
> > matching collation, and then the join performed. (Since you can't do
> > an accurate join on a mismatched collation since you have to determine
> > for example if 'Joe' matches 'JOE'. Depending on the collation that
> > may or may not match.)


I have greatly simplified the query for the purpose of testing, and I
still get the same error (and I have done a "dump transaction tempdb
with no_log" before I start the test). Because the query is very
simple, I don't think collation has anything to do with this problem.
Moreover, the source table is quite small (3500 rows, each row around
34 bytes or a bit more). The whole source table can easily fit inside
tempdb with plenty of space to spare. And I have enlarged tempdb from
10MB to 200MB. But these don't help; I still have the problem. The
following is the simple SQL query that I am talking about:

create table #NewBatOp
(
OperationNum varchar(3) null
)
-- Commenting out the following INSERT line avoids the error:
insert into #NewBatOp
select opno from batop

drop table #NewBatOp

Basically, this simple query copies one field from all the rows in the
source table "batop" to a temporary table "#NewBatOp". There is really
nothing complicated about this query.

Does anyone know anything about this problem?

Thanks for any info.

Jay Chan
Jul 20 '05 #6


"Jay Chan" <ja******@hotmail.com> wrote in message
news:c7**************************@posting.google.com...
> Finally, we have fixed the problem after we contacted Microsoft
> Tech Support.
>
> We need to auto-grow the transaction log of tempdb by "10MB" instead
> of by "10%". If I understand this correctly, the transaction log of
> tempdb was initially set to 1MB (somehow I cannot change it through
> Enterprise Manager if I set it to grow by 10%), so it would grow to only
> 1.1MB after growing by 10%. If the transaction log needed by the
> INSERT statement is over 1.1MB, the database server gives out that
> "Log file is full" message. Growing it by 10MB fixes the problem.
>
> Before I changed it to grow by 10MB, I could not change the initial
> size of the transaction log file of tempdb. It got stuck at 1MB.
> Strangely, after I changed it to grow by 10MB, I can change the
> initial size.
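
The fix described above can be expressed in T-SQL roughly as follows (a sketch; 'templog' is the usual logical name of tempdb's log file, but it should be verified first):

```sql
-- Sketch of the fix described above: change tempdb's log autogrow from
-- 10% to a fixed 10 MB. 'templog' is the usual logical file name of
-- tempdb's log; verify it first with:
--   use tempdb
--   exec sp_helpfile
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILEGROWTH = 10MB)
```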


Note the 10% growth can be a bad thing for another reason. Over time a 100MB
db can expand to, say, 1 Gig... a 10% expansion will then be 100MB, which will
take time (blocking inserts, etc.) AND be more likely to fill up the disk or
fail because it can't allocate the 10%. (Perhaps a better example is 10 Gig on
a 10.5 Gig disk. :-)
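
As a purely illustrative sketch, the compounding effect this reply describes can be simulated by tracking both growth policies side by side:

```sql
-- Illustrative only: compare 10% compounding autogrow with a fixed
-- 10 MB autogrow over 50 growth events, both starting at 10 MB.
DECLARE @pct float, @fixed float, @i int
SELECT @pct = 10, @fixed = 10, @i = 0
WHILE @i < 50
BEGIN
    SET @pct = @pct * 1.1       -- 10% growth compounds geometrically
    SET @fixed = @fixed + 10    -- fixed growth stays linear
    SET @i = @i + 1
END
-- By this point each 10% step is roughly ten times the fixed 10 MB step
SELECT @pct AS pct_growth_mb, @fixed AS fixed_growth_mb
```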

So I tend to always grow my DB's by a fixed amount.

Jul 20 '05 #7

P: n/a

"Erland Sommarskog" <so****@algonet.se> wrote in message
news:Xn**********************@127.0.0.1...
> Jay Chan (ja******@hotmail.com) writes:
> > Moreover, according to the tech support, that "Log file is full"
> > message is not an error message; that is just a warning message. I
> > could have changed my script to ignore that warning message, and I
> > would still be fine. I don't know if this is really true or not. But I
> > will not risk this.
>
> Warning message? It has severity 17, so that is not a warning. Then
> again, SQL Server and error handling.

I believe the order of events is: the log file is full, which generates the
error, and THEN an expansion of the DB. Which means the next insert may be
fine.

Sorta weird, but basically what I've seen.

> In any case, I'm glad to hear that your case was resolved, and it was
> interesting to learn how it was resolved.
>
> --
> Erland Sommarskog, SQL Server MVP, so****@algonet.se
>
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techinf...2000/books.asp

Jul 20 '05 #8

> I believe the order of events is a log file is full, which generates the
> error and THEN an expansion of the DB. Which means the next insert may be
> fine.


If I understand you correctly, this means the current INSERT failed,
and the next INSERT may be OK. That surely sounds like an error to me
(not a warning, as the MS tech support had suggested). In any case, I
don't know whether this is true, because I am not going to re-create
the problem to find out (for lack of curiosity).

Thanks for the explanation on error-vs-warning.

Jay Chan
Jul 20 '05 #9

> When I read this I thought I maybe had learnt something new. Or maybe
> there was some misunderstanding on the way.
>
> In any case, I ran this on my server at home:
>
> select a.*, c.* into #explosion from
> Northwind..Orders a, Northwind..Orders b, pubs..authors c
> where 1 = 0
> go
> exec sp_helpdb tempdb
> go
> insert #explosion
> select a.*, c.* from
> Northwind..Orders a, Northwind..Orders b, pubs..authors c
> go
>
> The initial size of tempdb was 25 MB, with 5120 KB for the log.
> When I killed the process from another window, the log for tempdb
> had grown to over 1 GB, so apparently the log can grow more than
> once during a statement.

I assume the purpose of the test is to see if one INSERT statement can
cause the log file of tempdb to grow more than once.

But your test script involves three separate steps (three different
'go' batches). Each of those steps can cause the log file of tempdb to
expand. You may need to remove those 'go' statements and re-test the
script.

I don't mean to sound like I know this stuff. Actually, I don't really
know. I am just trying to follow the logic of the test.
> My guess why growing 10 MB at a time worked, is that even if you
> started with a small log, you eventually come to the situation when
> 10% was more than the remaining free space on the disk. By growing
> 10 MB at a time, you will not run out of disk, until you run out of
> disk if you understand what I mean.


I think you are right to say that growing by 10MB instead of 10% seems
to be a safe way to grow the log file -- just in case it grows so big
that the disk is full. Thanks.

I need to point out that the problem that caused me to start this
message thread has nothing to do with disk becoming full. I need to
clarify this just in case someone jumps into the middle of this
message thread and misunderstands what this message thread was all
about.

Jay Chan
Jul 20 '05 #10

> You start with 10 MB. Then it grows by 10% to 11 MB. Then it grows by
> 10% to 12.1 MB. After some 53 autogrows, the log file is about 1580 MB.
> If at this stage there is not 158 MB free when the next autogrow sets in,
> the autogrow fails and you get the message. But say that at this point
> the operation was almost done, and all you needed was 20 more MB.
> Then you get through with 10 MB increments.


Thanks for pointing out one more time the benefit of increasing the
log file by 10MB instead of by 10%.

But this doesn't explain why I got that "log file of tempdb is full"
error message, because I had at least 1 GB free while the INSERT
statement should take only 1 to 3 MB (at most). Therefore, using up
all the available disk space was not the cause of my original problem.

Jay Chan
Jul 20 '05 #11
