Bytes IT Community

Log File is Full error: How do I prevent this from happening?

"The log file for database is full. Back up the transaction log for
the database to free up some log space."

Now the only way I know to deal with this manually is:

Step 1: In Options, change the Recovery model from FULL to Simple.
Step 2: Go to Tasks and manually shrink the log file.
Step 3: Change the Recovery model back from Simple to FULL.
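(For reference, the manual steps above correspond roughly to this T-SQL. This is a sketch: MyDB and the logical log-file name MyDB_log are placeholders for your actual names.)

```sql
-- Step 1: switch to simple recovery
ALTER DATABASE MyDB SET RECOVERY SIMPLE;

-- Step 2: shrink the log file (argument is the logical file name; target size in MB)
DBCC SHRINKFILE (MyDB_log, 1);

-- Step 3: switch back to full recovery
ALTER DATABASE MyDB SET RECOVERY FULL;
```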

But this way, I could get the same problem again: the log file fills
up and needs to be freed.

Could you give me an idea how to prevent this from happening? What
should I do?

Thanks a lot in advance for your help.

Feb 9 '07 #1
3 Replies


(da*****@gmail.com) writes:
> "The log file for database is full. Back up the transaction log for
> the database to free up some log space."
>
> Now the only way I know to deal with this manually is:
>
> Step 1: In Options, change the Recovery model from FULL to Simple.
> Step 2: Go to Tasks and manually shrink the log file.
> Step 3: Change the Recovery model back from Simple to FULL.
>
> But this way, I could get the same problem again: the log file fills
> up and needs to be freed.
>
> Could you give me an idea how to prevent this from happening? What
> should I do?
First: decide what recovery mechanism you need. Second: implement it.

Do you need up-to-the-point recovery? That is, if the database goes
belly-up, or someone makes a fatal mistake, are you content with
restoring the latest backup? Or do you want recovery as close to
the accident as possible? If you are content with the latest backup,
go for simple recovery, and all you need to make sure is that you back
up the database regularly.
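A minimal sketch of the simple-recovery regimen, with placeholder
database and path names:

```sql
-- Once: switch the database to simple recovery
ALTER DATABASE MyDB SET RECOVERY SIMPLE;

-- On a regular schedule (e.g. nightly): take a full database backup
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak';
```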

If you want up-to-the-point recovery, then you need FULL recovery mode.
You also need to back up your transaction log regularly. How often
depends on the transaction rate in your database. If you never back up
the transaction log, it will eventually outgrow the disk. Note also that
by switching to simple recovery, you break the log chain, and with it
the ability to do up-to-the-point recovery.

Also with full recovery, you need to backup your database regularly.
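A sketch of that regimen, again with placeholder names; the log backup
is the command you would put on a schedule:

```sql
-- Once: make sure the database is in full recovery
ALTER DATABASE MyDB SET RECOVERY FULL;

-- Regularly: a full database backup (also starts the log chain)
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak';

-- Frequently, on a schedule: back up the transaction log,
-- which frees log space for reuse
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';
```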
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 9 '07 #2

GZ
danc...
This can also be an application issue. I often encounter situations
where developers do not batch their transactions, and thus do not
give the log a chance to truncate. If this is an in-house app, updates
and deletes should be done in batches. The size of the batch should be
set by the ideal size of your transaction log, the size of the db,
etc. Also, be sure to dump the log periodically; typically I dump the
log every 15 minutes (when the db is in full recovery mode). This will
also assist in controlling the size of the log.
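One way to sketch the batching idea for a large update (table, column,
and batch size are all placeholders, and the batch size should be tuned
as described above):

```sql
-- Update in batches so each transaction stays small and
-- log space can be reused between batches.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (100000) dbo.BigTable
    SET Flag = 1
    WHERE Flag = 0;
    SET @rows = @@ROWCOUNT;   -- 0 when no qualifying rows remain
END;
```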
Cheers,
G
Feb 11 '07 #3

GZ (ge**************@gmail.com) writes:
> This can also be an application issue. I often encounter situations
> where developers do not batch their transactions, and thus do not
> give the log a chance to truncate. If this is an in-house app, updates
> and deletes should be done in batches. The size of the batch should be
> set by the ideal size of your transaction log, the size of the db,
> etc. Also, be sure to dump the log periodically; typically I dump the
> log every 15 minutes (when the db is in full recovery mode). This will
> also assist in controlling the size of the log.
I assume that what you are thinking of is the case where you update,
insert or delete millions of rows in one go, and that in such a case it
may be better to break up the operation into batches of, say, 500000
rows. Yes, this can be a good idea. But it has to be implemented in such
a way that if the operation fails half-way, the database is still
consistent.
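A sketch of a batched delete along those lines, with placeholder names.
Each batch is its own transaction, so if the operation fails half-way,
all completed batches are committed and none is left half-done. (Whether
the *partially deleted* state is acceptable to your application still
has to be judged case by case, as the point above implies.)

```sql
-- Delete in batches, one transaction per batch.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;
    DELETE TOP (500000) FROM dbo.BigTable
    WHERE Archived = 1;
    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END;
```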
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pro...ads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinf...ons/books.mspx
Feb 11 '07 #4

This discussion thread is closed; replies have been disabled.