
Trans log backup based on size rather than time?

aj
SQL Server SP2 9.0.3042 64-bit

I recently put my first SQL Server DB in production. In the "other"
database that I use (not interested in any arguments), you can indicate
the desired size of your transaction logs. When the current log reaches
that size, it gets backed up (or whatever you have configured to
happen). Certain events in the database might cause the logs to get
prematurely "cut" at a particular time, but the logs are for the most
part a consistent size.

In SQL Server, it looks as if the notion of transaction log backup is
based on /time/ rather than log size. When I use Maintenance Plans to
create a log backup plan, it wants to know WHEN to back the log up.

When the I/O in the database is not consistent, this can make the size
of your log backups vary quite a bit. For example, on Sundays, when my
database is fairly quiet, I see /really/ small transaction log backups.

This leads to a few questions:
1. Is it possible to back up SQL Server transaction logs based on size
rather than time?
2. Even if there is, is this a good idea?
3. Is there another method to back up transaction logs other than maint
plans/Agent? Perhaps one that uses size as its determination?

Any help/thoughts appreciated.

cheers
aj
Oct 14 '08 #1
aj (ro****@mcdonalds.com) writes:
>In SQL Server, it looks as if the notion of transaction log backup is
>based on /time/ rather than log size. When I use Maintenance Plans to
>create a log backup plan, it wants to know WHEN to back the log up.
>
>When the I/O in the database is not consistent, this can make the size
>of your log backups vary quite a bit. For example, on Sundays, when my
>database is fairly quiet, I see /really/ small transaction log backups.
Well, it is a good idea to ask yourself *why* you take transaction log
backups. If the only reason is to keep the transaction log down in
size, you should consider simple recovery.
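
For reference, switching to simple recovery is a single statement (the
database name "db" below is just a placeholder):

-- Stop keeping the log for point-in-time recovery; it now truncates on checkpoint.
ALTER DATABASE db SET RECOVERY SIMPLE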

The normal reason to back up the transaction log is to be able to
restore to a point in time in case of a catastrophic failure. It may
be a quiet day in terms of transaction log growth, but 1000 new orders
were inserted and then your log disk goes kaput. If the data disk
fails, you can still back up the transaction log, but if the log disk
fails, you are in trouble. And since it was a quiet day, you find that the
log-size threshold you set up was never reached, and those 1000 orders
are lost.
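
To make the point-in-time part concrete, a restore replays the full backup
and then the log backups up to the chosen moment. A rough sketch, with
made-up file names and time:

RESTORE DATABASE db FROM DISK = 'D:\backup\db_full.bak' WITH NORECOVERY
RESTORE LOG db FROM DISK = 'D:\backup\db_log_1.trn' WITH NORECOVERY
-- Stop replaying just before the failure and bring the database online.
RESTORE LOG db FROM DISK = 'D:\backup\db_log_2.trn'
    WITH STOPAT = '2008-10-12 11:55:00', RECOVERY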

I think the question most businesses ask is: how many minutes of data can
we afford to lose in case of a failure?
>This leads to a few questions:
>1. Is it possible to back up SQL Server transaction logs based on size
>rather than time?
You could set up an Agent job that checks the log size, and goes back to
sleep if the limit has not been exceeded. But that's a more complex
operation than just backing up the transaction log in the first place.
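
If you still want to try it, a rough sketch of what such a job step could
run (the 200 MB threshold, database name and backup path are made up):

-- Back up the log only when the log file has grown past 200 MB.
DECLARE @log_mb int
SELECT @log_mb = SUM(size) / 128          -- size is counted in 8 KB pages
FROM   db.sys.database_files
WHERE  type_desc = 'LOG'

IF @log_mb > 200
   BACKUP LOG db TO DISK = 'D:\backup\db_log.trn'
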
>2. Even if there is, is this a good idea?
No.
>3. Is there another method to back up transaction logs other than maint
>plans/Agent? Perhaps one that uses size as its determination?
Well, you can use Windows Task Scheduler, if you dislike Agent for some
reason. You could also define a startup procedure that performs:

WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:10:00'
    BACKUP LOG db TO ...
END
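
If you went that route, one way to wire it up (the procedure name and
backup path are invented here) would be to put the loop in a procedure in
master and mark it as a startup procedure:

USE master
GO
CREATE PROCEDURE backup_log_loop AS
   -- Take a log backup of db every ten minutes, forever.
   WHILE 1 = 1
   BEGIN
      WAITFOR DELAY '00:10:00'
      BACKUP LOG db TO DISK = 'D:\backup\db_log.trn'
   END
GO
-- Have SQL Server run the procedure automatically at startup.
EXEC sp_procoption 'backup_log_loop', 'startup', 'on'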

Anyway, since log backups should be taken regularly, there is every
reason to schedule them. But you don't need maintenance plans. The
advantage with maintenance plans is that they name the files uniquely,
and delete them after a while. If you were to run BACKUP LOG directly
from an Agent job, you would have to cater for this yourself. Then
again, that is not exactly rocket science.
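
For example, a plain job step could stamp the file name itself; something
along these lines (the path is a placeholder):

-- Build a file name such as db_log_20081014_1230.trn and back up to it.
DECLARE @file nvarchar(260)
SELECT @file = 'D:\backup\db_log_'
             + CONVERT(char(8), GETDATE(), 112) + '_'
             + REPLACE(CONVERT(char(5), GETDATE(), 108), ':', '') + '.trn'
BACKUP LOG db TO DISK = @file
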
>SQL Server SP2 9.0.3042 64-bit
Beware! There is a very serious bug with maintenance plans in that version:
when you schedule DBCC CHECKDB for several databases, it runs against the
same database every time.

Get hold of version 9.0.3073, which is publicly available. It also includes
two security fixes, one of which looks quite serious.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinf...ons/books.mspx

Oct 14 '08 #2
aj
Erland Sommarskog wrote:
>aj (ro****@mcdonalds.com) writes:
>>In SQL Server, it looks as if the notion of transaction log backup is
>>based on /time/ rather than log size. When I use Maintenance Plans to
>>create a log backup plan, it wants to know WHEN to back the log up.
>>
>>When the I/O in the database is not consistent, this can make the size
>>of your log backups vary quite a bit. For example, on Sundays, when my
>>database is fairly quiet, I see /really/ small transaction log backups.

>Well, it is a good idea to ask yourself *why* you take transaction log
>backups. If the only reason is to keep the transaction log down in
>size, you should consider simple recovery.
No - I take log backups for disaster recovery. I live in SW Florida.
Disaster recovery is a /very/ big part of my professional life... :(
>The normal reason to back up the transaction log is to be able to
>restore to a point in time in case of a catastrophic failure. It may
>be a quiet day in terms of transaction log growth, but 1000 new orders
>were inserted and then your log disk goes kaput. If the data disk
>fails, you can still back up the transaction log, but if the log disk
>fails, you are in trouble. And since it was a quiet day, you find that the
>log-size threshold you set up was never reached, and those 1000 orders
>are lost.
>
>I think the question most businesses ask is: how many minutes of data can
>we afford to lose in case of a failure?
Good points. So the way to look at it is: if I take log backups every 60
minutes, and the database crashes 58 minutes into the hour, I will lose
any transactions in those 58 minutes? If I take log backups every 60
minutes, I could (or would?) lose up to 59 minutes of transactions?

Doesn't SQL Server have some form of what my other db calls an active
log (not an archived log), which is rolled forward during subsequent
crash recovery, and would decrease that 58 minutes of lost work to
something less than 58 minutes?
>>This leads to a few questions:
>>1. Is it possible to back up SQL Server transaction logs based on size
>>rather than time?
>
>You could set up an Agent job that checks the log size, and goes back to
>sleep if the limit has not been exceeded. But that's a more complex
>operation than just backing up the transaction log in the first place.
Yes, I actually thought about that myself. It does seem a lot more
complex, though.
>>2. Even if there is, is this a good idea?
>
>No.
>
>>3. Is there another method to back up transaction logs other than maint
>>plans/Agent? Perhaps one that uses size as its determination?
>
>Well, you can use Windows Task Scheduler, if you dislike Agent for some
>reason. You could also define a startup procedure that performs:
>
>WHILE 1 = 1
>BEGIN
>    WAITFOR DELAY '00:10:00'
>    BACKUP LOG db TO ...
>END

>Anyway, since log backups should be taken regularly, there is every
>reason to schedule them. But you don't need maintenance plans. The
>advantage with maintenance plans is that they name the files uniquely,
>and delete them after a while. If you were to run BACKUP LOG directly
>from an Agent job, you would have to cater for this yourself. Then
>again, that is not exactly rocket science.
No - I have no dislike for Agent. Actually, I think it's pretty cool.
I just wanted to make sure I was not missing something.
>>SQL Server SP2 9.0.3042 64-bit

>Beware! There is a very serious bug with maintenance plans in that version:
>when you schedule DBCC CHECKDB for several databases, it runs against the
>same database every time.
Great info. Is the bug only in 64-bit? Is there a KB number?
>Get hold of version 9.0.3073, which is publicly available. It also includes
>two security fixes, one of which looks quite serious.
Thanks.

cheers
aj
Oct 15 '08 #3
On Wed, 15 Oct 2008 09:55:45 -0400, aj <ro****@mcdonalds.com> wrote:
>Good points. So the way to look at it is: if I take log backups every 60
>minutes, and the database crashes 58 minutes into the hour, I will lose
>any transactions in those 58 minutes? If I take log backups every 60
>minutes, I could (or would?) lose up to 59 minutes of transactions?
>
>Doesn't SQL Server have some form of what my other db calls an active
>log (not an archived log), which is rolled forward during subsequent
>crash recovery, and would decrease that 58 minutes of lost work to
>something less than 58 minutes?
If the disk where the transaction log is stored is intact, then as part
of the recovery it is possible to recover the data in the log that was
never backed up. See Books Online for details.
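
This is what SQL Server calls a tail-log backup; in its simplest form (the
file name is a placeholder):

-- NO_TRUNCATE lets the log be backed up even though the data files are gone.
BACKUP LOG db TO DISK = 'D:\backup\db_tail.trn' WITH NO_TRUNCATE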

Roy Harvey
Beacon Falls, CT
Oct 15 '08 #4
aj (ro****@mcdonalds.com) writes:
>Doesn't SQL Server have some form of what my other db calls an active
>log (not an archived log), which is rolled forward during subsequent
>crash recovery, and would decrease that 58 minutes of lost work to
>something less than 58 minutes?
If the disk where the database files reside goes belly up, you can back up
the active portion of the log, no sweat.

But if the disk where the log file resides dies, you are in dire straits if
your log backup is old. Yes, the database file is there, but there is a big
but: you don't know which state it is in. It may include half-baked
transactions and could be corrupt at both the user level and the internal
level.
>>Beware! There is a very serious bug with maintenance plans in that version:
>>when you schedule DBCC CHECKDB for several databases, it runs against the
>>same database every time.
>
>Great info. Is the bug only in 64-bit? Is there a KB number?
As far as I know, the bug is bit-agnostic. The KB article is
http://support.microsoft.com/?kbid=934458.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinf...ons/books.mspx

Oct 15 '08 #5
I always think of my recovery model and backup strategy in terms of
Recovery Time and Recovery Point Objectives (RTO, RPO), i.e. how long
can the business live without this database and how much data can they
afford to lose? This pretty much dictates what you have to do.
Oct 24 '08 #6


