Bytes IT Community

DB2 logging options

I am a newbie to HADR and DB2 administration (I am a WebSphere guy). My question relates to DB2 logging. I've read that HADR does not allow infinite logging (LOGSECOND = -1). I am expecting my database to grow 8-10 GB a month without compression (about 260 MB a day).
What logging options should I specify before I start my database? If I set archiving using LOGARCHMETH1 to a separate disk, what should the LOGPRIMARY and LOGSECOND values be?
I would like to have smaller log files, but because of the 256-log limitation I will have to set this to a high value.
What is the best way of maintaining this system without running out of log space? What settings have other people here made to live with this?
Thanks.

Jun 16 '07 #1
4 Replies


On Jun 16, 11:56 am, lfhenry <lfhe...@gmail.com> wrote: [quoted text snipped]
You only need enough active log space to contain all the activity within the commit scope of transactions that have not yet committed. Once every transaction with records in a given log file has committed, DB2 can archive that particular log file.

If you set LOGARCHMETH1 to a separate disk (which you need to do for HADR), then the archive logs are moved to the disk directory you specify. These archived logs are kept indefinitely until you delete them; they can be used in roll-forward recovery, and by HADR when the standby is in catch-up mode. Since you specified the location of the archive log files with the LOGARCHMETH1 parameter, DB2 knows where to retrieve them.
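As a rough sketch of what that setup looks like from the CLP (the database name SAMPLE, the paths, and the sizing values are assumptions, not recommendations):

```shell
# Point LOGARCHMETH1 at a directory on a separate disk and size the
# active logs. LOGFILSIZ is expressed in 4 KB pages.
db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archlogs/
db2 update db cfg for SAMPLE using LOGFILSIZ 10240    # ~40 MB per log file
db2 update db cfg for SAMPLE using LOGPRIMARY 20 LOGSECOND 100
# Switching from circular to archive logging puts the database in
# backup-pending state, so take a backup before applications reconnect:
db2 backup db SAMPLE to /db2/backups
```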

Limiting the amount of active log space is a good way to discourage very long-running transactions, which are a bad idea in most systems. Long-running transactions typically reduce concurrency (increase lock contention) and cause havoc if the transaction fails and needs to be rolled back by DB2. Instead of long-running transactions, use CURSOR processing with intermediate commits. This is especially effective with stored procedures.

Generally speaking, a small number of large log files is more efficient than a large number of small ones. However, if the system crashes, crash recovery might take slightly longer with a few large log files. The difference is usually nothing to worry about in most applications, unless you absolutely must have the fastest possible crash recovery time and are willing to sacrifice a little runtime performance to get it.
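To put numbers on the trade-off, the active log ceiling is simply LOGFILSIZ (in 4 KB pages) times the number of primary plus secondary logs. A quick sizing sketch, with values that are purely illustrative:

```shell
# Illustrative active-log sizing arithmetic; all values are assumptions.
LOGFILSIZ=10240       # log file size in 4 KB pages (40 MB per file)
LOGPRIMARY=40
LOGSECOND=200         # LOGPRIMARY + LOGSECOND must stay <= 256
FILE_MB=$(( LOGFILSIZ * 4 / 1024 ))                 # MB per log file
TOTAL_MB=$(( FILE_MB * (LOGPRIMARY + LOGSECOND) ))  # total active log space
echo "per file: ${FILE_MB} MB, active log ceiling: ${TOTAL_MB} MB"
```

With these example values, each file is 40 MB and the largest single unit of work must fit inside roughly 9.6 GB of active log space.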

Jun 17 '07 #2

On Jun 17, 7:49 am, Mark A <m00...@yahoo.com> wrote: [quoted text snipped]

That's great information.
To complete the jigsaw in my head: do the logs in the archive count against the total log number? I.e., when my LOGPRIMARY is 40 and my LOGSECOND is 200 and I have used 50, that means I have 190 available. If one more log is archived, do I then have 191 available, or am I still at 190?
If I am still at 190, that means the count is always based on the number of logs created, regardless of whether they are active or archived, right? If not, then DB2 should always give me available logs.

One more thing: after backing up, I can check the last required log file and then delete all the previous logs I don't require. Is this better done by deleting manually or by using the PRUNE command?

As I said, I'm green at this.

Thanks,


Jun 17 '07 #3

On Jun 17, 7:09 am, lfhenry <lfhe...@gmail.com> wrote: [quoted text snipped]
The number of PRIMARY and SECONDARY logs refers only to active logs. Once a log file is archived by DB2, it does not count against the total, and the archive logs are kept until you delete them. Since the archive log files must be deleted manually, you effectively have infinite logging; however, all the updates in a single transaction (a unit of work within one commit scope) must fit inside the active logs.

When LOGRETAIN is in effect (either explicitly, via a log archive exit program, or by specifying LOGARCHMETH1), the log file numbers do not get reset, but that has no bearing on how many files DB2 can allocate as PRIMARY and SECONDARY active logs. If circular logging is used, the same file numbers get reused.

If you have LOGARCHMETH1 set, you can delete logs from the archive directory you specify. Most people keep a reasonable number of log files in the archives, at least as many as might be needed by a RESTORE DATABASE command and a subsequent ROLLFORWARD. So the number of days of archive logs to keep depends on how far back you might need to go in your backups to do a restore. If you use HADR, you need to make sure that your standby has processed all the logs before you delete them (this would only be a factor if your standby was down for an extended period and needed to catch up using some of the archive logs). One other consideration with HADR is that logs do not get archived on the standby database, which means you need to delete or archive them manually.
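One way to approach the cleanup question from the earlier post might look like this (the database name SAMPLE, the log file name, and the timestamp are assumptions for illustration):

```shell
# Find the oldest log the database still needs, then prune older ones.
db2 connect to SAMPLE
db2 get db cfg for SAMPLE | grep -i "First active log file"
# Suppose (hypothetically) the output names S0000051.LOG; earlier logs
# in the active log path can then be pruned:
db2 prune logfile prior to S0000051.LOG
# PRUNE LOGFILE only touches the active log path. For the archive
# directory, either delete files by hand or prune the recovery history
# with AND DELETE, which also removes the associated archived logs:
db2 prune history 20070601 and delete
```

Whichever route you take, verify against your oldest restorable backup (and your HADR standby) before deleting anything.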

Jun 17 '07 #4

On Jun 17, 4:19 pm, Mark A <m00...@yahoo.com> wrote: [quoted text snipped]

Thanks, this is brilliant info. Jigsaw completed.

Jun 20 '07 #5

This discussion thread is closed; replies have been disabled.