
DB2 logging options

I am a newbie to HADR and DB2 administration (I am a WebSphere guy). My
question relates to DB2 logging. I've read that HADR does not allow
infinite logging (LOGSECOND = -1). I am expecting my database to grow
8-10 GB a month without compression (about 260 MB a day).
What logging options should I specify before I start my database?
If I set archiving using LOGARCHMETH1 to a separate disk, what should
the LOGPRIMARY and LOGSECOND values be?
I would like to have smaller log files, but because of the 256 active
log limit I will have to set the log file size to a high value.
What is the best way of maintaining this system without running out of
log space? What settings have other people here used to live with this?
Thanks.

Jun 16 '07 #1
You only need enough active log space to hold all the activity within
the commit scope of transactions that have not yet committed. Once every
transaction that has written records to a specific log file has
committed, DB2 can archive that particular log file.
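
If you want to see how much of that active log space is actually in use
at any point, DB2 exposes it through the database snapshot (a quick
sketch; MYDB is a placeholder database name):

    db2 get snapshot for database on MYDB | grep -i "log space"
    # shows "Log space available to the database" and "Log space used by
    # the database", which tell you how close the workload is to filling
    # the active logs
    db2pd -db MYDB -logs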

If you set LOGARCHMETH1 to a separate disk (which you need to do for
HADR, since HADR requires archive logging), the archived logs are moved
to the disk directory you specify. These archived logs are kept
indefinitely until you delete them; they can be used in roll-forward
recovery and by HADR when the standby is in catch-up mode. Because you
specified the location of the archive log files with the LOGARCHMETH1
parameter, DB2 knows where to retrieve them.
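
As an illustration only (the database name MYDB, the path /db2/archive,
and the sizes below are placeholders, not recommendations from this
thread), all of these are ordinary database configuration parameters:

    # archive completed log files to a separate disk
    db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archive/
    # size of each log file in 4 KB pages (10240 pages = 40 MB per file)
    db2 update db cfg for MYDB using LOGFILSIZ 10240
    # primary and secondary active log files (their sum must be <= 256)
    db2 update db cfg for MYDB using LOGPRIMARY 20 LOGSECOND 40

Note that switching a database from circular logging to archive logging
puts it into backup pending state, so plan for an offline backup right
after making the change.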

Limiting the amount of active log space is a good way to discourage
very long-running transactions, which should be avoided in most systems.
Long-running transactions typically reduce concurrency (they increase
lock contention) and cause havoc if the transaction fails and needs to
be rolled back by DB2. Instead of one long-running transaction, use
cursor processing with intermediate commits. This is especially
effective in stored procedures.
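
Below is a minimal SQL PL sketch of that pattern; the table, column, and
procedure names are invented for illustration, and the batch size of
1000 is arbitrary:

    -- Hypothetical example: delete old rows a batch at a time so each
    -- unit of work stays small and log files can be archived as you go.
    CREATE PROCEDURE APP.PURGE_AUDIT (IN p_days INTEGER)
    LANGUAGE SQL
    BEGIN
      DECLARE v_done INTEGER DEFAULT 0;
      DECLARE v_rows INTEGER DEFAULT 0;
      DECLARE v_id   BIGINT;
      -- WITH HOLD keeps the cursor open across the intermediate COMMITs
      DECLARE c1 CURSOR WITH HOLD FOR
        SELECT id FROM app.audit_log
        WHERE log_date < CURRENT DATE - p_days DAYS
        FOR UPDATE;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = 1;

      OPEN c1;
      fetch_loop: LOOP
        FETCH c1 INTO v_id;
        IF v_done = 1 THEN LEAVE fetch_loop; END IF;
        DELETE FROM app.audit_log WHERE CURRENT OF c1;
        SET v_rows = v_rows + 1;
        IF MOD(v_rows, 1000) = 0 THEN
          COMMIT;  -- keep each unit of work small
        END IF;
      END LOOP fetch_loop;
      CLOSE c1;
      COMMIT;
    END

Because the body contains semicolons, you would create it from the CLP
with an alternate statement terminator, e.g. db2 -td@ -f purge.sql.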

Generally speaking, a few large log files are more efficient than a
large number of small log files. However, if the system crashes, crash
recovery might take slightly longer with a few large log files. The
difference is usually nothing to worry about in most applications,
unless you absolutely must have the fastest possible crash recovery
time and are willing to sacrifice a little performance while the system
is up and running in order to achieve it.
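
To translate the parameters into disk space: maximum active log space is
roughly LOGFILSIZ (in 4 KB pages) x 4 KB x (LOGPRIMARY + LOGSECOND).
Using the illustrative values from the sketch above (not a
recommendation):

    LOGFILSIZ  = 10240 pages        -> 10240 x 4 KB = 40 MB per log file
    LOGPRIMARY = 20, LOGSECOND = 40 -> up to 60 active log files
    maximum active log space        = 60 x 40 MB = 2400 MB (about 2.4 GB)

At the growth rate mentioned in the question (roughly 260 MB of new data
per day), the limiting factor is not the daily volume but the size of
the single largest unit of work, since that is what has to fit in the
active logs.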

Jun 17 '07 #2

That's great information.
To complete the jigsaw in my head: does the count of logs in the archive
count against the total log number? I.e., when my LOGPRIMARY is 40 and
LOGSECOND is 200 and I have used 50, that means I have 190 available.
If one more log is archived, do I then have 191 available, or am I still
on 190?
If I am still on 190, that means the count is always of the number of
logs created, regardless of whether they are active or archived, right?
If not, then DB2 should always give me available logs.

One more thing: after backing up, I can check the last required log
file and then delete all the previous logs I don't require. Is this
better done by deleting them manually or by using the PRUNE command?

As I already said, I'm green at this.

Thanks,


Jun 17 '07 #3
The LOGPRIMARY and LOGSECOND counts refer only to active logs. Once a
log file has been archived by DB2, it does not count against that total,
and the archived logs are kept until you delete them. Since the archived
log files must be deleted manually, you effectively have infinite
logging; however, all the updates in a single transaction (a unit of
work within a single commit scope) must fit inside the active logs.

When log retention is in effect (either by setting LOGRETAIN explicitly,
by specifying a log archive exit program, or by specifying LOGARCHMETH1),
the log file numbers do not get reset, but that has no bearing on how
many primary and secondary active logs DB2 can allocate. If circular
logging is used, the same file numbers are reused.

If you have LOGARCHMETH1 set, you can delete logs from the archive
directory you specify. Most people keep a reasonable number of log files
in the archive, at least as many as might be needed by a RESTORE
DATABASE command and the subsequent ROLLFORWARD. So the number of days
of archive logs to keep depends on how far back you might need to go in
your backups to do a restore. If you use HADR, you also need to make
sure that your standby has processed all the logs before you delete them
(this would only be a factor if your standby was down for an extended
period and needed to catch up using some of the archive logs). One other
consideration with HADR is that logs do not get archived on the standby
database, which means you need to delete or archive them manually there.
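
One way to work out which archive logs you still need before deleting
anything (a sketch; MYDB is a placeholder and the exact field labels can
vary between DB2 versions):

    # each backup record shows an "Earliest Log" field; archive logs older
    # than the earliest log needed by the oldest backup you intend to keep
    # are candidates for deletion from the archive directory
    db2 list history backup all for MYDB

    # the database configuration reports the first active log file;
    # anything the database still considers active must never be touched
    db2 get db cfg for MYDB | grep -i "first active log"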

Jun 17 '07 #4

Thanks,
This is brilliant info. Jigsaw completed.

Jun 20 '07 #5

