
DB2 cutting logs frequently

I have a Siebel CRM application that's cutting and archiving logs every
minute. Here is the db cfg:

Log buffer size (4KB) (LOGBUFSZ) = 512
Log file size (4KB) (LOGFILSIZ) = 15000
Number of primary log files (LOGPRIMARY) = 30
Number of secondary log files (LOGSECOND) = 40
Group commit count (MINCOMMIT) = 1

How can I reduce the number of logs being cut? I don't understand
what's driving DB2 to cut logs every minute.

Roger

Nov 13 '07 #1


Hi Roger,

The workload generally drives the amount of data that is logged, and in
this case it's about 60 MB (15000 4 KB pages) per minute. Does that seem
unreasonable given what's happening on the system? Is this actually a
problem for you, or are you just wondering what's happening? You could
consider increasing LOGFILSIZ if that would help (but depending on how
you're using your archived logs -- are they for DR purposes? -- you'll
have to take the longer time between archives into consideration).
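
For example, to double the log file size (just a sketch -- SIEBDB is a
placeholder for your database name, and LOGFILSIZ isn't dynamic, so the
new value only takes effect the next time the database is deactivated
and reactivated):

  db2 update db cfg for SIEBDB using LOGFILSIZ 30000
  db2 get db cfg for SIEBDB

The second command just lets you confirm the configured value.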

Another possibility is that the application is connecting, doing some
work, disconnecting (leaving nobody else connected), and then
repeating the process. In this case the current log will be truncated
and it will be archived when the database starts up next. If this is
in fact the problem then you could consider keeping the database
activated (using ACTIVATE DATABASE). I don't know Siebel's behavior
but I would guess that this probably isn't the problem. Are the logs
being archived all 60MB in size, or do they generally look like
they've been truncated?
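
If you do want to try keeping it activated, it's simply (SIEBDB again
being a placeholder for your database name):

  db2 activate database SIEBDB

and db2 deactivate database SIEBDB when you want to let it come down.
While explicitly activated, the database stays up across the last
disconnect, so the active log isn't truncated on every
connect/disconnect cycle.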

Regards,
Kelly Schlamb

Nov 13 '07 #2

Is noting when logs are cut, along with the LOGFILSIZ, the best way
to measure general workload on a given database through time? Is there
a better approach?
I think this would measure only WRITE workload, and not take into
account READ workload. Is there something that measures both?

TIA

aj

Nov 14 '07 #3

Kelly, the database is activated when it comes up. The archived logs
are all 60 MB, so they're not being truncated. Given that it's a Siebel
CRM application, it's hard to believe it's logging this much activity.
It's sometimes cutting 4 logs in a minute.
I was concerned about it because the database server is somewhat I/O
bound.

Roger

Nov 14 '07 #4

On Nov 14, 9:28 am, aj <ron...@mcdonalds.com> wrote:
> I think this would measure only WRITE workload, and not take into
> account READ workload. Is there something that measures both?
Correct. When I mentioned workload, what I really meant to say was
"write workload". There are various snapshot monitor elements that you
can look at to get an idea of the amount of general work being done
over time (while the database remains active). For instance, you can
get things like the number of rows read/inserted/updated/deleted, the
number of update/insert/delete statements executed, the number of
select statements, the number of commits/rollbacks, the number of DDL
statements, log pages written, etc. You should be able to get at this
information via SQL, so you could store the results in tables for
historical purposes.
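
For example, something along these lines (a sketch only -- this
assumes the SYSIBMADM administrative views available in DB2 9, and the
column names are worth double-checking against your release):

  SELECT ROWS_READ, ROWS_INSERTED, ROWS_UPDATED, ROWS_DELETED,
         COMMIT_SQL_STMTS, ROLLBACK_SQL_STMTS, DDL_SQL_STMTS,
         SELECT_SQL_STMTS, LOG_WRITES
  FROM SYSIBMADM.SNAPDB

Run that from a scheduled job, INSERT the results into your own
history table along with CURRENT TIMESTAMP, and you have a rough
workload history.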

Kelly

Nov 14 '07 #5

Hi Roger,

As I just pointed out in another post in this thread, there are ways
for you to measure the number of various types of statements that get
executed and how much work is being done. Also, you can see what the
rate of logging is by looking at some of the logging-related fields in
a database snapshot (such as log space used, log pages written, etc.).
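
From the CLP that's just (SIEBDB being a placeholder for your database
name):

  db2 get snapshot for database on SIEBDB

and the output should include lines along the lines of "Log space used
by the database" and "Log pages written".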

If you're really interested in knowing exactly what is being logged,
you can write an application using the db2ReadLog() API. There are
also applications you can purchase that will examine your logs (such as
IBM's Recovery Expert, and some audit tools from various vendors).

Kelly

Nov 14 '07 #6
