
Transaction log full

Hi,
I have a problem while loading a table into a DB2 database. It says the
transaction log is full and that I should do more commits in between. Is this
handled automatically by DB2, or can I fix it on the DB2 server?

Thanks in advance,

Katta
Nov 12 '05 #1


"Balaji" <ba******@sas.com> wrote in message
news:d6**********@license1.unx.sas.com...
[...]

Use the import syntax, and set the COMMITCOUNT parm to 1000.

You can also increase the number of primary/secondary logs, and increase the
size of each log file. The defaults are rather small. These are set at the
database level (db2 update db cfg ...) or you can use the Control Center to
change them.
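
For example, a minimal sketch (MYDB, data.del and myschema.mytable are
placeholders for your own names):

    db2 connect to MYDB
    db2 "import from data.del of del commitcount 1000 insert into myschema.mytable"

With COMMITCOUNT 1000, IMPORT commits every 1000 rows, so the log never
has to hold the whole load as a single unit of work.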
Nov 12 '05 #2

Hi,
Use LOAD rather than IMPORT; LOAD does minimal (or no) logging and is
also faster.
Check the docs before proceeding.
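
A rough sketch (names are placeholders; check the LOAD documentation for
your DB2 version first):

    db2 "load from data.del of del insert into myschema.mytable nonrecoverable"

NONRECOVERABLE avoids leaving the tablespace in backup-pending state, at
the cost of not being able to roll the load forward from the logs.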

Cheers,
Thiru
WantedToBeDBA.

Nov 12 '05 #3

I have tried that, but it did not solve my problem. The thing is, it has
to be adjusted at the server. Can you help me with the exact parameters
to change using the Control Center?
thank you,
katta
"Mark A" <no****@nowhere.com> wrote in message
news:zb********************@comcast.com...
"Balaji" <ba******@sas.com> wrote in message
news:d6**********@license1.unx.sas.com...
Hi,
I got a problem while loading a table on DB2 database. It is saying that
transation log is full and do more commits in between. Is it done by a DB2 or can I fix this at BD2 server?

Thanks in advance,

Katta
Use the import syntax, and set the COMMITCOUNT parm to 1000.

You can also increase the number of primary/secondary logs, and increase

the size of each log file. The defaults are rather small. These are set at the
database level (db2 update db cfg ...) or you can use the Control Center to change them.

Nov 12 '05 #4

RdR
Hi,

Are you doing a DELETE before loading? If so, that will add to the log
usage, because the DELETE may run in a single commit, and if the DELETE
is huge that becomes a big factor. To empty the table, you can instead
LOAD ... REPLACE with an empty input, which prevents the DELETEs from
being logged. Of course, everything should be backed up first, just in
case of disaster along the way.
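
On Unix, for example, the usual form of this trick looks like the
following (the table name is a placeholder):

    db2 "load from /dev/null of del replace into myschema.mytable nonrecoverable"

This replaces the table's contents with an empty input, emptying it
without logging each deleted row.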

Hope this helps,

RdR
"Balaji" <ba******@sas.com> wrote in message
news:d6**********@license1.unx.sas.com...
i have tried its not solved my problem. the thing is it has to adjusted at
server. so can u help me what r the exact parameters to be changed by using control center.
thank you,
katta
"Mark A" <no****@nowhere.com> wrote in message
news:zb********************@comcast.com...
"Balaji" <ba******@sas.com> wrote in message
news:d6**********@license1.unx.sas.com...
Hi,
I got a problem while loading a table on DB2 database. It is saying that transation log is full and do more commits in between. Is it done by a DB2 or can I fix this at BD2 server?

Thanks in advance,

Katta

Use the import syntax, and set the COMMITCOUNT parm to 1000.

You can also increase the number of primary/secondary logs, and increase

the
size of each log file. The defaults are rather small. These are set at the database level (db2 update db cfg ...) or you can use the Control Center

to
change them.


Nov 12 '05 #5

To be clear, the following idea is not good practice and may lead to
loss of data. Try it at your own risk.

Disable logging for the particular table, then perform the load or
delete or whatever is needed.

Make sure to QUIESCE the table/tablespace first, so that no user will be
allowed to access it until UNQUIESCE is issued.
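
A sketch of that sequence (names are placeholders; note that ACTIVATE
NOT LOGGED INITIALLY requires the table to have been created with the
NOT LOGGED INITIALLY attribute, and a failure in that unit of work can
leave the table unusable):

    db2 quiesce tablespaces for table myschema.mytable exclusive
    db2 +c "alter table myschema.mytable activate not logged initially"
    db2 +c "delete from myschema.mytable"
    db2 commit
    db2 quiesce tablespaces for table myschema.mytable reset

The +c option turns off auto-commit, so the delete runs in the same unit
of work in which logging was deactivated; the "unquiesce" is the
QUIESCE ... RESET at the end.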

Thiru
WantedToBeDBA.

Nov 12 '05 #6

I missed some information in my previous post. If you really want to
change the configuration, then increase LOGPRIMARY and LOGSECOND (the
secondary logs) in the database configuration.

Use the following command:
update db cfg for <db name> using <parameter name> <value>
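
For example (MYDB and the values are placeholders; LOGFILSIZ sets the
size of each log file in 4 KB pages, and these changes only take effect
after all applications disconnect and the database is reactivated):

    db2 update db cfg for MYDB using LOGPRIMARY 20
    db2 update db cfg for MYDB using LOGSECOND 40
    db2 update db cfg for MYDB using LOGFILSIZ 10000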

Cheers,
Thiru.
WantedToBeDBA.

Nov 12 '05 #7

RdR
Hi Thiru,

When you are doing the LOAD, the SQL inserts, updates and deletes are
not logged anyway, so a recovery from the logs will not reflect the
inserts; you will not be able to recover from the logs even if you want
to. I mentioned backing up before the LOAD ... REPLACE action; yes,
there are risks, but that backup operation should be enough to cover
them. And in reality, if it is the DELETE that is filling the logs and
you are deleting at least a million rows, you will need a lot of log
volume, not to mention the amount of time you have to wait.
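
For the backup itself, something along these lines should do (the
database name and path are placeholders; a tablespace-level backup of
just the affected tablespace is also an option):

    db2 backup db MYDB to /backups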

RdR
"Thiru" <Wa***********@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegr oups.com...
I insist, the following idea is not good.This may lead to loss of data.
Try at ur own risk.

Disable logging for the particular table and perform load operation or
delete or what ever.

Make sure to QUIESCE the table/tablespace. So that so user will not be
allowed to access until UNQUIESCE is issued.

Thiru
WantedToBeDBA.

Nov 12 '05 #9

IPL
Based on our experience, cleaning up the table by issuing a DELETE FROM
and then a load writes a lot of log information. The DELETE FROM command
deletes everything in one commit. If we run out of logs, the whole
DELETE is rolled back, which causes additional waiting, and then it has
to start from the beginning again. We tried adding log space, failed,
and continued in a cycle of adding logs, running the delete, running out
of log space, waiting for the rollback of the deletes to finish, then
starting again. Eventually we decided we needed to be up and running, so
we backed up the table being loaded, did a LOAD with the REPLACE option
using a dummy input file with no data in it, and once the table was
empty we could do either another load or an import, and did not hit the
log-full issue.
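
In command form, the sequence looked roughly like this (all names are
placeholders, the export stands in for whatever table-level backup is
used, and empty.del is a zero-byte file):

    db2 "export to t1_backup.ixf of ixf select * from myschema.t1"
    db2 "load from empty.del of del replace into myschema.t1"
    db2 "import from newdata.del of del commitcount 1000 insert into myschema.t1"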

It is not a dangerous procedure, because if it fails, we have a backup
we can restore.

I hope I did not add to the confusion.

IPL

"RdR" <ro*@delrosario.ca> wrote in message
news:xZ********************@rogers.com...
Hi Thiru,

When you are doing the LOAD you are not logging the SQL inserts, updates
and
deletes anyways, so a recovery from the logs will not reflect the INSERTS,
so you will not be able to recover even if you want to. I mentioned
backing
up before the LOAD REPLACE action, yes there are risks but this backup
operation should be enough to cover the risks. And in reality, if it is
the
DELETE that is causing the logs to be full and you are DELETEing at least
a
million rows, you will need a lot of log volumes not to mention the amount
of time you have to wait.

RdR
"Thiru" <Wa***********@gmail.com> wrote in message
news:11**********************@g44g2000cwa.googlegr oups.com...
I insist, the following idea is not good.This may lead to loss of data.
Try at ur own risk.

Disable logging for the particular table and perform load operation or
delete or what ever.

Make sure to QUIESCE the table/tablespace. So that so user will not be
allowed to access until UNQUIESCE is issued.

Thiru
WantedToBeDBA.


Nov 12 '05 #10
