Problem with Modifying MQT

Hi,
We are using DB2 v8.1 on Linux64 for a DW application. We have a
table with around 800 million records in it. For performance purposes we
created two or three MQTs (summary tables) on the table with the REFRESH
IMMEDIATE option. When developers come up with changes to an MQT (which
happens about once a month), we drop the MQT and recreate it. When the
table was small we never had a problem, but later on we started getting
the "transaction log full" error.

As a workaround, we drop all the MQTs, rename the table to another name,
create the table again, create the MQTs, and then insert the 800 million
records back into it, which takes around 3-4 hours. Is there any other,
simpler way to solve this problem?

Regards,
Koushik S

Aug 1 '06 #1
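For readers who haven't worked with MQTs: a REFRESH IMMEDIATE summary table of the kind described above looks roughly like the sketch below. The table and column names (sales, sales_summary, store_id, amount) are made up for illustration; the COUNT(*) column is included because DB2 requires it for immediately refreshed aggregate MQTs.

-- Hypothetical REFRESH IMMEDIATE MQT over a large fact table
CREATE TABLE sales_summary AS
  (SELECT store_id,
          COUNT(*)    AS txn_count,
          SUM(amount) AS total_amount
   FROM sales
   GROUP BY store_id)
  DATA INITIALLY DEFERRED REFRESH IMMEDIATE;

-- Populate it once; from then on DB2 maintains it on every insert/update/delete
REFRESH TABLE sales_summary;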
Assuming that you're using circular logging, did you try increasing the
log space?

Phil Sherman
Aug 1 '06 #2
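For anyone following along, the log-related settings Phil is referring to can be inspected and changed with the database configuration commands below (dwdb is a placeholder database name, and the value shown is only an example):

db2 get db cfg for dwdb | grep -iE "logfilsiz|logprimary|logsecond"
db2 update db cfg for dwdb using LOGSECOND 60   # example value only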
Hi Phil,
Yes, we are using circular logging.
We already increased the log file size; it is now 30,000 pages. Increasing
it beyond that resulted in slower performance of the ETL process that
loads the DW.

  Log Retain      No
  Log primary     20
  Log secondary   30

This combination worked well for us, so we retained it. Is there anything
I need to change here?

Regards,
Koushik S
Aug 1 '06 #3
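A rough calculation (DB2 log pages are 4 KB) shows the ceiling this configuration imposes:

  30,000 pages x 4 KB          = ~117 MB per log file
  20 primary + 30 secondary    = 50 log files
  50 x ~117 MB                 = ~5.7 GB of total circular log space

Any single unit of work, such as the REFRESH that populates a recreated MQT from 800 million rows, has to fit within that limit, which is presumably why the rebuild fills the log.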
It didn't quite work - your update job failed!

I'd try upping the "log secondary" parameter to at least 80 (doubling
the total log space) and verify that you have the disk space to support
it. After the job completes, do a "GET SNAPSHOT FOR DATABASE ON db_name"
and look at the "Maximum secondary log space used" line to determine how
to properly set the parameter.

Phil Sherman

Aug 1 '06 #4
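In concrete terms, and again using dwdb as a placeholder database name, that advice translates to something like:

db2 update db cfg for dwdb using LOGSECOND 80
# ... run the MQT rebuild job, then check actual usage:
db2 get snapshot for database on dwdb | grep -i "log space used"

The snapshot's "Maximum secondary log space used" value then shows how much of the extra headroom was actually needed.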
Ian
You can also do:

db2 +c "alter table mqt activate not logged initially"
db2 "refresh table mqt"
db2 commit

Aug 4 '06 #5
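One caveat with the NOT LOGGED INITIALLY approach: while it is active, changes to the MQT are not logged, so if the unit of work fails or is rolled back the table is left in an unusable state and generally has to be dropped and recreated. For an MQT that is usually an acceptable risk, since the table can simply be rebuilt with another REFRESH.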

