Bytes IT Community

V9 Compression & Log Use

I have been testing compression for update operations. Can anyone
tell me why I need more log space for an update of a compressed table
than I do for the same table uncompressed?

I tried an update on the same number of rows against two copies of a
table, one compressed and one not. The compressed unit of work (UOW)
exceeds my log allocation while the non-compressed one does not.

Thanks
Dec 5 '07 #1
6 Replies


mike_dba wrote:
> I have been testing compression for update operations. Can anyone
> tell me why I need more log space for an update of a compressed table
> than I do for the same table uncompressed?
>
> I tried an update on the same number of rows against two copies of a
> table, one compressed and one not. The compressed unit of work (UOW)
> exceeds my log allocation while the non-compressed one does not.

Huh? That's odd. The log records remain compressed. Simply speaking,
you should see a similar compression ratio for the logs as for the
table.

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #2

On Dec 5, 11:26 am, Serge Rielau <srie...@ca.ibm.com> wrote:
> Huh? That's odd. The log records remain compressed. Simply speaking,
> you should see a similar compression ratio for the logs as for the
> table.
I was thinking that maybe the updated column compresses extremely
well (the table went from 22 GB to 9 GB), and that the update
introduced a value with no entry in the dictionary, so this caused
some trickle-down effect and additional logging. But the fact that it
fit into my logs for the non-compressed data is puzzling.

I am updating a 10-byte column in a 559-byte-wide row. There are 72
million rows to update. The log started at 8 GB and was increased to
12 GB, and the update still wouldn't fit.

Dec 5 '07 #3
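For scale, a rough back-of-envelope using the numbers in this thread. The per-record log overhead is an assumed value, and whether only the changed bytes or full before/after row images get logged depends on the DB2 version and the shape of the update; this only shows why the two regimes differ so dramatically:

```python
# Back-of-envelope log sizing for the numbers in this thread.
ROWS = 72_000_000       # rows updated
ROW_BYTES = 559         # full row width
COL_BYTES = 10          # width of the updated column
LOG_HDR = 50            # ASSUMED per-log-record overhead, in bytes

# If only the changed column is logged (before + after images):
partial = ROWS * (2 * COL_BYTES + LOG_HDR)
# If full before + after row images are logged (e.g. whole-page work):
full = ROWS * (2 * ROW_BYTES + LOG_HDR)

GIB = 1024 ** 3
print(f"changed-column logging: {partial / GIB:.1f} GiB")  # ~4.7 GiB, fits an 8 GiB log
print(f"full-row logging:       {full / GIB:.1f} GiB")     # ~78 GiB, far beyond 12 GiB
```

The first regime fits the 8 GB log comfortably; the second cannot fit even after growing the log to 12 GB, which matches the symptom described above.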

mike_dba wrote:
> I was thinking that maybe the updated column compresses extremely
> well (the table went from 22 GB to 9 GB), and that the update
> introduced a value with no entry in the dictionary, causing
> additional logging. But the fact that it fit into my logs for the
> non-compressed data is puzzling.
>
> I am updating a 10-byte column in a 559-byte-wide row. There are 72
> million rows to update. The log started at 8 GB and was increased to
> 12 GB, and the update still wouldn't fit.
I do have a thought. Could it be that you do have the dictionary, but
the data is NOT compressed? Then, when you do the UPDATE, DB2
compresses the page in question, which of course requires logging the
whole page. So what you are logging is not the update but the
piecemeal compression of the table.

Make sure you run REORG after building the dictionary.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #4
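Serge's hypothesis (dictionary present but rows not yet compressed) can be checked from the catalog. A sketch, assuming a DB2 9.x `SYSCAT.TABLES` with the `PCTROWSCOMPRESSED` and `PCTPAGESSAVED` columns populated by RUNSTATS; the schema/table names follow the thread:

```sql
-- Sketch: verify rows are actually compressed, not just eligible.
SELECT TABNAME,
       COMPRESSION,        -- 'R' = row compression enabled
       PCTROWSCOMPRESSED,  -- near 0 would confirm the hypothesis
       PCTPAGESSAVED
FROM   SYSCAT.TABLES
WHERE  TABSCHEMA = 'SCHEMA'
  AND  TABNAME LIKE 'FACT_TBL%';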

On Dec 5, 1:24 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
> Could it be that you do have the dictionary, but the data is NOT
> compressed? Then, when you do the UPDATE, DB2 compresses the page in
> question, which of course requires logging the whole page.
>
> Make sure you run REORG after building the dictionary.
I just double checked:

alter table schema.fact_tbl_comp COMPRESS YES;
reorg table schema.fact_tbl_comp resetdictionary;
runstats ...
update ...  -- fails with SQL0964 (transaction log full)

Dec 5 '07 #5
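Regardless of the compression question, updating 72 million rows in a single unit of work is prone to SQL0964. A common workaround is to commit in slices so each UOW's log use stays bounded; this is only a sketch, and the column name `row_id` and the range predicate are hypothetical stand-ins for whatever key the table actually has:

```sql
-- Sketch: bound per-UOW logging by updating and committing in ranges.
UPDATE schema.fact_tbl_comp
SET    some_col = 'new value'            -- hypothetical 10-byte column
WHERE  row_id BETWEEN 1 AND 10000000;    -- hypothetical key range
COMMIT;
-- repeat for the next row_id range until all 72M rows are done
```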

On Dec 5, 2:55 pm, mike_dba <michaelaaldr...@yahoo.com> wrote:
> I just double checked:
>
> alter table schema.fact_tbl_comp COMPRESS YES;
> reorg table schema.fact_tbl_comp resetdictionary;
> runstats ...
> update ...  -- fails with SQL0964 (transaction log full)

Do you know whether subsequent inserts after the REORG simply use the
existing dictionary or add entries to it? And do you know the effect
of NOT LOGGED INITIALLY on compression?

I believe you may have pointed me in the right direction with regard
to REORG. I'll test some more.
Dec 5 '07 #6

mike_dba wrote:
> Do you know whether subsequent inserts after the REORG simply use the
> existing dictionary or add entries to it? And do you know the effect
> of NOT LOGGED INITIALLY on compression?

Once the dictionary is created, it remains static. NOT LOGGED
INITIALLY (NLI) has no impact on compression, and vice versa.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 6 '07 #7
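Since the dictionary is static, refreshing it after the data distribution changes requires another offline REORG. A sketch of the two options, using the table name from this thread (DB2 9 syntax):

```sql
-- Rebuild the compression dictionary from the current data, then
-- recompress the table with it:
REORG TABLE schema.fact_tbl_comp RESETDICTIONARY;

-- Recompress rows while keeping the existing dictionary as-is:
REORG TABLE schema.fact_tbl_comp KEEPDICTIONARY;
```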

This discussion thread is closed