Bytes | Developer Community

V9 Compression & Log Use

I have been testing compression for update operations. Can anyone
tell me why I require more log for an update of a compressed table
than I do for the same table that is not compressed ?

I tried an update for the same number of rows for two copies of a
table, one compressed and one not. The compressed UOW exceeds my log
allocation while the non-compressed does not.

Thanks
Dec 5 '07 #1
6 replies · 2,210 views
mike_dba wrote:
> I have been testing compression for update operations. Can anyone
> tell me why I require more log for an update of a compressed table
> than I do for the same table that is not compressed?
>
> I tried an update for the same number of rows for two copies of a
> table, one compressed and one not. The compressed UOW exceeds my log
> allocation while the non-compressed one does not.

Huh? That's odd. The log records remain compressed. Simply speaking, you
should see a similar compression ratio for the logs as for the table.

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #2
On Dec 5, 11:26 am, Serge Rielau <srie...@ca.ibm.com> wrote:
> Huh? That's odd. The log records remain compressed. Simply speaking, you
> should see a similar compression ratio for the logs as for the table.
I was thinking that maybe the updated column compressed extremely well
(the table went from 22 GB to 9 GB), and the update changed a value that
had no entry in the dictionary, so this caused some trickle-down effect
and additional logging. But the fact that the update fits in my logs for
the non-compressed data is puzzling.

I am updating a 10-byte column in a 559-byte-wide row. There are 72
million rows to update. The log started at 8 GB and was increased to
12 GB, and the update still wouldn't fit.
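[As a sanity check on these numbers, here is some rough arithmetic. The per-log-record overhead is an assumed figure, not something from the thread or the DB2 documentation: if a same-length column update logs only the old and new column values plus a fixed header, 72 million updates fit comfortably in an 8 GB log, whereas logging full before/after row images would not fit even in 12 GB.]

```python
# Back-of-envelope log sizing for the update described above.
# Assumption (not from the thread): partial-record logging writes roughly
# the changed bytes plus a fixed per-record overhead, while a rewritten
# row is logged as full old + new row images.

ROWS = 72_000_000
ROW_BYTES = 559
COL_BYTES = 10
LOG_RECORD_OVERHEAD = 50   # assumed per-log-record header cost, in bytes

GB = 1024 ** 3

# Same-length column update: old + new column values + overhead per row.
partial = ROWS * (2 * COL_BYTES + LOG_RECORD_OVERHEAD)

# Full row images (old + new) per row, e.g. if every row is rewritten.
full = ROWS * (2 * ROW_BYTES + LOG_RECORD_OVERHEAD)

print(f"partial-record logging: ~{partial / GB:.1f} GB")
print(f"full-row logging:       ~{full / GB:.1f} GB")
```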

Dec 5 '07 #3
mike_dba wrote:
> I am updating a 10-byte column in a 559-byte-wide row. There are 72
> million rows to update. The log started at 8 GB and was increased to
> 12 GB and still wouldn't fit.
I do have a thought.
Could it be that you do have the dictionary, but the data is NOT
compressed? Then when you do the UPDATE, DB2 compresses the page in
question, which of course requires logging the whole page.
So what you are logging is not the update but the piecemeal compression
of the table.
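[A quick way to check whether this theory is even plausible at the sizes involved: estimate the log volume if every touched page is logged in full. The page size and rows-per-page below are assumptions for illustration, not figures from the thread.]

```python
# Rough check of the "piecemeal compression" theory: if each page touched
# by the UPDATE is logged in full, how much log would 72M rows generate?
# Assumptions: 4 KB pages, uncompressed 559-byte rows packed by division.

ROWS = 72_000_000
ROW_BYTES = 559
PAGE_BYTES = 4096                          # assumed 4 KB page size

rows_per_page = PAGE_BYTES // ROW_BYTES    # ~7 uncompressed rows per page
pages_touched = -(-ROWS // rows_per_page)  # ceiling division
log_bytes = pages_touched * PAGE_BYTES

print(f"~{pages_touched:,} pages, ~{log_bytes / 1024**3:.0f} GB of full-page log")
```

Even this crude estimate lands far above a 12 GB log allocation, which is consistent with the SQL0964 the poster is seeing.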

Make sure you run REORG after building the dictionary.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #4
On Dec 5, 1:24 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
> Could it be that you do have the dictionary, but the data is NOT
> compressed? So what you are logging is not the update but the piecemeal
> compression of the table.
>
> Make sure you run REORG after getting the dictionary.
I just double-checked:

alter table schema.fact_tbl_comp COMPRESS YES;
reorg table schema.fact_tbl_comp resetdictionary;
runstats ...
update   -- fails with SQL0964 (log full)
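[One way to confirm that the rows really are stored compressed after the REORG is to check the catalog. A sketch; the column names below are from the DB2 9 catalog and are worth verifying against your exact release:]

```sql
-- Confirm row compression is enabled and pages were actually saved.
SELECT compression, pctpagessaved
FROM   syscat.tables
WHERE  tabschema = 'SCHEMA'
  AND  tabname   = 'FACT_TBL_COMP';
-- COMPRESSION = 'R' means row compression is enabled for the table;
-- PCTPAGESSAVED > 0 (populated by RUNSTATS) indicates the stored rows
-- are in fact compressed, not just eligible for compression.
```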

Dec 5 '07 #5
On Dec 5, 2:55 pm, mike_dba <michaelaaldr...@yahoo.com> wrote:
> I just double checked:
>
> alter table schema.fact_tbl_comp COMPRESS YES;
> reorg table schema.fact_tbl_comp resetdictionary;
> runstats ...
> update --sql0964 log full
Do you know if subsequent inserts after the reorg simply use the
existing dictionary or add entries to it?
Do you know the effect of NOT LOGGED INITIALLY on compression?

I believe you may have pointed me in the right direction with regard to
reorg. I'll test some more.
Dec 5 '07 #6
mike_dba wrote:
> Do you know if subsequent inserts after the reorg simply use the
> existing dictionary or add entries to it?
> Do you know the effect of NOT LOGGED INITIALLY on compression?

Once the dictionary is created, it remains static.
NLI has no impact on compression, and vice versa.
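[Since the dictionary stays static after it is built, data whose patterns change later only compresses well if the dictionary is rebuilt. A sketch of the two REORG options involved; syntax as in DB2 9, worth verifying on your release:]

```sql
-- Rebuild the compression dictionary from the table's current data
-- (use when the data distribution has drifted since the last build):
REORG TABLE schema.fact_tbl_comp RESETDICTIONARY;

-- Reorganize the table but keep the existing dictionary unchanged:
REORG TABLE schema.fact_tbl_comp KEEPDICTIONARY;
```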

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 6 '07 #7

This discussion thread is closed

Replies have been disabled for this discussion.
