V9 Compression & Log Use

I have been testing compression for update operations. Can anyone
tell me why I need more log space for an update of a compressed table
than I do for the same table uncompressed?

I tried an update of the same number of rows against two copies of a
table, one compressed and one not. The compressed UOW exceeds my log
allocation while the non-compressed one does not.
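Roughly what I ran; the base table, the updated column, and the
literal here are illustrative (the real names differ):

create table schema.fact_tbl_nocomp like schema.fact_tbl;
create table schema.fact_tbl_comp like schema.fact_tbl;
alter table schema.fact_tbl_comp COMPRESS YES;
-- load the same rows into both copies, then run the identical update:
update schema.fact_tbl_nocomp set upd_col = 'new value';  -- completes
update schema.fact_tbl_comp set upd_col = 'new value';    -- exceeds the log allocation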

Thanks
Dec 5 '07 #1
6 replies, 2355 views
mike_dba wrote:
I have been testing compression for update operations. Can anyone
tell me why I need more log space for an update of a compressed table
than I do for the same table uncompressed?

I tried an update of the same number of rows against two copies of a
table, one compressed and one not. The compressed UOW exceeds my log
allocation while the non-compressed one does not.
Huh? That's odd. The log records remain compressed. Simply speaking, you
should see a similar compression ratio for the logs as for the table.
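If you want to watch where the log space goes while each statement
runs, the database snapshot should tell you; this assumes the
SYSIBMADM administrative views are available on your V9 build:

select total_log_used, total_log_available from sysibmadm.snapdb;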

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #2
On Dec 5, 11:26 am, Serge Rielau <srie...@ca.ibm.com> wrote:
Huh? That's odd. The log records remain compressed. Simply speaking, you
should see a similar compression ratio for the logs as for the table.
I was thinking that maybe the updated column compresses extremely well
(the table went from 22 GB to 9 GB), and that the new value had no
entry in the dictionary, so the update caused some trickle-down effect
and additional logging. But the fact that the same update fit into my
logs for the non-compressed copy is puzzling.

I am updating a 10-byte column in a 559-byte-wide row. There are 72
million rows to update. The log started at 8 GB, was increased to
12 GB, and the UOW still wouldn't fit.
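A rough back-of-envelope, assuming something like 50 bytes of
per-record log overhead (a guess): if only the 10-byte before and
after images plus overhead are logged, that is about
72,000,000 x (10 + 10 + 50) bytes, roughly 5 GB, which fits in an
8 GB log. If whole row images were logged instead,
72,000,000 x 559 bytes x 2 is around 80 GB, which no 12 GB log will hold.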

Dec 5 '07 #3
mike_dba wrote:
I was thinking that maybe the updated column compresses extremely well
(the table went from 22 GB to 9 GB), and that the new value had no
entry in the dictionary, so the update caused some trickle-down effect
and additional logging. But the fact that the same update fit into my
logs for the non-compressed copy is puzzling.

I am updating a 10-byte column in a 559-byte-wide row. There are 72
million rows to update. The log started at 8 GB, was increased to
12 GB, and the UOW still wouldn't fit.
I do have a thought.
Could it be that you have the dictionary, but the data is NOT
compressed? Then, when you run the UPDATE, DB2 compresses the page in
question, which of course requires logging the whole page.
So what you are logging is not the update but the piecemeal compression
of the table.

Make sure you run REORG after building the dictionary.
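If you happen to be on 9.5, there is also a table function that
reports whether a dictionary exists and how it was built; I am quoting
the arguments from memory, so check the docs:

select * from table(sysproc.admin_get_tab_compress_info('SCHEMA', 'FACT_TBL_COMP', 'REPORT')) as t;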

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 5 '07 #4
On Dec 5, 1:24 pm, Serge Rielau <srie...@ca.ibm.com> wrote:
I do have a thought.
Could it be that you have the dictionary, but the data is NOT
compressed? Then, when you run the UPDATE, DB2 compresses the page in
question, which of course requires logging the whole page.
So what you are logging is not the update but the piecemeal compression
of the table.

Make sure you run REORG after building the dictionary.

I just double-checked:

alter table schema.fact_tbl_comp COMPRESS YES;
reorg table schema.fact_tbl_comp resetdictionary;
runstats ...
update ...  -- fails with SQL0964C (transaction log full)
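Next I'll also check what the catalog says the reorg achieved; if I
remember right, PCTPAGESSAVED is populated by the runstats:

select compression, pctpagessaved from syscat.tables
where tabschema = 'SCHEMA' and tabname = 'FACT_TBL_COMP';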

Dec 5 '07 #5
On Dec 5, 2:55 pm, mike_dba <michaelaaldr...@yahoo.com> wrote:
I just double-checked:

alter table schema.fact_tbl_comp COMPRESS YES;
reorg table schema.fact_tbl_comp resetdictionary;
runstats ...
update ...  -- fails with SQL0964C (transaction log full)

Do you know whether subsequent inserts after the reorg simply use the
existing dictionary or add entries to it?
Do you know the effect of Not Logged Initially on compression?

I believe you may have pointed me in the right direction with regard
to the reorg. I'll test some more.
Dec 5 '07 #6
mike_dba wrote:
Do you know whether subsequent inserts after the reorg simply use the
existing dictionary or add entries to it?
Do you know the effect of Not Logged Initially on compression?
Once the dictionary is created, it remains static.
NLI has no impact on compression, and vice versa.
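If you do experiment with NLI, keep in mind it only suppresses logging
for the unit of work in which it is activated, and a failure in that
unit of work leaves the table unusable. A sketch, assuming the table
was created with the NOT LOGGED INITIALLY attribute and autocommit is
off (e.g. db2 +c) so the ALTER and the UPDATE share one unit of work;
the column name is illustrative:

alter table schema.fact_tbl_comp activate not logged initially;
update schema.fact_tbl_comp set upd_col = 'new value';
commit;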

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
Dec 6 '07 #7
