Bytes IT Community

MDC+partitioning+compression

Hi, All

I have a strange situation with table design on DB2 9.1 for Windows.

I have 3 tables with the same structure:

Table 1 - an MDC table
Table 2 - a partitioned MDC table
Table 3 - a compressed partitioned MDC table
Each table is in a separate DMS tablespace.

I'm loading 3.6 million rows into every table and see the following picture:

1. The MDC table has a size of 122 MB.
2. The partitioned MDC table has a size of 1340 MB !! (very big!!)
3. The compressed partitioned MDC table (after REORG TABLE
....RESETDICTIONARY) has a size of 2680 MB !!!! (double the size!!!)

It's very strange...
Can anybody comment on this situation for me?

P.S. Sorry for my English. :-(

Sep 15 '06 #1
8 Replies


mitek wrote:
> Hi, All
>
> I have a strange situation with table design on DB2 9.1 for Windows.
>
> I have 3 tables with the same structure:
>
> Table 1 - an MDC table
> Table 2 - a partitioned MDC table

Do you mean "database partitioning" or "table partitioning"? (But I don't
think this would make a difference here.)

> Table 3 - a compressed partitioned MDC table
> Each table is in a separate DMS tablespace.
>
> I'm loading 3.6 million rows into every table and see the following picture:
>
> 1. The MDC table has a size of 122 MB.
> 2. The partitioned MDC table has a size of 1340 MB !! (very big!!)

What are your clustering columns, and how many distinct tuples do you have on
those? Because if you use a unique column (e.g. a primary key), then each
row will go to its own cell. That implies that each cell has 1 extent
allocated and only 1 row on the pages in this extent. Thus, if you have too
many cells (which could be made worse by the partitioning, since each
partition has its own, independent cells), you will see a lot of disk space
being needed.

How did you measure the size of the tables?

p.s.: I don't know enough about compression (yet) to comment on that part.

--
Knut Stolze
DB2 Information Integration Development
IBM Germany
Sep 15 '06 #2
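[Editor's note: Knut's point (at least one extent allocated per cell, per partition) can be sketched numerically. This is a minimal model, assuming a 4 KB page size and DB2's default extent size of 32 pages; the cell counts are made-up illustrations, not the poster's data.]

```python
# Minimal space model: every MDC cell occupies at least one full extent,
# and each range partition keeps its own independent set of cells, so
# allocated space is driven by the cell count rather than the row count.

def min_table_size_kb(cells, partitions=1, page_kb=4, extent_pages=32):
    """Lower bound on allocated space: one extent per cell per partition."""
    extent_kb = page_kb * extent_pages          # 128 KB with the defaults
    return cells * partitions * extent_kb

# One partition with 1,000 cells: at least 1,000 extents allocated.
print(min_table_size_kb(1_000))

# The same cells duplicated across 46 monthly range partitions.
print(min_table_size_kb(1_000, partitions=46))
```

With a near-unique clustering column, `cells` approaches the row count and this lower bound explodes, which is exactly the failure mode Knut describes.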

Knut Stolze wrote:
> mitek wrote:
>> Hi, All
>>
>> I have a strange situation with table design on DB2 9.1 for Windows.
>>
>> I have 3 tables with the same structure:
>>
>> Table 1 - an MDC table
>> Table 2 - a partitioned MDC table
>
> Do you mean "database partitioning" or "table partitioning"? (But I don't
> think this would make a difference here.)
>
>> Table 3 - a compressed partitioned MDC table
>> Each table is in a separate DMS tablespace.
>>
>> I'm loading 3.6 million rows into every table and see the following picture:
>>
>> 1. The MDC table has a size of 122 MB.
>> 2. The partitioned MDC table has a size of 1340 MB !! (very big!!)
>
> What are your clustering columns, and how many distinct tuples do you have
> on those? Because if you use a unique column (e.g. a primary key), then
> each row will go to its own cell. That implies that each cell has 1 extent
> allocated and only 1 row on the pages in this extent. Thus, if you have
> too many cells (which could be made worse by the partitioning, since each
> partition has its own, independent cells), you will see a lot of disk
> space being needed.

I can only reinforce what Knut is stating.
If you end up with nearly empty extents, you get a situation like the vacuum
of space: a whole lot of nothing.
When you apply partitioning in addition to MDC, you will need to make your
MDC dimensions more coarse.
A good rule of thumb by which I like to operate is to aim for no more than
20% growth when making a table MDC.
If you have very many ranges on a small table, you may observe the same
effect. Keep in mind that each range will have, on average, one half-full
extent in a perfectly reorganized table.
If you have 10,000 ranges * 128 KB extent size / 2 (half full), that is a
sizable number.

If you pass along your design and the expected number of rows, we can
certainly comment.

W.r.t. compression, each range will have its own dictionary. If your ranges
are very small, I could imagine the effect you are seeing.
Also, compression will have no effect if your extents are mostly empty
anyway. A vacuum doesn't shrink when you compress the few lost atoms inside
of it.

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Sep 15 '06 #3
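[Editor's note: Serge's 10,000-range arithmetic above works out as follows. The 128 KB extent size is his example figure, e.g. 32 pages of 4 KB.]

```python
# Wasted space from one half-full extent per range, using Serge's figures.
ranges = 10_000
extent_kb = 128                        # e.g. 32 pages * 4 KB
wasted_kb = ranges * extent_kb // 2    # on average, half of each is empty
print(f"{wasted_kb} KB, i.e. {wasted_kb // 1024} MB")   # 640000 KB, 625 MB
```

So even a perfectly reorganized table pays roughly 625 MB of pure slack at that range count, before any rows are stored.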

Serge Rielau wrote:
> W.r.t. compression, each range will have its own dictionary.

This could actually double the needed disk space? Where is the dictionary
stored in a partitioned MDC table? Inquiring minds want to know...

--
Knut Stolze
DB2 Information Integration Development
IBM Germany
Sep 15 '06 #4

Knut Stolze wrote:
> Serge Rielau wrote:
>> W.r.t. compression, each range will have its own dictionary.
>
> This could actually double the needed disk space? Where is the dictionary
> stored in a partitioned MDC table? Inquiring minds want to know...

An MDC table is still one table.
Each range partition is its own physical table.
Hence the duplication of dictionaries.
The same holds true for database partitions in DPF: dictionaries are per
node.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Sep 15 '06 #5

Serge Rielau wrote:
> Knut Stolze wrote:
>> Serge Rielau wrote:
>>> W.r.t. compression, each range will have its own dictionary.
>>
>> This could actually double the needed disk space? Where is the dictionary
>> stored in a partitioned MDC table? Inquiring minds want to know...
>
> An MDC table is still one table.
> Each range partition is its own physical table.
> Hence the duplication of dictionaries.
> The same holds true for database partitions in DPF: dictionaries are per
> node.

All right. So it's the partitioning and not the MDC stuff that results in
multiple dictionaries. That makes a lot of sense. But then: is the 2x
increase in space consumption in the range of possibilities?

--
Knut Stolze
DB2 Information Integration Development
IBM Germany
Sep 15 '06 #6

Knut Stolze wrote:
> Serge Rielau wrote:
>> Knut Stolze wrote:
>>> Serge Rielau wrote:
>>>> W.r.t. compression, each range will have its own dictionary.
>>>
>>> This could actually double the needed disk space? Where is the
>>> dictionary stored in a partitioned MDC table? Inquiring minds want to
>>> know...
>>
>> An MDC table is still one table.
>> Each range partition is its own physical table.
>> Hence the duplication of dictionaries.
>> The same holds true for database partitions in DPF: dictionaries are per
>> node.
>
> All right. So it's the partitioning and not the MDC stuff that results in
> multiple dictionaries. That makes a lot of sense. But then: is the 2x
> increase in space consumption in the range of possibilities?

I have no reason to disbelieve it, but no desire to run a formal proof.
Compression is very effective for non-pathological cases. That's what
counts. The highest ratio reported to us by a customer is 86%.
This case is already pathological without throwing compression into the
mix. No need to waste time on it as far as I am concerned.

Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Sep 15 '06 #7



Hi, Knut. Thanks for the reply.

> Do you mean "database partitioning" or "table partitioning"? (But I don't
> think this would make a difference here.)

This is non-DPF, only table partitioning.

> What are your clustering columns, and how many distinct tuples do you have
> on those? Because if you use a unique column (e.g. a primary key), then
> each row will go to its own cell. That implies that each cell has 1 extent
> allocated and only 1 row on the pages in this extent. Thus, if you have
> too many cells (which could be made worse by the partitioning, since each
> partition has its own, independent cells), you will see a lot of disk
> space being needed.
The DDL statements for my test are:

CREATE TABLESPACE TSMDC
PAGESIZE 4K
MANAGED BY DATABASE
USING (FILE 'F:\VIPERDB\tsmdc' 10 M )
AUTORESIZE YES
INCREASESIZE 10 M
MAXSIZE NONE;

CREATE TABLESPACE TSMDCPART
PAGESIZE 4K
MANAGED BY DATABASE
USING (FILE 'F:\VIPERDB\tsmdcpart' 10 M )
AUTORESIZE YES
INCREASESIZE 10 M
MAXSIZE NONE;

CREATE TABLESPACE TSMDCPARTCOMP
PAGESIZE 4K
MANAGED BY DATABASE
USING (FILE 'F:\VIPERDB\tsmdcpartcomp' 10 M )
AUTORESIZE YES
INCREASESIZE 10 M
MAXSIZE NONE;

CREATE TABLE VIPER.TAB_MDC
(NOD DECIMAL(3, 0),
ROAD DECIMAL(3, 0),
CARS DECIMAL(7, 0),
RPS DECIMAL(3, 0),
DATEMODIFY DATE
)
IN TSMDC
ORGANIZE BY DIMENSIONS (NOD,ROAD);

CREATE TABLE VIPER.TAB_MDC_PART
(NOD DECIMAL(3, 0),
ROAD DECIMAL(3, 0),
CARS DECIMAL(7, 0),
RPS DECIMAL(3, 0),
DATEMODIFY DATE
)
IN TSMDCPART
PARTITION BY RANGE (DATEMODIFY)
(STARTING '2002-12-01' ENDING '2006-09-30' EVERY 1 MONTH)
ORGANIZE BY DIMENSIONS (NOD,ROAD);

CREATE TABLE VIPER.TAB_MDC_PART_COMP
(NOD DECIMAL(3, 0),
ROAD DECIMAL(3, 0),
CARS DECIMAL(7, 0),
RPS DECIMAL(3, 0),
DATEMODIFY DATE
)
IN TSMDCPARTCOMP
PARTITION BY RANGE (DATEMODIFY)
(STARTING '2002-12-01' ENDING '2006-09-30' EVERY 1 MONTH)
ORGANIZE BY DIMENSIONS (NOD,ROAD)
COMPRESS YES;

Column cardinalities are NOD = 6, ROAD = 38, RPS = 13; CARS is random.

> How did you measure the size of the tables?

I inspect the size of the tablespace container for every table after the
load and the reorg (for the compressed table VIPER.TAB_MDC_PART_COMP).

Sep 18 '06 #8
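[Editor's note: the posted DDL and cardinalities allow a back-of-envelope check. Assuming the tablespaces use DB2's default extent size of 32 pages (the DDL does not specify EXTENTSIZE), each of the 6 * 38 = 228 (NOD, ROAD) cells needs at least one 128 KB extent in each of the 46 monthly partitions.]

```python
# Back-of-envelope footprint for VIPER.TAB_MDC_PART from the posted DDL.
# Assumption: default extent size of 32 pages * 4 KB = 128 KB per extent.

cells_per_partition = 6 * 38      # NOD = 6, ROAD = 38 -> 228 cells
partitions = 46                   # monthly ranges, 2002-12 .. 2006-09
extent_kb = 32 * 4                # 128 KB

min_extents = cells_per_partition * partitions   # >= 1 extent per cell
min_mb = min_extents * extent_kb / 1024
rows_per_cell = 3_600_000 / min_extents

print(f"minimum extents: {min_extents}")          # 10488
print(f"minimum size: {min_mb:.0f} MB")           # 1311 MB
print(f"avg rows per cell: {rows_per_cell:.0f}")  # 343
```

That ~1.3 GB lower bound lines up with the observed 1340 MB: the partitioned table's size is almost entirely per-cell extent overhead, with only a few hundred small rows per cell.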

mitek wrote:
> Hi, Knut. Thanks for the reply.
> [...]
> Column cardinalities are NOD = 6, ROAD = 38, RPS = 13; CARS is random.
> [...]
> I inspect the size of the tablespace container for every table after the
> load and the reorg (for the compressed table VIPER.TAB_MDC_PART_COMP).
Run this query:

SELECT MAX(numrows) AS mx, MIN(numrows) AS mn, AVG(numrows) AS av
FROM (SELECT COUNT(1) AS numrows
      FROM VIPER.TAB_MDC_PART_COMP
      GROUP BY YEAR(datemodify), MONTH(datemodify), nod, road) AS t;

If I recall correctly, DB2 9 added some extra columns to SYSCAT.TABLES or
SYSSTAT.TABLES which expose the average row length.
That can be used to compute the average number of extents per MDC cell.

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Sep 18 '06 #9
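[Editor's note: Serge's last suggestion can be sketched too: from the rows per cell and an average row length (e.g. the statistic he mentions in the catalog views), estimate how many extents each cell actually needs. The ~30-byte row size and the per-page overhead below are illustrative assumptions, not DB2 internals.]

```python
import math

def extents_per_cell(rows_per_cell, avg_row_bytes,
                     page_bytes=4096, extent_pages=32, page_overhead=100):
    """Rough estimate of the extents an MDC cell needs for its rows."""
    usable = page_bytes - page_overhead            # crude per-page overhead
    rows_per_page = max(1, usable // avg_row_bytes)
    pages = math.ceil(rows_per_cell / rows_per_page)
    return max(1, math.ceil(pages / extent_pages))

# ~343 rows per cell (3.6M rows / 10,488 cells), small DECIMAL/DATE rows:
print(extents_per_cell(343, avg_row_bytes=30))     # 1 extent, mostly empty
```

A cell that fills only ~3 of its 32 pages explains the whole thread: the extent is allocated regardless, so neither partitioning nor compression can shrink it.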
