"shsandeep" <sa**********@g mail.com> wrote in message
news:ab******** *************** *******@localho st.talkaboutdat abases.com...
Hi all,
I have heard and read this many times: "Partitions should only be used for
'very large' tables".
What actually determines whether a table is 'very large' or not?
I have tables containing 0.5 million rows, 8 million rows, 14 & 29 million
rows as well.
How do I categorize them?
Any comments will be helpful.
Cheers,
San.
Partitioning (with DB2 DPF) should be used when a significant number of the
queries being run would benefit from parallel processing. These are usually
queries where it is necessary to read all (or a large percentage) of the rows
in a table in order to return the answer, the table has a lot of rows, and
response time is unacceptable in a non-parallel environment. This usually
shows up as an access plan with one or more table scans of large tables.
There is no magic number of rows that tells you when DPF is recommended.
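For illustration, a query of the sort described above might look like this
(table and column names are hypothetical). With no selective predicate, DB2
must scan every row, and under DPF each partition can scan its share of the
data in parallel:

```sql
-- Hypothetical example: aggregate over an entire large table.
-- No WHERE clause restricts the rows, so the access plan is a full
-- table scan -- the case where DPF parallelism pays off.
SELECT region, SUM(sale_amount)
FROM sales_history
GROUP BY region;
```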
An OLTP system, which uses indexes to quickly return only a few rows, may not
benefit at all from DPF, regardless of the number of rows in the table. In
fact, OLTP queries may run slower with DPF because of the overhead required
for each partition to process its portion of the table and for DB2 to
assemble the results into one answer. (There are some situations where DPF
can be used effectively with OLTP, but only experienced professionals should
attempt this.)
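By contrast, a typical OLTP access of the kind described above touches only a
handful of rows through an index (names again made up for illustration), so
spreading it across partitions adds coordination overhead without saving any
scan work:

```sql
-- Hypothetical OLTP point lookup: an index on cust_id returns one row
-- almost immediately, leaving little work for DPF to parallelize.
SELECT cust_name, balance
FROM customers
WHERE cust_id = 123456;
```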
DPF (the Data Partitioning Feature) enables inter-partition parallelism. But
you can also get intra-partition parallelism with only one partition (and
without DPF). UNION ALL views are one way to accomplish this, and I presume
that range partitioning in V9.1 (Viper), due out later this summer, will do
the same.
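A UNION ALL view of the kind mentioned above can be sketched roughly as
follows (the table names and range column are invented for illustration).
Each underlying table holds one range of the data, and the check constraints
let the optimizer skip any table whose range a query cannot touch:

```sql
-- Hypothetical range partitioning via UNION ALL (pre-V9.1 style).
CREATE TABLE sales_2004 (
    sale_date DATE NOT NULL,
    amount    DECIMAL(12,2),
    CHECK (sale_date BETWEEN '2004-01-01' AND '2004-12-31')
);
CREATE TABLE sales_2005 (
    sale_date DATE NOT NULL,
    amount    DECIMAL(12,2),
    CHECK (sale_date BETWEEN '2005-01-01' AND '2005-12-31')
);

-- The view presents the ranges as one logical table.
CREATE VIEW sales_all AS
    SELECT sale_date, amount FROM sales_2004
    UNION ALL
    SELECT sale_date, amount FROM sales_2005;
```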