Bytes IT Community

UDB v8.1 fp 6 Summary table with an index

Created a summary table (DATA INITIALLY DEFERRED REFRESH DEFERRED) that
contains a three-table join with a cardinality of about 4.5 million rows.
If I do not have an index on the summary table and do a refresh, the
amount of transaction logging is reasonable. But when an index is added
and a refresh is attempted, the logging seems to increase dramatically
(not double or triple, but even more), to the point that I have to kill
the refresh because it is about to fill up our log file system. The only
way I see around this is to drop the index, refresh, and re-create the
index. Does anybody know why the logging increases so much when there is
an index on a summary table?
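For reference, a minimal sketch of how such a deferred-refresh summary table might be defined and populated. The table, column, and join names here are hypothetical, not taken from the original post:

```sql
-- Hypothetical schema: a three-table join materialized as a
-- deferred-refresh summary table. All names are illustrative.
CREATE TABLE sales_summary AS (
    SELECT s.store_id, p.prod_id, c.region, SUM(s.amount) AS total_amount
    FROM   sales s
           JOIN product  p ON s.prod_id = p.prod_id
           JOIN customer c ON s.cust_id = c.cust_id
    GROUP BY s.store_id, p.prod_id, c.region
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- Populating it runs as a single, fully logged unit of work:
REFRESH TABLE sales_summary;
```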

Nov 12 '05 #1
3 Replies


I would like to rephrase my question: why does a refresh on a summary
table with indexes require so much more transaction logging than
refreshing a summary table with no indexes and creating the indexes
after the refresh? In my case we're talking about 3.5 GB of additional
log space.
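The drop/refresh/re-create workaround mentioned above could be scripted roughly as follows; the index and column names are hypothetical:

```sql
-- Drop the index so the refresh does not log index maintenance,
-- then rebuild it afterwards. Names are illustrative.
DROP INDEX summ_ix1;
REFRESH TABLE sales_summary;
CREATE INDEX summ_ix1 ON sales_summary (store_id, prod_id);
```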



Nov 12 '05 #2

In case anyone cares:

Found that an excessive amount of log space was being used during the
refresh of a 4.5 million row summary table with 3 indexes, which seemed
suspicious. Research showed that a previously refreshed summary table
logs the deletes in addition to the inserts. All of this work is done in
one unit of work, so secondary log use becomes excessive. If there are
indexes on the summary table, they cause even more logging.
Here are the results of log space used on a refresh with a 4.5 million
row summary table with 3 indexes:

In bytes:

Refresh with 3 indexes in place               = 8,494,062,221 (~7.9 GB)
Drop indexes, refresh, create indexes         = 4,200,950,270 (~3.9 GB)
Drop/re-create table, refresh, create indexes = 1,474,991,094 (~1.4 GB)

To work around the amount of logging taking place, we decided to drop
and re-create the summary table before running the refresh and create
indexes after the refresh.

The order of events is:

1. drop summary table
2. create summary table
3. refresh summary table
4. create indexes
5. runstats
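Assuming the same hypothetical names as before (including the schema name `myschema`), the five steps above could be scripted as:

```sql
-- 1. drop and 2. re-create the summary table, so the refresh only
--    logs inserts, not the deletes of the previous contents
DROP TABLE sales_summary;
CREATE TABLE sales_summary AS (
    SELECT s.store_id, p.prod_id, c.region, SUM(s.amount) AS total_amount
    FROM   sales s
           JOIN product  p ON s.prod_id = p.prod_id
           JOIN customer c ON s.cust_id = c.cust_id
    GROUP BY s.store_id, p.prod_id, c.region
) DATA INITIALLY DEFERRED REFRESH DEFERRED;

-- 3. populate the table while no indexes exist
REFRESH TABLE sales_summary;

-- 4. build the indexes once, after the data is in place
CREATE INDEX summ_ix1 ON sales_summary (store_id, prod_id);

-- 5. collect statistics for the optimizer
RUNSTATS ON TABLE myschema.sales_summary
    WITH DISTRIBUTION AND DETAILED INDEXES ALL;
```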

Nov 12 '05 #3

I've been waiting for a response from one of the gurus on this topic.
It seems to defeat the purpose if it is more efficient simply to
re-create the table from scratch each time. Hmmmm

Nov 12 '05 #4

This discussion thread is closed; replies have been disabled.