Friends:
I've written a stored procedure that, among other things, writes
several million rows to a declared global temporary table (DGTT),
created NOT LOGGED, using a (cursor) FOR loop. My insert
throughput starts out lightning-fast but eventually plateaus at about
1 row per second. On a lark, I decided to alter the loop's cursor to
WITH HOLD (the DGTT is, of course, declared ON COMMIT PRESERVE ROWS)
and added a COMMIT to the INSERTing FOR loop every 2000 rows.
Performance is now MUCH improved.
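For reference, here is a minimal sketch of the pattern (table and
column names are illustrative, not from my actual procedure):

```sql
-- Session temp table: private, unlocked, unlogged
DECLARE GLOBAL TEMPORARY TABLE SESSION.WORK_ROWS
  (ID INTEGER, PAYLOAD VARCHAR(100))
  ON COMMIT PRESERVE ROWS NOT LOGGED;

-- Inside the SQL procedure body:
BEGIN
  DECLARE V_COUNT INTEGER DEFAULT 0;
  -- WITH HOLD keeps the cursor open across the intermediate COMMITs
  FOR V AS C1 CURSOR WITH HOLD FOR
    SELECT ID, PAYLOAD FROM SOURCE_TABLE
    FOR READ ONLY WITH UR
  DO
    INSERT INTO SESSION.WORK_ROWS VALUES (V.ID, V.PAYLOAD);
    SET V_COUNT = V_COUNT + 1;
    IF MOD(V_COUNT, 2000) = 0 THEN
      COMMIT;  -- every 2000 rows; this is what restored throughput
    END IF;
  END FOR;
  COMMIT;
END
```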
Questions:
1. If the query the cursor is based on is FOR READ ONLY WITH UR, and if
DGTTs, as private session objects, don't acquire locks, and the one
I've declared is not logged, why did the COMMITting help so much? There
should be no locks to release and no accumulated log buffer entries to
flush.
2. Conventional wisdom says that COMMITs are expensive and should
therefore be used sparingly, yet they made my processing better, not
worse. Why?
3. Does a COMMIT--in addition to releasing locks and causing log buffer
entries to be written--also force all dirty pages in the buffer pool to
be written to disk?
Details: IBM DB2 UDB 8.2.3 on AIX 5.2
Thank you for your considered replies.
--Jeff