Bytes IT Community

deleting large amounts of records

I run an SQL statement like the following. At times there are thousands of records selected for deletion. I have a DB2 server with 8 GB of RAM. The deletion takes several minutes and sometimes times out or hangs.

delete from db2inst1.results where updated <= (current_timestamp - 30 day )

What parameters can I tune to make this happen quickly?

thanks
Oct 15 '07 #1
2 Replies


docdiesel
Hi,

DB2 does this deletion in one transaction, or at least it tries to. Selecting the rows to delete, marking them as deleted, and writing all of this to the log files takes a lot of time, memory and disk space (logs).
  1. Try to do this in off peak hours.
  2. Do this in more than one step, i.e. in smaller chunks. E.g. if you run this daily to delete the rows of [day -30], first run "delete ... where date=... and time>='18:00:00'", then the same for 12:00, 06:00 and 00:00.
  3. Increase the buffer pool size and the NUM_IOCLEANERS if necessary/possible.
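The chunking idea in step 2 can also be done without a time column, by deleting in fixed-size batches and committing in between so the log never has to hold the whole delete. A sketch, assuming DB2 LUW's "delete from a fullselect" form and the table/column names from the question; the batch size of 10000 is an arbitrary example:

```sql
-- Repeat this pair of statements (e.g. from a script) until it
-- reports 0 rows deleted. Each run removes at most 10000 of the
-- qualifying rows and commits, keeping log usage small.
DELETE FROM (
    SELECT 1
    FROM db2inst1.results
    WHERE updated <= (CURRENT TIMESTAMP - 30 DAYS)
    FETCH FIRST 10000 ROWS ONLY
);
COMMIT;
```

Committing after each batch releases locks and log space, so other work on the table is only briefly blocked instead of waiting for one huge transaction.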
Regards, Bernd
Oct 15 '07 #2

Jaclyn,
You can use LOCK TABLE, which eliminates the per-row locking overhead during the delete. However, this technique makes the table unavailable to concurrent tasks until you commit.
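For illustration, a sketch of this approach using the statement from the original question (table and column names are taken from the thread, not verified against any real schema):

```sql
-- Take one exclusive table lock up front instead of thousands of
-- row locks; other sessions are blocked until the COMMIT.
LOCK TABLE db2inst1.results IN EXCLUSIVE MODE;

DELETE FROM db2inst1.results
 WHERE updated <= (CURRENT TIMESTAMP - 30 DAYS);

COMMIT;
```

Note the lock is released at commit (or rollback), so run this in off-peak hours if other applications use the table.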
Oct 16 '07 #3
