
Performance bottleneck in bulk updates/inserts

P: 8
I have been working on bulk updates and inserts through sqlapi++ using a bulk update extension library that was developed internally a long time ago. The database is Oracle. The bulk inserts/updates are done in batches of 50K records. I am facing a very peculiar performance issue with these bulk operations.

The initial batch of bulk inserts (50K records) finishes within 3 seconds, while the next similar batch of inserts or updates (50K records again) takes a whopping 373 seconds. Using 'top' on a RHEL AS 4 server, I can see that it is the Oracle process that takes up the whole 373 seconds, so sqlapi++ and the internally developed extension are not the culprits.

The third batch of 50K records in the sequence takes even longer (913 seconds). The time keeps increasing sharply, and there doesn't seem to be any pattern to it.

Surprisingly, this is not consistent. On a good day, the second batch goes through in 3 seconds, with all records intact and without any defect in the data. In fact, all of the subsequent batches finish in or around 3 seconds as well.

Yet more surprisingly, if I truncate the table and restart the process, the performance problem reappears: the second batch again takes 370-380 seconds. However, if I use a 'delete from' query instead of 'truncate table' to remove all the records from the table, there is no problem at all!

So in short, I have come to the conclusion that the performance bottleneck occurs when the table is truncated (or created brand new), and not when all records are deleted using a 'delete from' query.
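
Just for reference, these are the two ways of clearing the table that behave differently for me (the table name here is only a placeholder):

    delete from my_bulk_table;     -- after this, the next batches load in ~3 seconds
    commit;

    truncate table my_bulk_table;  -- after this, the second batch slows down to 370-380 seconds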

Any guesses as to why this could be occurring? I confess I am not very good at databases, so any help would be much appreciated.

Thanks in advance.

-
Shreyas
Sep 22 '08 #1
4 Replies


debasisdas
Expert 5K+
P: 8,127
You need to check for any blocking locks in the database.
Sep 22 '08 #2

P: 8
You need to check for any blocking locks in the database.
I tried 'select * from dba_waiters' and 'select * from dba_blockers', but both views returned no rows, so I suppose there are no blocking locks; correct me if I am wrong.
I also rechecked that commits are issued after the sessions, and found they were in place. In fact, this bottleneck occurs only when the table is truncated; that much is for sure.
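
For what it's worth, this is roughly the check I ran, plus a slightly wider variant joining to v$session (just a sketch; it assumes SELECT access to the DBA/V$ views and that sql_id is available, i.e. 10g or later):

    -- Both of these returned no rows for me:
    select * from dba_waiters;
    select * from dba_blockers;

    -- Wider check: which session is waiting on which holder, and what it is running.
    select w.waiting_session, w.holding_session, w.lock_type,
           w.mode_held, w.mode_requested, s.username, s.sql_id
    from   dba_waiters w
           join v$session s on s.sid = w.waiting_session;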

What I don't understand is what Oracle could be doing differently when truncating the table versus deleting the records with a 'delete from' query.
Sep 23 '08 #3

100+
P: 153
I tried 'select * from dba_waiters' and 'select * from dba_blockers', but both views returned no rows, so I suppose there are no blocking locks; correct me if I am wrong.
I also rechecked that commits are issued after the sessions, and found they were in place. In fact, this bottleneck occurs only when the table is truncated; that much is for sure.

What I don't understand is what Oracle could be doing differently when truncating the table versus deleting the records with a 'delete from' query.
When you truncate a table in Oracle, it deallocates all of the space allocated to the table and returns it to the free space in the tablespace. Subsequent inserts therefore need to request new extents before the rows can be inserted. While I have seen this process slow things down somewhat, I wouldn't expect that requesting enough space for 50K rows would take that long. What else is on the same I/O path as the datafiles you are using?
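
If the extent deallocation does turn out to be the issue, one thing worth trying (a sketch, with a placeholder table name) is to keep the allocated space when truncating:

    -- Default behaviour: extents above MINEXTENTS are deallocated.
    truncate table my_bulk_table drop storage;

    -- Keeps the extents allocated to the segment, so the next bulk load
    -- does not have to request new space first.
    truncate table my_bulk_table reuse storage;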

Also, using 'top' to see that Oracle is running a process on the server doesn't really tell you a lot, because if you installed the database under the Linux user 'oracle' then all of the various server and background processes will show up as 'oracle'. You should (or get a DBA to) perform a trace of your session so you can see where the time is actually going. AWR reports can also be a good indicator of where all your time is being spent.
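
As a rough sketch of what that could look like (SQL*Plus syntax; the SID and SERIAL# values are placeholders, and DBMS_MONITOR needs 10g or later):

    -- Extended SQL trace (waits and binds) for the session doing the bulk load.
    exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);

    -- ... run the slow batch, stop tracing, then look at the trace file with tkprof ...
    exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);

    -- Bracket the batch with AWR snapshots so a report covers exactly that interval
    -- (AWR requires the Diagnostics Pack licence).
    exec dbms_workload_repository.create_snapshot;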
Sep 25 '08 #4

P: 8
Yes, I had already tried truncate with space reuse.

Anyway, the problem has been resolved. If you are interested, you can have a look at this thread on OTN:
Performance bottleneck in bulk updates-inserts
Sep 25 '08 #5
