Locking is always a potential performance concern when multiple updates
(inserts, in your case) occur concurrently. The larger the "blocks" of
inserts between commits, the higher the probability of running into
lock interference. Application knowledge will be useful in determining
the potential interference effects of decreasing the commit frequency.
Array inserts should always give better performance than traditional
inserts, because an array insert of 1k rows takes a single interaction
between the CLI and the database server instead of the 1k interactions
of the traditional case. I'd compare this to the difference between
pulling a single weed from the garden and carrying it 10' to a bag,
versus gathering up all the weeds in reach and making a single trip to
the bag. It's almost always faster to batch up the work and avoid the
per-trip overhead.
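For reference, a minimal sketch of such a column-wise array insert in
DB2 CLI might look like the following. This assumes a one-column
CHAR(1) table named mytable and a connection with autocommit turned
off; handle allocation and error checking are omitted, and the names
and sizes are illustrative, not from the original post.

    #include <sqlcli1.h>

    #define ARRAY_SIZE 1000

    void array_insert(SQLHDBC hdbc, SQLHSTMT hstmt)
    {
        SQLCHAR    vals[ARRAY_SIZE][2];  /* parameter value per row    */
        SQLINTEGER lens[ARRAY_SIZE];     /* length/indicator per row   */
        int        i;

        /* Fill the arrays with sample single-character values. */
        for (i = 0; i < ARRAY_SIZE; i++) {
            vals[i][0] = (SQLCHAR)('A' + i % 26);
            vals[i][1] = '\0';
            lens[i]    = 1;
        }

        SQLPrepare(hstmt,
                   (SQLCHAR *)"INSERT INTO mytable VALUES (?)",
                   SQL_NTS);

        /* Bind the whole array to the single parameter marker. */
        SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_BIND_TYPE,
                       (SQLPOINTER)SQL_PARAM_BIND_BY_COLUMN, 0);
        SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE,
                       (SQLPOINTER)ARRAY_SIZE, 0);
        SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
                         SQL_CHAR, 1, 0, vals, sizeof vals[0], lens);

        /* One CLI<->server interaction inserts all ARRAY_SIZE rows. */
        SQLExecute(hstmt);

        /* One commit releases the locks; committing more often (in
           smaller batches) trades throughput for less interference. */
        SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
    }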
Phil Sherman
Michel Esber wrote:
Hello,
Environment: DB2 V8 FP13 LUW
Our application currently uses:
insert into table values ('A'),('B'),...('Z')
We have used CLI array inserts (array and commit size of 1000) and
managed to insert 1 million rows into an empty table in 32 seconds. Our
current model took exactly 270 seconds.
On average, the application will insert 50-100 rows at a time. In some
cases up to 5k rows may be inserted. Multiple applications insert data
into the same table concurrently.
Regarding concurrency, should I expect a performance impact when using
CLI arrays with larger array/commit sizes? Also, is it fair to say that
CLI array inserts are faster than traditional inserts, even with small
amounts of data?
Thanks in advance.