Bytes IT Community

High Performance DPF INSERT's

Hi Group,
Are there any DB2 UDB ESE DPF V8.2 users exploiting "buffered inserts"
(BIND parameter INSERT BUF) *and* multi-row INSERTs (many rows in the
VALUES clause of the INSERT, to minimize the number of calls to the
RDBMS engine) in an SQL PL stored procedure?
I ask because the latter would seem to imply embedded DYNAMIC SQL, since
the number of rows to be inserted can vary (assume it does), so the SQL
INSERT would have to be built dynamically. It's not clear that DYNAMIC
SQL and buffered inserts are compatible, i.e. that one can reap the
benefits of either or both of these high-performance-oriented features.
It's quite possible they are incompatible, and I would have to abandon
one or the other (multi-row, probably...).
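For illustration, here is the kind of statement I mean (table and host-variable names are made up for the example, not from our actual schema):

```sql
-- Static multi-row INSERT: one call to the engine covers three rows
INSERT INTO T1 (C1, C2)
VALUES (1, 'a'),
       (2, 'b'),
       (3, 'c');

-- Dynamic variant: the VALUES list is built at run time to match
-- however many rows are on hand, then prepared and executed
SET STMT_TXT = 'INSERT INTO T1 (C1, C2) VALUES (?, ?), (?, ?), (?, ?)';
PREPARE S1 FROM STMT_TXT;
EXECUTE S1 USING V11, V12, V21, V22, V31, V32;
```

It's the second, dynamic form that seems to collide with buffered inserts.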
Thanks
wombat53

Jul 12 '06 #1


wombat53 wrote:
<snip>
Buffered insert works only if the inserts follow each other immediately,
_without_any_SQL_in_between_.
In a loop in an SQL procedure it's rather unlikely that that is the
case: the loop control will likely cause SQL execution unless it's trivial.
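To illustrate (cursor, table, and variable names are hypothetical), a typical SQL PL fetch-and-insert loop interleaves other SQL between the INSERTs, which is exactly what defeats buffering:

```sql
-- Each iteration runs a FETCH between INSERTs, so two INSERTs are
-- never adjacent and the insert buffer cannot accumulate rows
OPEN C1;
FETCH C1 INTO V1, V2;
WHILE SQLCODE = 0 DO
  INSERT INTO TARGET_T (C1, C2) VALUES (V1, V2);
  FETCH C1 INTO V1, V2;   -- SQL between inserts: breaks buffered insert
END WHILE;
CLOSE C1;
```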

If you want to speed up mass inserts in a DPF system, keep in mind that
you are bottlenecking on the SQL procedure logic itself (running on the
coordinator).
You may get big benefits from parallelizing the procedure. That is, CALL
the proc on each data node (or at least on each available coordinator if
you have multiple) and have each call process a subset of the data
(preferably local data).
For a computationally heavy batch process reading from a staging table,
I have seen linear scalability with 2 concurrent procedure calls per
data node.
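As a sketch of "local data" (table names are made up), each invocation can filter on the partition it is connected to, using the DBPARTITIONNUM function and the CURRENT DBPARTITIONNUM special register:

```sql
-- Run on each partition: the predicate keeps only rows that live on
-- the partition this connection is using as coordinator, so no rows
-- need to be shipped in for the read
INSERT INTO TARGET_T (C1, C2)
  SELECT C1, C2
  FROM STAGE_T
  WHERE DBPARTITIONNUM(C1) = CURRENT DBPARTITIONNUM;
```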
Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Jul 13 '06 #2

Serge Rielau wrote:
<snip>
Thanks Serge.
We have decided to bypass the problems of commingling DYNAMIC SQL and
buffered inserts, and replace the dynamic SQL with STATIC. We will use a
CASE expression to "jump" to the appropriate statically bound INSERT for
however many rows are available to us from our Message Queue (the row
count will be a parameter, handled within the CASE expression around the
STATIC INSERTs). We are also mindful of the Special Considerations for
using Buffered Inserts, as you have noted above and as documented in
ch. 31 of the V8.2 "Programming Client Applications" guide.
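Roughly along these lines (a sketch only; P_ROWCNT, the table, and the host variables are illustrative names), using an SQL PL CASE statement to select among statically bound multi-row INSERTs:

```sql
-- One static INSERT per supported batch size; P_ROWCNT picks one
CASE P_ROWCNT
  WHEN 1 THEN
    INSERT INTO TARGET_T (C1, C2) VALUES (V11, V12);
  WHEN 2 THEN
    INSERT INTO TARGET_T (C1, C2) VALUES (V11, V12), (V21, V22);
  WHEN 3 THEN
    INSERT INTO TARGET_T (C1, C2)
      VALUES (V11, V12), (V21, V22), (V31, V32);
  ELSE
    SIGNAL SQLSTATE '70001'
      SET MESSAGE_TEXT = 'Unsupported batch size';
END CASE;
```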
I am following up on your comment: "That is CALL the proc on each
data-node (or at least each available coordinator if you have multiple)
and have it process a subset of the data (preferably local data)."
Is this something along the lines of intelligently identifying the
optimal coordinator node for the SQL (INSERT) through the two APIs
sqlugrpn - Get Row Partitioning Number, and
sqlugtpi - Get Table Partitioning Information (called only once),
such that coordinator node and data node are one, with the coordinators
distributed across servers, thereby minimizing data movement? Or are you
getting at something different when you say "parallelizing the
procedure"? Perhaps sorting the message input in some way,
multi-threading the INSERTs, or simply having multiple coordinator
nodes, along the lines of the BCU/BPU methodology? etc.
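For the "CALL on each node" part, one way I can see to drive it (database, procedure, and partition numbers are placeholders) is one CLP session per partition, each directed to a different partition as its coordinator before issuing the CALL:

```
-- CLP commands, one session per partition, run concurrently
-- (repeat with 1, 2, 3, ... for the other partitions):
SET CLIENT CONNECT_DBPARTITIONNUM 0;
CONNECT TO MYDB;
CALL LOAD_PROC(0);
CONNECT RESET;
```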
We expect to be running ESE (DPF)/Linux V9, most recent Beta, or the GA
of July 28. We are looking at many millions of SQL INSERTs/day, a
potential choke point.
Thanks
wombat53

Jul 19 '06 #3
