
High Performance DPF INSERTs

Hi Group
Are there any DB2 UDB ESE DPF V8.2 users exploiting "buffered inserts"
(BIND parm INSERT BUF) *and* multi-row INSERTs (many rows associated
with the VALUES clause of the INSERT, to minimize the number of calls to
the RDBMS engine) in an SQL PL stored procedure?
I ask because the latter would imply embedded dynamic SQL due to a
potentially varying number of rows to be inserted (assume it does vary),
and so the SQL INSERT would have to be built dynamically. It's not clear
that dynamic SQL and buffered inserts are compatible, i.e. that one can
reap the benefits of both of these high-performance-oriented features.
It's quite possible they are incompatible, and I would have to abandon
one or the other (multi-row, probably...)
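
For concreteness, here is a rough sketch of what I mean by a dynamically
built multi-row INSERT (STAGE_T, its columns, and the procedure name are
placeholders, not our real objects; the caller would pass in an already
formatted VALUES list built from the queued rows):

-- Sketch only. Whether a statement executed this way still benefits
-- from INSERT BUF is exactly the open question.
CREATE PROCEDURE INSERT_BATCH (IN p_values_list VARCHAR(31000))
LANGUAGE SQL
BEGIN
  DECLARE v_stmt VARCHAR(32000);
  -- p_values_list is expected to look like: (1,'a'), (2,'b'), ...
  SET v_stmt = 'INSERT INTO STAGE_T (COL1, COL2) VALUES ' || p_values_list;
  EXECUTE IMMEDIATE v_stmt;
END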
Thanks
wombat53

Jul 12 '06 #1
wombat53 wrote:
Are there any DB2 UDB ESE DPF V8.2 users exploiting "buffered inserts"
(BIND parm INSERT BUF) *and* multi-row INSERTs in an SQL PL stored
procedure? [...]
Buffered insert works only if the inserts follow each other immediately
_without_any_SQL_in_between_.
In a loop in an SQL procedure it's rather unlikely that this is the
case. The loop control will likely cause SQL execution unless it is trivial.
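
For example, a pattern like the following (purely illustrative names)
already has SQL between the INSERTs, because each iteration runs a FETCH
before the next INSERT:

BEGIN
  DECLARE v_c1, v_c2 INTEGER;
  DECLARE at_end INTEGER DEFAULT 0;
  DECLARE c1 CURSOR FOR SELECT col1, col2 FROM src_t;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end = 1;

  OPEN c1;
  FETCH c1 INTO v_c1, v_c2;
  WHILE at_end = 0 DO
    INSERT INTO tgt_t (col1, col2) VALUES (v_c1, v_c2);
    FETCH c1 INTO v_c1, v_c2;   -- SQL executed between consecutive INSERTs
  END WHILE;
  CLOSE c1;
END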

If you want to speed up mass inserts in a DPF system, keep in mind that
you are bottlenecking on the SQL procedure logic itself (running on the
coordinator).
You may get big benefits from parallelizing the procedure. That is, CALL
the proc on each data node (or at least on each available coordinator if
you have multiple) and have it process a subset of the data (preferably
local data).
For a computationally heavy batch process reading from a staging table,
I have seen linear scalability with 2 concurrent procedure calls per
data node.
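
As a rough sketch of that pattern (all object names are made up, and I am
assuming the DBPARTITIONNUM function and the CURRENT DBPARTITIONNUM
special register here), each call works only on the rows stored on the
partition it is connected to:

-- Sketch only: each invocation processes just the rows that live on the
-- partition it is connected to, so the work and the coordinator role are
-- spread across the data nodes instead of funnelling through one node.
CREATE PROCEDURE PROCESS_LOCAL_SLICE ()
LANGUAGE SQL
BEGIN
  INSERT INTO TGT_T (col1, col2)
    SELECT col1, col2
    FROM STAGE_T
    WHERE DBPARTITIONNUM(col1) = CURRENT DBPARTITIONNUM;
END

A driver application would then connect once per data node (e.g. by
setting DB2NODE before connecting, or by holding separate connections)
and CALL PROCESS_LOCAL_SLICE concurrently on each.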
Cheers
Serge

--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab

IOD Conference
http://www.ibm.com/software/data/ond...ness/conf2006/
Jul 13 '06 #2

Serge Rielau wrote:
Buffered insert works only if the inserts follow each other immediately
_without_any_SQL_in_between_. [...] You may get big benefits from
parallelizing the procedure. [...]
Thanks Serge.
We have decided to bypass the problems of commingling dynamic SQL and
buffered inserts, and replace the dynamic SQL with static SQL. We will
use a CASE construct to "jump" to the appropriate statically bound
INSERT for however many rows we have available to us from our message
queue (that row count will be a parm, and handled by the CASE logic
around the static INSERTs). We are also mindful of the special
considerations for using buffered inserts, as you have noted and as
documented in ch. 31 of the V8.2 "Programming Client Applications" guide.
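
Roughly along these lines (sketch only; MSG_T, the variable names, and
the SQLSTATE are placeholders, and the real version would cover the full
range of batch sizes we expect from the queue):

-- One statically bound multi-row INSERT per possible row count,
-- selected by the row-count parameter p_num_rows. The v* variables
-- stand for the dequeued column values.
CASE p_num_rows
  WHEN 1 THEN
    INSERT INTO MSG_T (col1, col2) VALUES (v1a, v1b);
  WHEN 2 THEN
    INSERT INTO MSG_T (col1, col2) VALUES (v1a, v1b), (v2a, v2b);
  WHEN 3 THEN
    INSERT INTO MSG_T (col1, col2) VALUES (v1a, v1b), (v2a, v2b), (v3a, v3b);
  ELSE
    SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'unexpected row count';
END CASE;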
I am following up on your comment "That is, CALL the proc on each data
node (or at least on each available coordinator if you have multiple)
and have it process a subset of the data (preferably local data)."
Is this something along the lines of intelligently identifying the
optimal coordinator node for the SQL (INSERT) through the two APIs
sqlugrpn (Get Row Partitioning Number) and
sqlugtpi (Get Table Partitioning Information, called only once), such
that coordinator node and data node are one, with the coordinator role
distributed across servers, thereby minimizing data movement? Or are you
getting at something different when you say parallelizing the procedure?
Perhaps sorting the message input in some way, multi-threading the
INSERTs, or simply having multiple coordinator nodes, along the lines of
the BCU/BPU methodology, etc.?
We expect to be running ESE (DPF)/Linux V9, most recent beta, or the GA
of July 28. We are looking at many millions of SQL INSERTs/day, a
potential choke point.
Thanks
wombat53

Jul 19 '06 #3
