
Performance problems when inserting into a large table

Hi all,

First, apologies if this question looks the same as another one I recently
posted - it's a different thing, but for the same scenario :-).

We are having performance problems when inserting/deleting rows from a large
table.
My scenario:

Table (let's call it FACT1) with 1,000 million rows distributed over 12
partitions (3 physical hosts with 4 logical partitions each).
Overall size of the table is 350 GB. Each night 1.5 million new rows are
added and approximately the same number of old rows are deleted (roll-in /
roll-out with SQL INSERT/DELETE).
The table is stored in an SMS tablespace with 16K page size and 64-page
extent size.
The tablespace has 6 containers on each partition. Each container is on a
separate IBM ESS array.
Prefetch size is 384 (6 containers * 64 pages). Prefetch behaves very well
with these settings (DB2_PARALLEL_IO is set).
DB2 is V8.1 ESE (DPF) FP5 and runs on AIX.
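
For reference, the tablespace geometry described above corresponds roughly to
a definition like the one below; the tablespace name, partition group,
container paths and bufferpool are made-up placeholders, only the page size,
extent size, prefetch size and the six containers per partition come from the
setup above.

-- Sketch only: names, paths and the bufferpool are illustrative; in DPF the
-- container paths would normally carry the " $N" partition expression.
CREATE TABLESPACE TS_FACT1
  IN DATABASE PARTITION GROUP PG_FACT
  PAGESIZE 16K
  MANAGED BY SYSTEM
  USING ('/db2/fact1/cont1 $N', '/db2/fact1/cont2 $N', '/db2/fact1/cont3 $N',
         '/db2/fact1/cont4 $N', '/db2/fact1/cont5 $N', '/db2/fact1/cont6 $N')
  EXTENTSIZE 64
  PREFETCHSIZE 384
  BUFFERPOOL BP16K;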

It takes 7 hours to insert 1.5 million rows into FACT1 and up to 7 hours to
delete the same amount.
The insert is done via INSERT INTO FACT1 ... SELECT * FROM STAGING_TABLE.
Both the fact table and the staging table are in tablespaces in the same
nodegroup and have the same partitioning key.
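
Spelled out, the nightly roll-in/roll-out is essentially the pair of
statements below; the TRANS_DATE column and the retention window are
placeholders, since the real roll-out criterion isn't shown in this post.

-- Sketch only: TRANS_DATE and the 2-year window are illustrative.
INSERT INTO FACT1
  SELECT * FROM STAGING_TABLE;

DELETE FROM FACT1
  WHERE TRANS_DATE < CURRENT DATE - 2 YEARS;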

On a similar table (let's call it FACT2) with a comparable amount of
data/rows and a nearly identical configuration the same process takes only 5
minutes.

The main difference between these two tables is that FACT1 has 7 indexes
defined on it and FACT2 only 4.
One of the indexes in each case is unique, the others are not (all type 2).
There is no clustering index and the APPEND attribute is set to ON.
I'm aware of the pseudo-delete mechanism of type-2 indexes and the
correspondingly longer search time for inserts in the index leaf pages.
But an exclusive lock on the table before inserting/deleting does not change
the required runtime.
(And the docs say that with an X-lock on the table, pseudo-deletes will not
happen.)
Also, after a reorg of the table and indexes the insert runtime is the same
as before.
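
For completeness, the two experiments mentioned above (table-level X lock
before the roll-in, and a reorg of table and indexes) map to roughly these
commands; the schema name is a placeholder, and the table lock is released
again at COMMIT.

-- Sketch only: MYSCHEMA is a placeholder schema.
LOCK TABLE MYSCHEMA.FACT1 IN EXCLUSIVE MODE;
INSERT INTO MYSCHEMA.FACT1 SELECT * FROM MYSCHEMA.STAGING_TABLE;
COMMIT;

REORG TABLE MYSCHEMA.FACT1;
REORG INDEXES ALL FOR TABLE MYSCHEMA.FACT1 ALLOW NO ACCESS;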

Is it possible that the additional index maintenance for FACT1 leads to such
a big difference in runtime?
What exactly happens internally during index maintenance (I searched the
docs but could not find the internals)?
Has anyone seen similar behaviour?
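
One quick way to compare the index maintenance load of the two tables is the
catalog; a query along these lines (schema name is a placeholder) lists each
index with its uniqueness, columns, size and depth:

-- Sketch only: MYSCHEMA is a placeholder schema.
SELECT TABNAME, INDNAME, UNIQUERULE, COLNAMES, NLEAF, NLEVELS, CLUSTERRATIO
  FROM SYSCAT.INDEXES
 WHERE TABSCHEMA = 'MYSCHEMA'
   AND TABNAME IN ('FACT1', 'FACT2')
 ORDER BY TABNAME, INDNAME;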

I can post additional information if required (table and index definitions,
statistics, ...) - but I wanted to keep the posting small in the first place.

TIA for any comments
Joachim

PS: Feel free to send comments by email to joklassen at web dot de
PPS: In parallel we are investigating MDC tables, smaller tables (combined
with a UNION ALL view), and the use of LOAD FROM CURSOR instead of INSERT.
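
For reference, the two alternatives mentioned in the PPS would take roughly
the shape sketched below; every name, column and period here is a made-up
placeholder, and the real fact table of course has far more columns.

-- Sketch only: all names, columns and periods are illustrative.
-- MDC variant: organize the fact table by the roll-out dimension so that
-- old data is clustered in its own blocks.
CREATE TABLE FACT1_MDC (
    CUST_ID   INTEGER       NOT NULL,
    AMOUNT    DECIMAL(15,2),
    MONTH_KEY INTEGER       NOT NULL
)
PARTITIONING KEY (CUST_ID)
ORGANIZE BY DIMENSIONS (MONTH_KEY);

-- UNION ALL variant: one smaller table per period, glued together by a view.
CREATE VIEW FACT1_ALL AS
    SELECT * FROM FACT1_2005M10
    UNION ALL
    SELECT * FROM FACT1_2005M11;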
Nov 12 '05 #1
Joachim Klassen wrote:
> Is it possible that the additional index maintenance for FACT1 leads to
> such a big difference in runtime?
> What exactly happens internally during index maintenance (I searched the
> docs but could not find the internals)?

I'm not privy to the index maintenance internals, but could it be that the
7 indexes cause a spill of some heap? Maybe the sort heap? Have you checked
the snapshots?
Have you verified that the plans are good? You shouldn't see any TQs.
Also, are you sure you don't have any other complicating factors (SQL
functions, triggers, check or RI constraints)? (The plans will show.)

> PPS: In parallel we are investigating MDC tables, smaller tables (combined
> with a UNION ALL view), and the use of LOAD FROM CURSOR instead of INSERT.

Be careful with LOAD FROM CURSOR: the cursor is a bottleneck. To do that
in a scalable fashion you would fire up concurrent LOADs on each node,
filtering the source by DBPARTITIONNUM.
You shouldn't need UNION ALL.

Cheers
Serge

--
Serge Rielau
DB2 SQL Compiler Development
IBM Toronto Lab
Nov 12 '05 #2
Serge,
again thanks for your quick reply :-)

I will try to get snapshot information in the next few days (the problem is
that "get snapshot for all" runs for 1 hour on production and once crashed
the instance in the past :-); that problem is fixed in FP7, which will be
applied in the near future).
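
Rather than a full "get snapshot for all", the numbers being asked about can
be pulled with narrower snapshots, which are much lighter on a production
system; a sketch, with MYDB as a placeholder database name.

-- Sketch only: MYDB is a placeholder database name.
UPDATE MONITOR SWITCHES USING SORT ON BUFFERPOOL ON TABLE ON;
-- sort overflows here would point at a sortheap spill during index maintenance
GET SNAPSHOT FOR DATABASE ON MYDB;
-- physical vs. logical index page reads per bufferpool
GET SNAPSHOT FOR BUFFERPOOLS ON MYDB;
-- rows written / read per table
GET SNAPSHOT FOR TABLES ON MYDB;
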
> Have you verified that the plans are good? You shouldn't see any TQs.
> Also, are you sure you don't have any other complicating factors (SQL
> functions, triggers, check or RI constraints)? (The plans will show.)

The plan looks good (to me). Maybe you can comment on it:

Section Code Page = 819

Estimated Cost = 31926.718750
Estimated Cardinality = 75608.000000

Coordinator Subsection - Main Processing:
(-----) Distribute Subsection #1
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
| | 11, 12

Subsection #1:
( 3) Access Table Name = DTMP1T.STAGING ID = 411,121
| #Columns = 24
| Volatile Cardinality
| Relation Scan
| | Prefetch: Eligible
| Lock Intents
| | Table: Intent Share
| | Row : Next Key Share
( 2) Insert: Table Name = DPERMT.FACT1 ID = 1714,2

End of section
Optimizer Plan:

            INSERT
            (   2)
           /------\
      TBSCAN       Table:
      (   3)       DPERMT
        |          F7KB_F_A_T_Q_B_K
      Table:
      DTMP1T
      F7KB_F_A_T_Q_B_K
> Be careful with LOAD FROM CURSOR: the cursor is a bottleneck. To do that
> in a scalable fashion you would fire up concurrent LOADs on each node,
> filtering the source by DBPARTITIONNUM.

Does that mean

DECLARE C1 CURSOR for select * from stage where dbpartitionnum(column) = 1
LOAD FROM C1 OF CURSOR INSERT INTO FACT1 ... OUTPUT_DBPARTNUMS 1
DECLARE C2 CURSOR for select * from stage where dbpartitionnum(column) = 2
LOAD FROM C2 OF CURSOR INSERT INTO FACT1 ... OUTPUT_DBPARTNUMS 2

and so on?

Thanks
Joachim

"Serge Rielau" <sr*****@ca.ibm .com> schrieb im Newsbeitrag
news:35******** *****@individua l.net... Joachim Klassen wrote:
Hi all,

first apologies if this question looks the same as another one I recently
posted - its a different thing but for the same szenario:-).

We are having performance problems when inserting/deleting rows from a
large table.
My scenario:

Table (lets call it FACT1) with 1000 million rows distributed on 12
Partitions (3 physical hosts with 4 logical partitions each).
Overall size of table is 350 GB. Each night 1.5 Million new rows will be
added
and approx. the same amount of old records will be deleted (Roll in/Roll
out with SQL INSERT/DELETE).
The table is stored in SMS tablespace with 16K Pagesize and 64 Pages
Extentsize.
The tablespace has 6 containers on each partition. Each container is on a
separate IBM ESS array.
Prefetchsize is 384 (6 containers * 64 pages). Prefetch behaves very well
with these settings (DB2_PARALLEL_I O is set)
DB2 is V8.1 ESE (DPF) FP5 and runs on AIX.

It takes 7 hours to insert 1.5 Million Rows into FACT1 and up to 7 hours
to delete the same amount.
The Insert is done via INSERT INTO FACT1 ... SELECT * FROM STAGING_TABLE.
Both the fact and the staging table are in tablespaces in the same
nodegroup and do have the same partitioning key.

On a similar table (lets call it FACT2) with a comparable amount of
data/rows and nearly identical configuration the same process takes only
5 minutes.

The main difference between these two tables is that FACT1 has 7 indexes
defined on it and FACT2 only 4.
One of the indexes in each case is unique, the others not (all type 2).
There is no clustering index and the APPEND attribute is set to ON.
I'm aware of the pseudo-delete mechanism of type-2 indexes and the
corresponding longer search time for insert's in the index leaf pages .
But an exclusive lock on the table before inserting/deleting does not
change the needed runtime.
(And the docs say that with a X-lock on table pseudo-deletes will not
happen).
Also after reorg of table and indexes the insert runtime is the same as
before.

Is it possible that the additional index maintenace for FACT1 leads to
such a longer runtime ?
What exactly happens internal for index maintenance (searched the docs -
but do not found internals)?

I'm not privy of index maintenance internals, but could it be the 7
indexes cause a spill of some heap? Maybe sort heap? Have you checked the
snapshots?
Have you verified that the plans are good? You shouldn't see any TQs.
Also are you sure you don't have any other complicating factors (SQL
Functions, Triggers, check or RI constraints) (The plans will show).
PPS: We are parallel investigating in MDC tables, using smaller tables
(and combining them with a UNION ALL view) and the use of LOAD FROM
CURSOR instead of INSERT

Be careful with LOAD FROM CURSOR, the cursor is a bottle neck. To do that
in a scalable fashion you would fire up concurrent LOADs on each node
filtering the source by DBPARTITION.
You shouldn't need UNION ALL.

Cheers
Serge

--
Serge Rielau
DB2 SQL Compiler Development
IBM Toronto Lab

Nov 12 '05 #3
Joachim Klassen wrote:
> Optimizer Plan: [...]

Doesn't get easier than that...

>> Be careful with LOAD FROM CURSOR: the cursor is a bottleneck. To do that
>> in a scalable fashion you would fire up concurrent LOADs on each node,
>> filtering the source by DBPARTITIONNUM.
>
> Does that mean
>
Connect to node 1:
> DECLARE C1 CURSOR for select * from stage where dbpartitionnum(column) = 1
> LOAD FROM C1 OF CURSOR INSERT INTO FACT1 ... OUTPUT_DBPARTNUMS 1
Connect to node 2:
> DECLARE C2 CURSOR for select * from stage where dbpartitionnum(column) = 2
> LOAD FROM C2 OF CURSOR INSERT INTO FACT1 ... OUTPUT_DBPARTNUMS 2
Connect to node "and so on":
> and so on


Basically you are your own splitter.

This, btw, is a great way to do batch processing with procedures.
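
Spelled out, the per-partition roll-in would then look roughly like the block
below, repeated once per partition (shown for partition 1). CUST_ID stands in
for the real partitioning key and MYDB for the database; the SET CLIENT and
PARTITIONED DB CONFIG details are from memory and should be verified against
the V8 Command Reference for the installed fixpak.

-- Sketch only: run once per partition; CUST_ID, MYDB and the LOAD options
-- are placeholders / from memory - verify against the Command Reference.
-- (Alternatively, export DB2NODE=1 and reconnect to pick the partition.)
SET CLIENT CONNECT_DBPARTITIONNUM 1;
CONNECT TO MYDB;
DECLARE C1 CURSOR FOR
  SELECT * FROM STAGING_TABLE WHERE DBPARTITIONNUM(CUST_ID) = 1;
LOAD FROM C1 OF CURSOR
  INSERT INTO FACT1
  NONRECOVERABLE
  PARTITIONED DB CONFIG OUTPUT_DBPARTNUMS (1);
CONNECT RESET;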

Cheers
Serge

--
Serge Rielau
DB2 SQL Compiler Development
IBM Toronto Lab
Nov 12 '05 #4
