Load performance on AIX w/ SAN

Hi all,

I hardly ever make a post unless I am having a very perplexing issue,
so this one should be good...

I am trying to do a load on an AIX server into a DB2 v9.1 database that
uses SAN storage. The table has a few CLOBs (smallish CLOBs, but we are
storing XML data in non-native format). Here is the load command I am
using:

db2 "load from loadset1 of del modified by chardel| coldel& insert
into testschema.test table nonrecoverable data buffer 240000
disk_parallelis m 32"

Now I am loading about 440 rows/second, which to me is abysmally
slow. The tablespaces I am loading into have 8 containers, and I
believe there are at least 30 disks on the SAN that the data
eventually lives on. So needless to say, there should be plenty of
I/O capacity available for this load.

The data file I am loading from lives on the same filesystem (and
therefore the same logical volume), so I am aware that I/O head-seek
contention could be the problem. However, the way I understand SAN,
all the disks work together and the SAN takes care of data placement,
so head seeks should not become a problem.

I guess what I am asking is a twofold question... first (and the most
appropriate for this forum), is my load command the most appropriate
for what I am trying to do?

Second, does the output of topas below indicate that the disks are
really only working at 20% on average? Or is it a lie since I am
running on SAN? Anyone with similar experience, please help!

If what is below is indeed correct, why is the load not using all the
disk and/or CPU?

Topas Monitor for host: server1        Mon Nov 10 16:09:09 2008   Interval: 2

CPU    User%  Kern%  Wait%  Idle%
cpu1   69.5    6.5    0.0   24.0
cpu2    1.0    4.0    7.5   87.5
cpu4    0.5    4.0   21.5   74.0
cpu0    0.5    3.0    0.0   96.5
cpu5    0.0    3.0   25.5   71.5
cpu3    0.0    2.0   15.5   82.5
cpu6    0.0    2.0   15.0   83.0
cpu7    0.0    5.5   12.5   82.0

EVENTS/QUEUES:   Cswitch 8071, Syscall 5308, Reads 168, Writes 889, Forks 0,
                 Execs 1, Runqueue 1.0, Waitqueue 0.0
FILE/TTY:        Readch 1520.1K, Writech 1976.3K, Rawin 0, Ttyout 624,
                 Igets 0, Namei 24, Dirblk 0
PAGING:          Faults 1032, Steals 390, PgspIn 0, PgspOut 0, PageIn 1267,
                 PageOut 888, Sios 2155
MEMORY:          Real,MB 24576, % Comp 39.9, % Noncomp 13.3, % Client 2.2
PAGING SPACE:    Size,MB 12032, % Used 0.0, % Free 100.0
NFS (calls/sec): ServerV2 0, ClientV2 0, ServerV3 0, ClientV3 0

Network   KBPS   I-Pack  O-Pack  KB-In  KB-Out
en6        0.8     1.0     1.5    0.1     0.7
en5        0.0     0.0     0.0    0.0     0.0
lo0        0.0     0.0     0.0    0.0     0.0

Disk      Busy%   KBPS    TPS   KB-Read  KB-Writ
hdisk17   18.0    944.0  236.0   518.0    426.0
hdisk27   14.5    1.0K   237.0   456.0    584.0
hdisk42   14.0    780.0  195.0   384.0    396.0
hdisk22   13.5    876.0  219.0   424.0    452.0
hdisk12   12.0    892.0  211.5   414.0    478.0
hdisk37   11.0    900.0  214.0   392.0    508.0
hdisk7    11.0    950.0  237.5   480.0    470.0
hdisk32    6.0    874.0  218.5   464.0    410.0
hdisk29    2.0    512.0    2.0   512.0      0.0
hdisk24    2.0    512.0    2.0   512.0      0.0
hdisk9     1.5    256.0    1.0   256.0      0.0
hdisk19    1.0    256.0    1.0   256.0      0.0
hdisk16    0.5      6.0    1.5     0.0      6.0
hdisk15    0.0      0.0    0.0     0.0      0.0
hdisk14    0.0      0.0    0.0     0.0      0.0
hdisk11    0.0      2.0    0.5     0.0      2.0
hdisk1     0.0      0.0    0.0     0.0      0.0
hdisk20    0.0      0.0    0.0     0.0      0.0

Name PID CPU% PgSp Owner
db2sysc 1839138 10.7 0.6 tvpi01
db2sysc 574364 0.1 0.5 tvpi01
db2sysc 1188050 0.1 0.6 tvpi01
db2sysc 663620 0.1 0.7 tvpi01
db2sysc 1225124 0.1 0.6 tvpi01
db2sysc 290888 0.1 0.7 tvpi01
db2sysc 1552616 0.1 0.6 tvpi01
db2sysc 671780 0.1 0.6 tvpi01
db2sysc 1339656 0.1 0.6 tvpi01
db2sysc 1511780 0.1 0.6 tvpi01
db2sysc 1323334 0.1 0.6 tvpi01
db2sysc 1679524 0.1 0.6 tvpi01
topas 1564806 0.1 3.5 tvpi01
db2sysc 983414 0.1 0.6 tvpi01
db2sysc 143704 0.1 0.7 tvpi01
db2sysc 1216702 0.1 0.6 tvpi01
db2sysc 975286 0.0 0.6 tvpi01
db2sysc 872862 0.0 0.7 tvpi01
db2sysc 962762 0.0 0.6 tvpi01
db2sysc 1180054 0.0 0.6 tvpi01



Nov 10 '08 #1
After making my post I realize how difficult the topas output is to
read so let me post the highlights:

8 CPUs: 1 working at ~65%, the rest all hovering near 1%, wait% from 5-25%.

20+ disks: 10 at 10-20% busy, each reading and writing at around 500 KB/s,
the rest hardly doing any work at all.

Memory 24 GB, ~55% free, 0% paging space used;
page in/out 1267/888 respectively.

Hope that helps!
Nov 10 '08 #2
Ian
rd*****@gmail.com wrote:
> Now I am loading about 440 rows/second, which to me is abysmally
> slow.

Can you post any more information, such as table DDL, tablespace
definition, maybe a few sample rows, etc?

Have you tried running the load letting DB2 choose its own defaults for
data buffer / disk_parallelism settings?

Nov 10 '08 #3
On Nov 10, 6:16 pm, Ian <ianb...@mobileaudio.com> wrote:
> Can you post any more information, such as table DDL, tablespace
> definition, maybe a few sample rows, etc?
>
> Have you tried running the load letting DB2 choose its own defaults for
> data buffer / disk_parallelism settings?

I have tried letting the database choose its own load values. I
experience similar performance.

I can't post exact DDL/sample rows for security reasons, but I will do
my best to describe:

The table itself is a parent table to other child tables (but this
should not matter with a load). It has about 25 columns, all of which
are varchar, bigint, char, or date, except for 2, which are CLOBs
(around 3000 characters per CLOB).

It is linked to 3 tablespaces: one for LOBs (32k pagesize), one for
indexes (4k pagesize), one for normal data (4k pagesize). All 3
tablespaces have 8 containers and live on the same filesystem, which is
mounted on a VG that is tied to a SAN with 30+ disks (probably around
100 disks, but I don't know exactly as I am not the SAN admin).

Is that enough info to help?

Also, a new discovery! After loading 15 million records to this table, I
did a few table scans and got the disk usage in topas to report up to
80% busy, so it seems the load really is not using the full capacity of
the disks (the load is using about 15% of the disk). Also, the table
scan uses all 8 CPUs uniformly, whereas the load is only using 1 or 2
CPUs.
Nov 11 '08 #4
On Nov 11, 12:07 am, rdudejr <rdud...@gmail.com> wrote:
> Also, a new discovery! After loading 15 million records to this table,
> I did a few table scans and got the disk usage in topas to report up to
> 80% busy, so it seems the load really is not using the full capacity of
> the disks. Also, the table scan uses all 8 CPUs uniformly, whereas the
> load is only using 1 or 2 CPUs.

Get the DB2 write I/O times for transaction log data and application
data from the database snapshot and work out the average number of
milliseconds per write.

E.g.:

Log write time (sec.ns) = 1730.000000004
Number write log IOs = 1035798

(1730.000000004 * 1000) / 1035798 = 1.67 milliseconds per write.

Direct writes = 40354784
Direct write elapsed time (ms) = 959854

959854 / 40354784 = 0.024 milliseconds per write.
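
For reference, those counters come straight out of a database snapshot.
A minimal sketch (MYDB is a placeholder for the actual database name, and
the timing elements rely on the default monitor switches being on):

db2 "get snapshot for database on MYDB" | grep -E \
  "Log write time|Number write log IOs|Direct writes|Direct write elapsed"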

Ask the storage admins for the I/O stats from the SAN and compare.

Where are the DB2 transaction logs located and how big are they?

Nov 11 '08 #5
On Nov 11, 8:15 am, Patrick Finnegan <finnegan.patr...@gmail.com> wrote:
> Ask the storage admins for the I/O stats from the SAN and compare.
> Where are the DB2 transaction logs located and how big are they?

If the SAN is EMC RAID 5 and the write times are slow, set the following
registry variable at the instance level.

DB2_PARALLEL_IO

Set the following instance parameters.

Enable intra-partition parallelism INTRA_PARALLEL ON
Maximum query degree of parallelism MAX_QUERYDEGREE ANY

Set the following database parameters.

Default query optimization class DFT_QUERYOPT 5
Degree of parallelism DFT_DEGREE ANY

Switch on concurrent i/o at tablespace level for all tablespaces in
the database.

db2 ALTER TABLESPACE USERSPACE1 NO FILE SYSTEM CACHING
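
Put together, the commands would look roughly like this. This is only a
sketch, not verified against your system: MYDB is a placeholder, the
INTRA_PARALLEL parameter takes YES rather than ON, and an instance
restart is needed for it to take effect.

db2set DB2_PARALLEL_IO=*
db2 update dbm cfg using INTRA_PARALLEL YES MAX_QUERYDEGREE ANY
db2stop force
db2start
db2 update db cfg for MYDB using DFT_QUERYOPT 5 DFT_DEGREE ANY
db2 "connect to MYDB"
db2 "alter tablespace USERSPACE1 no file system caching"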

And if the load is still slow, get the details of the SAN config.

E.g.:

SAN is DMX-3.
RAID is RAID 5 (3+1).
Data block size is 256 KB.
Parity block size is 256 KB.
Logical stripe (block) size is 768 KB (256*3).

Recreate the tablespaces as DMS using a page size larger than 4K and
make the extent size the same as the SAN stripe size.

Something like this.

create regular tablespace xxxx
PAGESIZE 16K
MANAGED BY DATABASE
USING (FILE '/home/data/xxxx/DMSCONTAINERS/CONT_01' 2 G,
FILE '/home/data/xxxx/DMSCONTAINERS/CONT_02' 2 G,
FILE '/home/data/xxxx/DMSCONTAINERS/CONT_03' 2 G,
FILE '/home/data/xxxx/DMSCONTAINERS/CONT_04' 2 G)
BUFFERPOOL yyyy
EXTENTSIZE 768
PREFETCHSIZE 192
OVERHEAD 8.6
TRANSFERRATE 0.2
NO FILE SYSTEM CACHING;



Nov 11 '08 #6
On Nov 11, 4:48 am, Patrick Finnegan <finnegan.patr...@gmail.com> wrote:
> Recreate the tablespaces as DMS using a page size larger than 4K and
> make the extent size the same as the SAN stripe size.

These are all really good suggestions, but most we are already
applying.

We are using DMS tablespaces, separating LOB data from index data from
normal data.

We use 8 containers to "trick" the database into thinking there are 8
disks (I picked the number since we have 8 CPUs, so each CPU can
concentrate on sending I/O requests... although I think we could have
just as easily gone with 16 or 24 containers). This trick forces
parallel I/O for normal inserts, if you read up on the DB2
documentation. The extent size, of course, is 8 times the prefetch
size.

I did research last night and found that when you load a table with
LOB data, DB2 forces parallelism to 1 CPU and parallel I/O to 4
disks. This would match up with what I see on the load, where only 4-6
disks are working and only 1 CPU is working. As I said earlier,
inserts use all 8 CPUs and 20+ disks at 80% busy. Basically, the
disks are beating the CPU in regards to throughput. Can anyone
confirm this about loading LOB data into a table? And if this is the
case, it would lead me to believe that an IMPORT is actually faster
in my situation, since it can take advantage of parallelism across all
8 CPUs???
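
One way to test this directly would be to request the parallelism
explicitly on the load and watch the utility while it runs; a rough
sketch (the option values are only examples):

db2 "load from loadset1 of del modified by chardel| coldel& insert
into testschema.testtable nonrecoverable data buffer 240000
cpu_parallelism 8 disk_parallelism 32"
db2 list utilities show detail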
Nov 11 '08 #7
> Where are the DB2 transaction logs located and how big are they?

Also, these are non-logged (NONRECOVERABLE) loads, so that is not a
factor. As I said earlier, inserts are fine; it is the load that is
giving me the issue of not using the available resources.
Nov 11 '08 #8
> db2 ALTER TABLESPACE USERSPACE1 NO FILE SYSTEM CACHING

I assume this is not for the tablespace with LOBs?

LOBs do not use DB2 bufferpools, and therefore file system caching should
be left on for any tablespace that contains LOBs.
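
If caching has already been turned off on that tablespace, it can be
switched back on per tablespace; a sketch (MYDB and LOBSPACE are
placeholders for the actual database and LOB tablespace names):

db2 "connect to MYDB"
db2 "alter tablespace LOBSPACE file system caching"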
Nov 11 '08 #9
On Nov 11, 2:16 pm, rdudejr <rdud...@gmail.com> wrote:
> Can anyone confirm this about loading LOB data into a table? And if this
> is the case, it would lead me to believe that an IMPORT is actually
> faster in my situation, since it can take advantage of parallelism
> across all 8 CPUs?

If the disk write speeds from the database snapshot are less than 0.5
milliseconds per write and the load with LOBs is single-threaded, then
an import may be faster.

Do you have a link for the research indicating single-threading with
LOBs?

If you still want to use LOAD and the instance runs on an LPAR, ask the
AIX admin for at least one dedicated physical CPU on the LPAR. Chances
are you are running with virtual CPUs that may amount to less than one
physical CPU; DB2 may run faster on one physical CPU than on multiple
virtual CPUs, and a single-threaded load will probably run faster on
that physical CPU.
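
A quick way to check what the LPAR actually has is shown below (a sketch
using standard AIX commands; the exact output fields vary by AIX level):

lparstat -i     # entitled capacity, online virtual CPUs, capped/uncapped
lparstat 2 5    # utilization samples, including physc and %entc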

Nov 12 '08 #10
