Bytes | Software Development & Data Engineering Community

Logs and a RAID5 subsystem

I have a new box that was purchased specifically as our dedicated DB2
server. The drive setup is
1 single IDE device holding the filesystem
2 8-disk RAID 5 arrays, one for data, one for logging
The problem I am having is that if the log directory is on the RAID,
database performance is heavily compromised. If I switch the log directory
to the single IDE drive, performance is fine. I have read up on tablespaces,
extent/prefetch sizes, and RAID systems, and none of that seems to apply
here, although I have been experimenting with those settings. The data in
the tablespaces doesn't appear to be the problem; it appears to be
connected only to logging. My test is admittedly artificial; I devised
it when I noticed performance problems with our more complex
applications. I script a 10,000-row insert into a single table. I then
run it on my workstation, on the production server with logs on the RAID,
and on production with logs on the single IDE drive.
Workstation: 1400 inserts/s
Logs on RAID: 135 inserts/s
Logs on IDE: 1700 inserts/s
Interestingly enough, if I hold the commit until the entire 10,000 rows are
inserted, I get 1900 inserts/s even with the logs on the RAID.
Unfortunately, for our existing application autocommit is the norm and is
required. In the future, as we become more comfortable with a transactional
database, that will change.
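The per-row-commit effect described above can be sketched outside DB2. The following uses Python's built-in SQLite module, standing in for DB2 purely as an illustration (an assumption, not the poster's setup), to show why committing every insert forces one durable log flush per row while a single commit for the batch amortizes it:

```python
import sqlite3
import time

def insert_rows(conn, n, commit_each):
    """Insert n rows; commit per-row (autocommit style) or once at the end."""
    cur = conn.cursor()
    start = time.perf_counter()
    for i in range(n):
        cur.execute("INSERT INTO t (val) VALUES (?)", (i,))
        if commit_each:
            conn.commit()   # one log flush per row, like autocommit
    if not commit_each:
        conn.commit()       # one log flush for the whole batch
    return time.perf_counter() - start

if __name__ == "__main__":
    for commit_each in (True, False):
        # Use a file path instead of ":memory:" to see real disk-flush costs.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (val INTEGER)")
        elapsed = insert_rows(conn, 10_000, commit_each)
        rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
        print(f"commit_each={commit_each}: {rows} rows in {elapsed:.3f}s")
        conn.close()
```

On a disk-backed database the per-row-commit run is dominated by synchronous log writes, which is the same mechanism that makes log-device latency so visible in the autocommit benchmark above.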

Todd Huish

Nov 12 '05 #1
Simprini wrote:
I have a new box that was purchased specifically as our dedicated DB2
server. The drive setup is
1 single IDE device holding the filesystem
2 8-disk RAID 5 arrays, one for data, one for logging
The problem I am having is that if the log directory is on the RAID,
database performance is heavily compromised. If I switch the log directory
to the single IDE drive, performance is fine.
...
Todd Huish

Without a detailed environment description it would be difficult to tune
the application for your criteria - and that would not be optimal for
other types of workload, but...

1. Acquire (borrow) another 2 IDE drives - preferably of the same capacity -
and an additional IDE controller;
2. Connect the additional IDE drives as PRIMARY on separate IDE channels
of the additional IDE controller;
3. Using the GUI or the diskpart command line tool (real men use the
command line), create a striped volume across the 2 dynamic disks;
4. Instruct DB2 to use the new striped volume as the location of your logs;
5. Rerun the benchmark.

If you are satisfied with the performance - consider buying an IDE
controller which implements striping in hardware - thus eliminating
the Windows software-striping overhead.
Now concentrate on setting your RAID 5 parameters for optimal
performance (DMS vs SMS, striping containers, number of containers,
extent and prefetch size tuned to the hardware stripe size on the RAID
array, etc.).
If using SMS containers, consider use of the db2empfa tool - and so on...
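Steps 3-4 above can be sketched roughly as follows. The disk numbers, the drive letter E:, and the database name SAMPLE are all illustrative assumptions, not values from the thread; NEWLOGPATH only takes effect once all applications have disconnected and the database is reactivated:

```shell
:: Step 3 - inside diskpart (interactive), stripe the two new disks:
::   select disk 1
::   convert dynamic
::   select disk 2
::   convert dynamic
::   create volume stripe disk=1,2
::   assign letter=E

:: Step 4 - point DB2's active log path at the new striped volume:
db2 update db cfg for SAMPLE using NEWLOGPATH E:\db2logs
db2 deactivate db SAMPLE
db2 activate db SAMPLE

:: Verify - "Path to log files" in the output should now show E:\db2logs:
db2 get db cfg for SAMPLE
```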

Jan M. Nelken
Nov 12 '05 #2
"Simprini" <to********@nohighvolumeemaildeployment.thesellingsource.com>
wrote in message
news:3a***********************@localhost.talkaboutdatabases.com...
I have a new box that was purchased specifically as our dedicated DB2
server. ...
Unfortunately, for our existing application autocommit is the norm and is
required.

Todd Huish

Please consider the following:

1. Dramatically increase the size of your log buffer via the "logbufsz"
database parameter.

2. Increase the "mincommit" database parameter. This parameter allows you to
delay the writing of log records to disk until a minimum number of commits
have been performed. This delay can help reduce the database manager
overhead associated with writing log records. As a result, this will improve
performance when you have multiple applications running against a database
and many commits are requested by the applications within a very short time
frame.

3. Increase the size and number of your log files. Having a small number of
log files can cause problems if DB2 needs to archive them often. You
can experiment with these parameters.

4. I would also take a look at your RAID configuration to see if it can be
improved upon. It sounds like you are taking a very large hit vs. IDE. Some
difference should be apparent, but not as much as you are seeing. I doubt
this part has anything to do with DB2. You are probably not noticing the
data writes because of the DB2 bufferpools, which do asynchronous I/O for
the table and index pages written to disk.
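Suggestions 1-3 map onto ordinary database configuration parameters. A minimal sketch of the corresponding CLP commands follows; the database name SAMPLE and the specific values are illustrative assumptions, not tuning recommendations:

```shell
:: 1. Larger log buffer (LOGBUFSZ is counted in 4 KB pages; 256 pages = 1 MB).
db2 update db cfg for SAMPLE using LOGBUFSZ 256

:: 2. Group commits: let up to 10 commits (or about 1 second) accumulate
::    before the log records are flushed to disk. Only helps under a
::    genuinely high concurrent commit rate.
db2 update db cfg for SAMPLE using MINCOMMIT 10

:: 3. Fewer, larger log files (LOGFILSIZ is also counted in 4 KB pages).
db2 update db cfg for SAMPLE using LOGFILSIZ 10000 LOGPRIMARY 10
```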
Nov 12 '05 #3
Ian
Mark A wrote:
2. Increase the "mincommit" database parameter. This parameter allows you to
delay the writing of log records to disk until a minimum number of commits
have been performed. This delay can help reduce the database manager
overhead associated with writing log records. As a result, this will improve
performance when you have multiple applications running against a database
and many commits are requested by the applications within a very short time
frame.
Be very careful with MINCOMMIT. This can _slow down_ transactions if the
transaction rate is not high enough. i.e., a COMMIT will _wait_ until
at least MINCOMMIT commit statements have been issued OR 1 second has
passed since the COMMIT was issued. Unless you have a high transaction
load this could hurt you.

3. Increase the size and number of your log files. Having a small number of
log files can cause problems if DB2 needs to archive them often. You
can experiment with these parameters.

4. I would also take a look at your RAID configuration to see if it can be
improved upon. It sounds like you are taking a very large hit vs. IDE. Some
difference should be apparent, but not as much as you are seeing. I doubt
this part has anything to do with DB2. You are probably not noticing the
data writes because of the DB2 bufferpools, which do asynchronous I/O for
the table and index pages written to disk.


You might also consider using RAID-1 for your logs. There is certainly a
write performance hit for RAID-5, although this can be mitigated if your
RAID controller has a battery-backed cache.

-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----== Over 100,000 Newsgroups - 19 Different Servers! =-----
Nov 12 '05 #4
> > 2. Increase the "mincommit" database parameter. This parameter allows you
> > to delay the writing of log records to disk until a minimum number of
> > commits have been performed. This delay can help reduce the database
> > manager overhead associated with writing log records. As a result, this
> > will improve performance when you have multiple applications running
> > against a database and many commits are requested by the applications
> > within a very short time frame.
>
> Be very careful with MINCOMMIT. This can _slow down_ transactions if the
> transaction rate is not high enough. i.e., a COMMIT will _wait_ until
> at least MINCOMMIT commit statements have been issued OR 1 second has
> passed since the COMMIT was issued. Unless you have a high transaction
> load this could hurt you.

I don't think you read the original problem correctly.
Nov 12 '05 #5
Simprini wrote:
I have a new box that [...] 2 8 disk RAID 5 arrays


http://www.baarf.com/

Also Google at groups.google.com in comp.databases.informix for author
'Art Kagel' and 'RAID 5'.

YMMV.

--
Jonathan Leffler #include <disclaimer.h>
Email: jl******@earthlink.net, jl******@us.ibm.com
Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/

Nov 12 '05 #6
Ian
Mark A wrote:
Be very careful with MINCOMMIT. This can _slow down_ transactions if the
transaction rate is not high enough. i.e., a COMMIT will _wait_ until
at least MINCOMMIT commit statements have been issued OR 1 second has
passed since the COMMIT was issued. Unless you have a high transaction
load this could hurt you.


I don't think you read the original problem correctly.


I do understand the original problem. I'm just not a fan of tweaking the
dials and knobs for a specific, "artificial" (the original poster's word)
test, when it's easy to misunderstand the implications of the tweak.
Nov 12 '05 #7
This is also not what I am trying to accomplish. My artificial test was
devised only after seeing a major speed hit when switching to the new
machine and testing my data conversion from MySQL to DB2. On my local
workstation I was converting 60 full records/s (each record is the result
of many multi-table joins), which I then munge and insert into the new
DB2 structure. On the new machine I was down to 14 full records/s. This is
what tipped me off to the problem, and after I had changed everything I
could, I simplified the entire problem down to something I could use as a
quick "benchmark" while changing options. From the other posts I can see
it is going to be far more complicated than my boss is going to want to
hear.

Nov 12 '05 #8
"Simprini" <to********@nohighvolumeemaildeployment.thesellingsource.com>
wrote in message
news:8d***********************@localhost.talkaboutdatabases.com...
This is also not what I am trying to accomplish. My artificial test was
devised only after seeing a major speed hit when switching to the new
machine and testing my data conversion from MySQL to DB2. ...


What is not what you are trying to accomplish? Please include some of the
post you are responding to.
Nov 12 '05 #9
On Thu, 13 May 2004 12:20:22 -0600, Mark A <ma@switchboard.net> wrote:
"Simprini" <to********@nohighvolumeemaildeployment.thesellingsource.com>
wrote in message
news:8d***********************@localhost.talkaboutdatabases.com...
This is also not what I am trying to accomplish. My artificial test was
devised only after seeing a major speed hit when switching to the new
machine and testing my data conversion from MySQL to DB2. ...

What is not what you are trying to accomplish? Please include some of the
post you are responding to.


I apologize for that. I posted through a website without realizing that
it was going to a newsgroup. I now have a real news client which should
make this much nicer. I was responding to

On Wed, 12 May 2004 23:25:05 -0700, Ian <ia*****@mobileaudio.com> wrote:
Mark A wrote:
Be very careful with MINCOMMIT. This can _slow down_ transactions if the
transaction rate is not high enough. i.e., a COMMIT will _wait_ until
at least MINCOMMIT commit statements have been issued OR 1 second has
passed since the COMMIT was issued. Unless you have a high transaction
load this could hurt you.

I don't think you read the original problem correctly.

I do understand the original problem. I'm just not a fan of tweaking the
dials and knobs for a specific, "artificial" (the original poster's word)
test, when it's easy to misunderstand the implications of the tweak.


My issue is that even in a real-world application I'm seeing 25% of the
performance I was getting otherwise. Much of this, no doubt, has to do
with my inexperience with DB2. The sinking suspicion in the pit of my
stomach is that we have a server that is not going to perform the way I
need it to.

Todd Huish
Nov 12 '05 #10

This thread has been closed and replies have been disabled. Please start a new discussion.

