
Restoring to Different RAID Stripe / Extent Size

Hi folks,

I've tried reading just about every post I can find on RAID striping /
extent size, etc., and I'm just getting myself more confused than ever.
Here's my situation.

On a Windows 2000 Server (standard edition) system with only 1 GB of RAM
running UDB 8.2.3 Enterprise, I have an existing database that uses two
[2] RAID arrays striped at 8 KB. Presently the tablespaces have a default
extent size of 24 and a default prefetch of 48 [however, all are set to
automatic, which a snapshot reveals is using something like a prefetch of
144].

Array 1 = RAID 1, 2 physical disks, 36 GB (also holds the OS)
Array 2 = RAID 5, 4 physical disks, 109 GB

Array 1 presently stores the index tablespace and the logs.
Array 2 presently stores the data tablespace, the catalogs, etc.; DB2 is
also installed here.

Now, I need to copy this across to another server with the following
config:

Windows Advanced Server, 6 GB RAM.

Two RAID arrays striped at 32 KB [not 8 KB as above].

Array 1 = RAID 1, 2 disks, 9 GB
Array 2 = RAID 5, 4 disks, 109 GB

Now I'm thinking that, because of space, I am going to have to move the
index tablespace back to Array 2 and maybe leave the logs on Array 1. This
would mean Array 2 now holds both the data and index tablespaces.

My main concern is the change in stripe size from 8 KB to 32 KB;
unfortunately I am stuck with this and cannot change it. I was hoping to
perform a redirected restore from a backup, but I am now concerned about
the stripe size and the impact it will have on performance.

I've managed to figure out how to use the AWE and PAE extensions to get
the benefit of larger bufferpools, and I could use some serious assistance
with a suggested configuration for the new server.

Would it be better now, with 6 GB of RAM, to have only a single bufferpool
for both the index and data tablespaces, or to split the two, remembering
that DB2_AWE apparently supports only a single large BP? If split, what
ratio of index BP to data BP would be best? Given 6 GB, what would one set
IBMDEFAULTBP to if, say, a large index/data BP were configured at 3-4 GB,
and what would one then set SORTHEAP/SHEAPTHRES to?
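
For what it's worth, here is the back-of-the-envelope arithmetic I've been
using to turn a target bufferpool size into 4 KB page counts - a rough
sketch only, where the 1.5 GB OS/instance reserve and the 256 MB
IBMDEFAULTBP figure are just my own guesses, not recommendations:

# Rough sizing arithmetic only - not DB2 syntax. Assumes 4 KB pages and a
# guessed 1.5 GB reserve for the OS, DB2 agents, sorts and other heaps.
GB = 1024 ** 3
PAGE_SIZE = 4 * 1024            # 4 KB pages, as in the existing tablespaces

total_ram  = 6 * GB
os_reserve = int(1.5 * GB)      # assumption: OS + instance overhead
large_bp   = int(3.5 * GB)      # candidate large AWE bufferpool (3-4 GB range)
default_bp = 256 * 1024 ** 2    # candidate IBMDEFAULTBP size (my guess)

print("large BP pages:  ", large_bp // PAGE_SIZE)    # 917504 pages
print("default BP pages:", default_bp // PAGE_SIZE)  # 65536 pages
print("left over (GB):  ", (total_ram - os_reserve - large_bp - default_bp) / GB)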

Can I restore a tablespace with extent size 24 onto an array with 32 KB
striping without a reduction in performance? I tried to work the math out:
extent size 24 * 4 KB pages = 96 KB, which is a multiple of 3 * the stripe
size. Is that OK?
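
To make that check explicit (assuming 4 KB pages and three data disks in
the RAID 5, i.e. four disks less one disk's worth of parity):

# Does one extent line up with full stripe rows on the new array?
PAGE_KB    = 4      # tablespace page size
EXTENT     = 24     # pages per extent
STRIPE_KB  = 32     # new array's stripe size
DATA_DISKS = 3      # RAID 5: 4 disks, 1 disk's worth of parity

extent_kb     = EXTENT * PAGE_KB          # 96 KB per extent
stripe_row_kb = STRIPE_KB * DATA_DISKS    # 96 KB per full stripe row
print(extent_kb, stripe_row_kb, extent_kb % stripe_row_kb == 0)   # 96 96 True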

Nov 12 '05 #1
2 Replies


p175 wrote:
> Hi folks,
>
> I've tried reading just about every post I can find on RAID striping /
> extent size, etc., and I'm just getting myself more confused than ever.
> Here's my situation.
>
> On a Windows 2000 Server (standard edition) system with only 1 GB of RAM
> running UDB 8.2.3 Enterprise, I have an existing database that uses two
> [2] RAID arrays striped at 8 KB. Presently the tablespaces have a default
> extent size of 24 and a default prefetch of 48 [however, all are set to
> automatic, which a snapshot reveals is using something like a prefetch of
> 144].
Your RAID 5 array has three disks for data (one for parity), each disk
holding two 4 KB pages in its 8 KB stripe. The 24-page extent will grab
four stripes on each of the three data disks every time an extent is
allocated.

> Would it be better now, with 6 GB of RAM, to have only a single
> bufferpool for both the index and data tablespaces, or to split the two,
> remembering that DB2_AWE apparently supports only a single large BP? If
> split, what ratio of index BP to data BP would be best? Given 6 GB, what
> would one set IBMDEFAULTBP to if, say, a large index/data BP were
> configured at 3-4 GB, and what would one then set SORTHEAP/SHEAPTHRES to?
The data manager does a very good job of keeping the appropriate data in a
single large pool. Multiple pools usually improve the performance of some
tablespaces at the expense of others. This works well for applications
with special needs - e.g. lookup tables that need to be kept in memory to
avoid disk I/O, or a very large table that needs to be restricted to a
small amount of buffer pool to prevent performance impacts on the rest of
the system. There are also many other reasons to use multiple buffer
pools.

The needs of your application will determine what is best for you in the
buffer pools.

SORTHEAP/SHEAPTHRES - You need to look at snapshots (or other sources) to
see how your sorts are performing. Look at the number and percentage of
sorts that are "overflow" sorts to determine whether you need to increase
SORTHEAP. Determine the average and maximum number of concurrent sorts,
and use these numbers together with the SORTHEAP size to determine a value
for SHEAPTHRES.
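
As a rough illustration only - the figures below are made up, and the
counters are the "Total sorts" / "Sort overflows" lines you should see in
a database snapshot - the arithmetic I have in mind is:

# Illustrative numbers only; substitute the values from your own snapshots.
total_sorts    = 12000   # "Total sorts" from the database snapshot
sort_overflows = 900     # "Sort overflows"
max_concurrent = 8       # peak concurrent sorts you observe (or estimate)
sortheap_pages = 512     # current/planned SORTHEAP, in 4 KB pages

overflow_pct = 100.0 * sort_overflows / total_sorts
print("overflow %:", round(overflow_pct, 1))  # a high % suggests raising SORTHEAP

# One common rule of thumb: size the threshold for the sorts expected to
# run at the same time.
print("candidate SHEAPTHRES (pages):", max_concurrent * sortheap_pages)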

> Can I restore a tablespace with extent size 24 onto an array with 32 KB
> striping without a reduction in performance? I tried to work the math
> out: extent size 24 * 4 KB pages = 96 KB, which is a multiple of 3 * the
> stripe size. Is that OK?


The new RAID 5 array will allocate one stripe on each of the three data
disks for each extent. This is a better performance match to the hardware
than the old disk subsystem was.
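
Spelled out with the same assumptions as above (4 KB pages, three data
disks in each RAID 5 array), the per-disk layout of one extent on the two
arrays is:

# How one 24-page extent (96 KB) lands on the three data disks of each array.
PAGE_KB, EXTENT, DATA_DISKS = 4, 24, 3
extent_kb = PAGE_KB * EXTENT                 # 96 KB per extent

for stripe_kb in (8, 32):                    # old array vs. new array
    per_disk_kb = extent_kb / DATA_DISKS     # 32 KB of each extent per data disk
    stripes     = per_disk_kb / stripe_kb    # stripes touched per disk
    print("%2d KB stripe -> %.0f KB/disk = %g stripe(s) per disk"
          % (stripe_kb, per_disk_kb, stripes))
# 8 KB stripe  -> 32 KB/disk = 4 stripes per disk
# 32 KB stripe -> 32 KB/disk = 1 stripe per disk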

Writes to RAID devices can cause performance problems. Writing one byte
to your RAID 5 array could require reading all four disks and will
require writing two (data and parity). You also need to check the
specifications on the RAID 1 array to determine when the controller
returns with a "write successful" signal. It may be after one drive has
completed, or it may require that both drives complete first. In both
cases, a battery-backed write cache can make a big performance difference.
Uncached disk writes can easily take 10 ms or longer, while cached writes
should be well under 500 us.
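
As a back-of-the-envelope illustration of why the cache matters - the
latencies are the ballpark figures above, and the four-I/O pattern is the
classic RAID 5 read-modify-write (read old data and old parity, write new
data and new parity):

# Rough comparison of a small RAID 5 write with and without write-back cache.
reads, writes  = 2, 2     # read old data + old parity; write new data + new parity
uncached_io_ms = 10.0     # ~10 ms per uncached disk I/O (ballpark)
cache_ack_ms   = 0.5      # <500 us when a battery-backed cache acknowledges

# Worst case, if the controller services the four I/Os one after another:
print("uncached, serialized: ~%.0f ms" % ((reads + writes) * uncached_io_ms))
print("write-back cached:    ~%.1f ms" % cache_ack_ms)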
I'd also look at the following article about DB2 memory use:
http://www-128.ibm.com/developerwork...cle/dm-0406qi/
Phil Sherman

Nov 12 '05 #2

Hi Phil, thanks for the response.

The RAID arrays are on an IBM ServeRAID 4M card with 64 MB of
battery-backed cache set to write-back, so the delays should be minimal as
far as the 'write successful' signals are concerned.

Most of the processing done in the database is index querying, or updating
as a result of an index query, hence the large index BP. There are no
online users, and processing is done via batch for the most part, so I do
not need to worry about concurrency, etc. A daily process takes the day's
transactions and updates a heap of existing statistics regarding that
day's activity. Reports are then generated once daily, with ad hoc
reporting during the day.

I'll run the snapshots on it tonight during processing to see how the
changes have impacted the overall performance.

Many thanks,

Tim

Nov 12 '05 #3
