On Aug 18, 1:31 am, peter <peter.p...@gmail.com> wrote:
On Aug 17, 5:05 am, darko <darko.krs...@gmail.com> wrote:
On Aug 16, 8:15 pm, "peanutbuttercravi...@gmail.com"
<peanutbuttercravi...@gmail.com> wrote:
I don't know much about DB2, but I need to move a filesystem from a
striped logical volume to RAID 5. Are there any implications in moving
the filesystems which hold DB2 tables to Sharks? Is there anything I
have to do within DB2? This is an AIX environment.
Thanks a lot.
You should mention things like the DB2 version, OS version, type of
workload, etc.
Generally, with RAID 5 you have a write penalty and reduced reliability
compared to RAID 1+0. You may want to look at www.baarf.com. I
haven't dealt with Sharks so far, but generally there are
recommendations to set the extent size equal to (or a small multiple
of) the physical stripe size set for the LUN on the storage, and the
prefetch size to a small multiple of the extent size. For a start, the
number of I/O servers may be set to 1 or 2 above the number of active
physical disks (in RAID 5 that is the number of disks in the array on
which the LUN is created, minus 1).
It is advisable to keep at least the logs off RAID 5, as well as table
spaces with intensive write activity, especially if you have an OLTP
system. You may consider using CIO if you have AIX 5.3, or 5.2 with a
certain ML.
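The extent/prefetch/ioserver rules of thumb above could look roughly like this in DB2 CLP terms. This is only a sketch: the table space name, database name, container path, sizes, and the assumed stripe geometry are all illustrative, not measured recommendations for any particular Shark LUN.

```shell
# Hypothetical geometry: one LUN on a RAID 5 array of 9 disks (8 active
# data disks) with a 256 KB stripe unit. All names, paths, and sizes
# below are illustrative assumptions.

# With the default 4 KB page size, EXTENTSIZE 64 = 256 KB (one stripe
# unit) and PREFETCHSIZE 512 = 8 extents, one per active data disk.
# NO FILE SYSTEM CACHING enables CIO on AIX 5.3 (or 5.2 with the
# appropriate ML).
db2 "CREATE TABLESPACE data_ts
     MANAGED BY DATABASE USING (FILE '/db2/cont/data_ts.dat' 10G)
     EXTENTSIZE 64 PREFETCHSIZE 512
     NO FILE SYSTEM CACHING"

# One or two I/O servers above the number of active physical disks.
db2 "UPDATE DB CFG FOR mydb USING NUM_IOSERVERS 10"
```

The numbers would of course have to be matched to the actual stripe unit and disk count reported for the LUN.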
I believe there are some good materials about environments similar to
yours on IBM's developerWorks site.
Darko Krstic
Putting logs on RAID 5 is not a problem so long as the SAN system
handles caching of writes. In fact, if it does, you will see no
difference between RAID 5 and RAID 1+0. As you are talking Shark, it
does this, so you should see a performance boost. Similarly, the Shark
has prefetch logic, so you should see performance gains there too. On
the question of RAID 5 versus RAID 1+0: if we are talking about the
same number of physical disks, you can actually get better performance
out of RAID 5 than RAID 1+0 in many scenarios, depending upon how you
configure things and on your workload profile (more so with intelligent
SANs). The main consideration is what happens when a disk fails, as
RAID 5 leaves you exposed to total loss of data if a second disk
failure occurs before the failed disk is rebuilt from parity.
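For reference, the classic back-of-envelope arithmetic behind the same-number-of-disks comparison can be sketched as follows. This deliberately ignores the controller-cache effects described above (which is exactly what a cached Shark/SAN hides), and the disk count and per-disk IOPS figure are hypothetical.

```python
# Rough random-small-write throughput for the same number of physical
# disks, with no write cache (an assumption). Each small write costs
# 2 disk I/Os on RAID 1+0 (data + mirror) and 4 on RAID 5 (read old
# data, read old parity, write new data, write new parity).

def write_iops(disks: int, iops_per_disk: int, penalty: int) -> float:
    """Aggregate random-write IOPS the array can sustain."""
    return disks * iops_per_disk / penalty

disks, per_disk = 8, 150  # hypothetical: 8-disk array, 150 IOPS per disk
raid10 = write_iops(disks, per_disk, penalty=2)
raid5 = write_iops(disks, per_disk, penalty=4)
print(f"RAID 1+0: {raid10:.0f} write IOPS")  # 600
print(f"RAID 5:   {raid5:.0f} write IOPS")   # 300
```

Random reads, by contrast, spread over all disks in either layout, which is why read-heavy workloads close the gap.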
I was not thinking of the same number of disks, but of the same usable
capacity :-) For reliability and performance one has to pay. Like you
mentioned, when you lose one disk in a RAID 5 array, any further disk
loss before rebuilding finishes means disaster. And the risk resulting
from partial media failure is also higher with RAID 5.
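To make the same-usable-capacity framing concrete, here is a quick sketch of the disk counts involved, assuming equal-size disks and a single RAID 5 array with one disk's worth of parity overhead:

```python
# Disks needed for the same usable capacity, measured in "disks' worth"
# of space. RAID 1+0 yields n/2 usable out of n disks; a single RAID 5
# array yields n-1 usable out of n.

def disks_needed_raid10(usable_disks_worth: int) -> int:
    return 2 * usable_disks_worth

def disks_needed_raid5(usable_disks_worth: int) -> int:
    return usable_disks_worth + 1

for usable in (4, 7):
    print(usable, disks_needed_raid10(usable), disks_needed_raid5(usable))
# e.g. for 7 disks' worth of usable space: RAID 1+0 needs 14 physical
# disks, RAID 5 only 8 -- the price paid for mirroring.
```

So at equal usable capacity, RAID 1+0 buys its reliability and write performance with nearly double the spindle count.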
Writing to RAID 5 stresses the storage's caching subsystem harder than
writing to RAID 1+0. Depending on the total configuration of the SAN
(the storage units and the hosts using their LUNs, but above all the
overall activity), it may or may not be a problem. If the caching
subsystem is not already stressed, then it should not be a problem, but
if you have a lot of write activity to RAID 5 LUNs, there is a
watermark for good performance somewhere. There are RAID 5 write
optimizations for when full-stripe writes are used, which help
substantially. EMC has them; I am not sure about others, but they
probably do.
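The full-stripe-write optimization mentioned above can be illustrated with simple I/O counting, under the usual assumption of an N-disk RAID 5 array (N-1 data strips plus one parity strip per stripe):

```python
# Physical I/Os to update one stripe's worth of data on an N-disk RAID 5
# array. With read-modify-write, each small write costs 4 I/Os (read old
# data, read old parity, write new data, write new parity). A full-stripe
# write lets the controller compute parity from the new data alone, so
# no reads are needed: just one write per member disk.

def rmw_ios(n_disks: int) -> int:
    return 4 * (n_disks - 1)  # (n-1) small writes, 4 I/Os each

def full_stripe_ios(n_disks: int) -> int:
    return n_disks  # n-1 data writes plus 1 freshly computed parity write

n = 8
print(rmw_ios(n), full_stripe_ios(n))  # 28 vs 8 physical I/Os
```

This is also why sequential, large-block loads (like a nightly DW load) suffer far less from RAID 5 than random OLTP updates do.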
I am planning a new DB2 configuration with a DW type of load. Data will
be loaded into tables only nightly. Having a limited number of disks
available, I've decided to use RAID 1+0 for logs and temporary table
spaces (expecting some heavy write activity to them during report
creation), RAID 5 for the dimensional and fact tables' table spaces,
and RAID 3 for the on-disk backup file system. If I could, I would use
RAID 1+0 everywhere, but some trade-offs have to be made.
Darko Krstic