Bytes IT Community

Dell Raid 0+1

I am recommending that we change our RAID configuration on some of our
servers from RAID 5 to RAID 0+1; we are experiencing severe I/O
bottlenecks.

Our hardware guys are pushing back a bit. They claim that Dell has a
weird implementation of 0+1, and told me something about one drive
filling up before the controller begins writing to the next, which
sounds more like concatenation (spanning) than striping. They claimed
that this gets rid of most of the benefits of 0+1.

I know that 0+1 is not as good as 10 for availability, fault tolerance,
and rebuilding, but shouldn't the write throughput be about the same?
Setup:
PowerEdge 2850
PowerVault 220S
PERC 4/DC (Controller 1)
PERC 4e/Di (Controller 0)
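For context on the throughput question, here is a back-of-envelope sketch of the small-random-write penalty, assuming the textbook costs of roughly 4 back-end I/Os per write on RAID 5 versus 2 on a mirrored stripe; the disk count and per-disk IOPS are invented for illustration, not measured on the hardware above:

```python
# Back-of-envelope random-write arithmetic; the 6-disk / 150 IOPS
# figures are illustrative, not measured on the PowerEdge setup above.

def effective_write_iops(disks: int, iops_per_disk: int, write_penalty: int) -> float:
    """Usable random-write IOPS after the RAID write penalty."""
    return disks * iops_per_disk / write_penalty

# RAID 5: each small write costs ~4 back-end I/Os
# (read data, read parity, write data, write parity).
print(effective_write_iops(6, 150, 4))  # 225.0

# RAID 0+1 / 10: each write costs ~2 back-end I/Os (one per mirror side).
print(effective_write_iops(6, 150, 2))  # 450.0
```

So with an ideal striped implementation, the mirrored layout should roughly double random-write throughput over RAID 5 on the same spindles, which is the behaviour the question assumes.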

Mar 17 '06 #1
5 Replies


Hi

I don't know about the Dell configuration, but you would probably want to
look at all the other alternatives before doing this.

For instance, you don't say exactly what the disc configuration is: have
you put database data files and database log files on separate spindles?
Have you got different filegroups on different spindles? Have you split
filegroups into multiple files on different spindles? Are the system
databases on different spindles? Is tempdb on different spindles? Have you
thought about adding extra discs to the current array? Have you checked to
see if rewriting the code would reduce the I/O?
John


Mar 18 '06 #2

Very good suggestions, John! I agree that log files should be on a
separate spindle, and that I should break the filegroups down into
multiple physical files, etc.

I will implement all of these suggestions as soon as I get the
opportunity; however, I do not think they will increase the throughput
enough to meet our needs.

We are only using two RAID 5 arrays right now, which somewhat limits the
load-balancing options. I am really pushing for another RAID controller
card, or even two, so I can properly balance the system.

There is some bad code out there, but I have rewritten most of it and
I'm sure I can make further improvements, though I do not think it is
a major issue anymore.

Mar 20 '06 #3

Hi Dave

Use perfmon to get the information you require regarding disc performance;
that will show whether new hardware is needed. Look at disc queue length,
idle time, read/write rates, etc. See
http://www.sql-server-performance.co...nce_audit2.asp
for more.

John
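A minimal sketch of the kind of check John describes, assuming the common rules of thumb that a sustained average disk queue length above ~2 per spindle, or idle time below ~20%, suggests a disk bottleneck; the function name, thresholds, and sample figures are mine, not taken from perfmon itself:

```python
# Rule-of-thumb interpretation of two perfmon counters; the thresholds
# (2 outstanding I/Os per spindle, 20% idle) are conventional guidance,
# not hard limits from any vendor documentation.

def disk_bottleneck(avg_queue_length: float, spindles: int, pct_idle_time: float) -> bool:
    """Flag a likely disk bottleneck from perfmon counter averages."""
    queue_per_spindle = avg_queue_length / spindles
    return queue_per_spindle > 2 or pct_idle_time < 20

# A 6-spindle array with a deep queue and almost no idle time:
print(disk_bottleneck(avg_queue_length=40, spindles=6, pct_idle_time=5))   # True
# A lightly loaded array:
print(disk_bottleneck(avg_queue_length=3, spindles=6, pct_idle_time=90))   # False
```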


Mar 25 '06 #4

I have already done that. I am getting numbers around 50K on % disk
write time while I run my ETL.


Mar 27 '06 #5

Hi Dave

I am not sure where 50K comes in when you are talking about a percentage?

John
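One plausible reading of the 50K figure, assuming it came from the % Disk Write Time counter: perfmon derives that counter from the average disk write queue length scaled to a percentage, so on a busy array it is not capped at 100, and enormous values simply mean a deep write queue. A sketch of the relationship:

```python
# Perfmon's % Disk Write Time is derived from the average write queue
# length scaled to a percentage, so it can far exceed 100% under load.

def pct_disk_write_time(avg_write_queue_length: float) -> float:
    """Approximate % Disk Write Time from the average write queue length."""
    return avg_write_queue_length * 100

# A sustained write queue ~500 deep shows up as "50K" percent:
print(pct_disk_write_time(500))  # 50000.0
```

If that is what is happening here, the counter to trust is the queue length itself, compared against the number of spindles in the array.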


Apr 1 '06 #6
