From: Leythos on
In article <MPG.267d8077ef1a32df98a3dc(a)us.news.astraweb.com>,
spam999free(a)rrohio.com says...
>
> In article <61E010EC-FB2C-490B-A08C-81CE49A236C5(a)microsoft.com>,
> cgaliher(a)gmail.com says...
> >
> > Without turning this into too much of an inside-baseball type of
> > conversation, that is simply not true. SAS is also more reliable for many
> > reasons, but off the top of my head, here are a few:
> >
> > SAS is full-duplex, SATA is half-duplex. This has had demonstrable effects
> > when a RAID controller has to rely on its battery after a power outage and
> > power restore to finish replaying the cached commands.
>
> Duplex has nothing to do with reliability if there isn't a failure.
>
> Battery is also an issue, many SATA controllers have a battery option.
>
> > SAS generally uses tagged command queueing. SATA uses native command
> > queueing. For reliability, lab tests have shown tagged command queueing to
> > be more reliable.
>
> More reliable? When, in a lab-induced failure mode that can't be
> duplicated by 99.99999% of real-world conditions?
>
> > SAS hardware, including most RAID backplanes, support multipath I/O. A
> > failure on a backplane properly supporting SAS can be bypassed and the array
> > will operate in a degraded state. A failure on a backplane running SATA
> > drives affects all drives at the point of failure *and* beyond, which in
> > RAID5 can be significant and cause data loss.
>
> Sorry to say, but if you have a failure of a typical MOTHERBOARD
> backplane, like those on a cheap server board, you have a lot more
> problems than just RAID - in the real world.
>
> As for SAS operating in a degraded state after a backplane failure: not
> without a redundant backplane. I've never had "part" of a backplane
> fail on any DELL or IBM server - it's always been an all-or-nothing
> failure.
>
> Since the OS would normally be on a RAID-1, this isn't much of an issue
> for a typical cheap SATA solution. You're going to have a RAID-1 for
> the O/S and then a RAID-1 or RAID-5 for the data; the O/S array will
> operate in non-RAID mode and you can restore the RAID-5 array from
> backup, the same as you would after a failed SAS backplane. The
> difference is that the SAS system is most likely to have the RAID-1 OS
> and RAID-5 on the same backplane or controller, where a cheap server
> would have a controller for the RAID-1 and another controller for the
> RAID-5.
>
> >
> > SAS uses the SCSI command set (hence the name Serial Attached SCSI) which is
> > *far* more robust in error recovery than SATA's reliance on ATA SMART
> > commands.
> >
> > No, SAS has real error-recovery benefits beyond just performance that make
> > SATA unsuitable in any serious RAID configuration.
>
> I'm sorry to inform you that in a real-world setup, there is little
> difference other than I/O performance. I say this from experience with
> hundreds of SATA and SAS arrays, across everything from mom-and-pop
> servers to $35K IBM servers with a dozen drives and $24K Dell servers
> with 8 drives - SAS/SCSI, SATA.... The real issue is I/O performance.
>
> The drives, if you buy commercial drives, last just as long.
>
> You do know that there are very high quality IDE and SATA RAID
> controllers that support hot-swap, battery backup, predictive failure
> alerts, and multiple online hot spares....
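
As an aside (not from either poster), the two-array layout quoted above
can be sketched numerically. The drive size and drive counts below are
hypothetical examples, not figures from the thread:

```python
# Illustrative sketch of the RAID-1 (O/S) + RAID-5 (data) layout
# described above. Drive size and counts are hypothetical.

def raid1_usable_gb(drive_gb: int) -> int:
    """RAID-1 mirrors a pair: usable space is one drive's worth."""
    return drive_gb

def raid5_usable_gb(drive_gb: int, drives: int) -> int:
    """RAID-5 stripes data with one drive's worth of parity overhead."""
    if drives < 3:
        raise ValueError("RAID-5 needs at least 3 drives")
    return drive_gb * (drives - 1)

# Both levels survive exactly one drive loss without data loss:
TOLERATED_FAILURES = {"RAID-1": 1, "RAID-5": 1}

# e.g. two 500 GB drives for the O/S, four 500 GB drives for data:
os_array = raid1_usable_gb(500)       # 500 GB usable
data_array = raid5_usable_gb(500, 4)  # 1500 GB usable
```

Either array keeps running degraded after a single drive failure; it is a
second failure in the same array that forces the restore-from-backup path
described above.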

Just to point out something about Drives:

Seagate Constellation ES (Large Business) Drives
Interface: 6-Gb/s SAS
Cache: 16MB
Capacity: 500GB
Annual Failure Rate: 0.73%
Sustained Xfer Rate: 150 MB/s

Seagate Barracuda desktop Drives:
Interface: 3Gb/s SATA
Cache: 16MB
Capacity: 500GB
Annual Failure Rate: 0.32%
Sustained Xfer Rate: not shown

Seagate Barracuda XT SATA 6Gb/s 2TB drives
Interface: 6-Gb/s SATA
Cache: 64MB
Capacity: 2TB
Annual Failure Rate: Unknown, too new at this time
Sustained Xfer Rate: 138 MB/s

So, looking at these drives - typical drives with about the same specs -
the SATA drives have a lower failure rate, but they also have a lower
sustained transfer rate.

I know we can find drives with faster spindles, but I went for drives
that were close to the same ratings on speed and size, with all the
stats in one place from one vendor.
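
A quick back-of-the-envelope check (mine, not from the spec sheets) turns
these per-drive AFR figures into a whole-array annual failure probability.
It assumes independent failures, which understates real-world correlated
failures, and the 4-drive array size is just an example:

```python
# Rough annual probability that at least one drive in an array fails,
# given a per-drive annual failure rate (AFR). Assumes independence.

def p_any_failure(afr: float, drives: int) -> float:
    """1 - P(no drive fails) = 1 - (1 - AFR)^n."""
    return 1.0 - (1.0 - afr) ** drives

# Hypothetical 4-drive arrays using the AFR figures quoted above:
sas_array = p_any_failure(0.0073, 4)   # Constellation ES: ~2.9%/year
sata_array = p_any_failure(0.0032, 4)  # Barracuda:        ~1.3%/year
```

The per-drive gap of 0.41 points widens to roughly 1.6 points for a
4-drive array, so small AFR differences compound as arrays grow.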



--
You can't trust your best friends, your five senses, only the little
voice inside you that most civilians don't even hear -- Listen to that.
Trust yourself.
spam999free(a)rrohio.com (remove 999 for proper email address)
From: Cliff Galiher - MVP on
Ummmm....okay, I had to laugh at this (and no, I don't mean offense....)

> Duplex has nothing to do with reliability if there isn't a failure.

Ummmmm........


Yeahhhh.......


***RAID*** has nothing to do with reliability if there isn't a failure!!!
The whole point of this discussion is that we are talking about what happens
when there *IS* a failure. 'nuf said. I'm not getting into yet another
technocratic religious debate. I've made my case, others can choose.

--
Cliff Galiher
Microsoft has opened the Small Business Server forum on Technet! Check it
out!
http://social.technet.microsoft.com/Forums/en-us/smallbusinessserver/threads
Addicted to newsgroups? Read about the NNTP Bridge for MS Forums.

"Leythos" <spam999free(a)rrohio.com> wrote in message
news:MPG.267d8077ef1a32df98a3dc(a)us.news.astraweb.com...
<snipped - full quote of the previous post trimmed>

From: Cliff Galiher - MVP on
I'm not talking about failure rates of the drives. Clearly any drive can
fail at any time. I'm talking about what is better at protecting the data
and what recovers from failures better when a failure occurs. Read any of
the technical blogs, forums, and "in the know" sources, and SAS wins in
reliability every time. Ars Technica, Tom's Hardware, etc. *all* agree on
this point. Now maybe you like to tilt at windmills, and that is your
choice. What I'm saying is that you said in a previous post that the *only*
difference is performance. THAT is clearly not correct. You may feel that
the types of failures where SAS and SATA are different are rare. You may
feel that the cost difference in hardware doesn't justify those edge cases.
You can make many claims that I'd not argue. I may not agree with you, but I
wouldn't argue.

But to simply dismiss something and pretend it doesn't exist *at all* is not
accurate, so I'm going to point out that what you are currently stating
flies in the face of all the data out there. Accuracy matters when it comes
to these sorts of debates, as it makes a difference in allowing the people
who *don't* spend their days breathing technology to make a truly informed
choice.

With that said, I've made my case and I'm done regardless.



"Leythos" <spam999free(a)rrohio.com> wrote in message
news:MPG.267d8b79c04eeef198a3dd(a)us.news.astraweb.com...
<snipped - full quote of the previous post trimmed>

From: Leythos on
In article <7C7E5597-4880-4650-864E-7100C2B05566(a)microsoft.com>,
cgaliher(a)gmail.com says...
>
> Ummmm....okay, I had to laugh at this (and no, I don't mean offense....)
>
> > Duplex has nothing to do with reliability if there isn't a failure.
>
> Ummmmm........
>
>
> Yeahhhh.......
>
>
> ***RAID*** has nothing to do with reliability if there isn't a failure!!!
> The whole point of this discussion is that we are talking about what happens
> when there *IS* a failure. 'nuf said. I'm not getting into yet another
> technocratic religious debate. I've made my case, others can choose.

I took your "duplex" point to mean something other than what I now see
you meant.

From: Leythos on
In article <AFBE67F3-A507-4B02-9BF2-6706B372FE47(a)microsoft.com>,
cgaliher(a)gmail.com says...
> But to simply dismiss something and pretend it doesn't exist *at all* is not
> accurate, so I'm going to point out that what you are currently stating
> flies in the face of all the data out there.
>

I'm stating, based on the years of experience I have across multiple
O/S's, controllers, etc., that there is no significant difference at
the cost level that SBS is marketed for.

Not many SBS users are going to spend $10,000+ for a server, and I'm
betting that the majority are spending less than $6,000 for their
servers. When it comes down to servers in the lower cost ranges, you
won't see much difference in failure recovery until you get to the
bottom, where some people are using the RAID controllers built into
cheap motherboards.
