From: Mike Bird on
On Mon April 26 2010 14:44:32 Boyd Stephen Smith Jr. wrote:
> the chance of a double failure in a 5 (or less) drive array is minuscule.

A flaky controller knocking one drive out of an array and then
breaking another before the rebuild completes can really ruin your day.

A rebuild is generally the period of most intense disk activity, so
expect failures to be much more likely while a rebuild is running.

--Mike Bird


From: Tim Clewlow on

> I don't know what your requirements / levels of paranoia are, but
> RAID 5 is
> probably better than RAID 6 until you are up to 6 or 7 drives; the
> chance of a
> double failure in a 5 (or less) drive array is minuscule.
>
..
I currently have 3 TB of data with another 1 TB on its way fairly
soon, so 4 drives will become 5 quite soon. Also, I have read that a
common drive rating is an unrecoverable read error rate of 1 bit in
10^14 - that is, roughly one bad bit per 12.5 TB read. While doing a
rebuild across 4 or 5 drives, that means it is likely to hit an
unrecoverable read. With RAID 5 (no redundancy during the rebuild,
because of the failed drive) that would be game over. Is this correct?
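
Back of the envelope, assuming the rebuild has to read about 4 TB
(the four surviving 1 TB drives - adjust to taste):

  # odds of at least one unrecoverable read over N TB, at 1 error per 1e14 bits
  awk 'BEGIN { tb = 4; bits = tb * 8e12
               p = 1 - exp(bits * log(1 - 1e-14))
               printf("P(>=1 URE over %g TB read) = %.2f\n", tb, p) }'

That prints about 0.27, so even at 4 TB it is already roughly a
one-in-four chance, and it climbs quickly as the array grows.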

Tim.


From: Alexander Samad on
Hi

I recently (last week) migrated from 10 x 1 TB drives to an Adaptec
51645 with 5 x 2 TB drives.

In my experience I can't get grub2 and the Adaptec to work together,
so I am booting from an SSD I had.

I carved the 5 x 2 TB up into a 32 GB RAID 1E volume (a striped
mirror layout) to boot from, mirrored against my SSD. The rest went
into a RAID 5 until I had moved my data over - that took around 18
hours. My data was originally in a VG on a PV on my mdadm RAID 6, so
I made the Adaptec 5 TB volume into a PV, added it to the VG, and
then did a pvmove - that took time :)
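
The LVM side of it was roughly this (device names and the VG name
here are made up, mine differ):

  parted /dev/sdb mklabel gpt            # over 2 TB, so GPT rather than MBR
  parted /dev/sdb mkpart primary 0% 100%
  pvcreate /dev/sdb1                     # turn the new array into a PV
  vgextend vg0 /dev/sdb1                 # add it to the existing VG
  pvmove /dev/md0 /dev/sdb1              # migrate every extent off the old PV
  vgreduce vg0 /dev/md0                  # then drop the old mdadm PV

pvmove runs online, which is why the LVs stayed usable for the whole
18 hours.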

Next I went and got 2 more 2 TB drives and did an online upgrade from
the 5 x 2 TB RAID 5 to a 7 x 2 TB RAID 6, now sitting at 9 TB - the
reshape took about a week to settle.

Other quirks: I had to use parted to put a GPT partition table on the
drive, as it is over the size limit for MBR. I also had a bit of a
scare when I resized my PV's partition with parted - parted commits
each command as soon as it is typed. I had to delete my PV partition
and then recreate it, which is the same as deleting and recreating a
partition with fdisk, except fdisk doesn't actually do anything until
write time.... potentially 5 TB of info gone. I could not use the
resize command, as it does not understand LVM PVs.
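
For anyone repeating that trick: the delete/recreate is safe as long
as the new partition starts at the same sector (the 2048s start below
is only an example - read yours off print first):

  parted /dev/sdb unit s print              # note the exact start sector
  parted /dev/sdb rm 1                      # careful: takes effect immediately
  parted /dev/sdb mkpart primary 2048s 100% # same start, new larger end
  pvresize /dev/sdb1                        # let LVM pick up the new size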

But all is okay now - resized and ready.

I chose RAID 6 because it's just one more drive, and I value my data
more than the cost of another drive. I also have 3 x 1 TB in the box
in a RAID 1E setup, which is a striped mirror arrangement.

I don't use battery backup in the machine; I have a UPS attached,
which can run it for 40 minutes on battery.

Note - I also back up all my data to another server close by but in
another building, and all the important stuff gets backed up off
site. I use rdiff-backup.
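
Something like this, run from cron (host and paths made up):

  rdiff-backup /srv/data backuphost::/srv/backup/data

rdiff-backup keeps reverse increments, so the latest copy is a plain
mirror and older versions can still be pulled back.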

Alex


On Tue, Apr 27, 2010 at 2:11 PM, Tim Clewlow <tim(a)clewlow.org> wrote:
>
>> I don't know what your requirements / levels of paranoia are, but
>> RAID 5 is
>> probably better than RAID 6 until you are up to 6 or 7 drives; the
>> chance of a
>> double failure in a 5 (or less) drive array is minuscule.
>>
> .
> I currently have 3 TB of data with another 1 TB on its way fairly
> soon, so 4 drives will become 5 quite soon. Also, I have read that a
> common drive rating is an unrecoverable read error rate of 1 bit in
> 10^14 - that is, roughly one bad bit per 12.5 TB read. While doing a
> rebuild across 4 or 5 drives, that means it is likely to hit an
> unrecoverable read. With RAID 5 (no redundancy during the rebuild,
> because of the failed drive) that would be game over. Is this correct?
>
> Tim.


From: Mark Allums on
On 4/26/2010 2:29 PM, Stan Hoeppner wrote:
> Mark Allums put forth on 4/26/2010 12:51 PM:
>
>> Put four drives in a RAID 1, you can suffer a loss of three drives.
>
> And you'll suffer pretty abysmal write performance as well.
>
> Also keep in mind that some software RAID implementations allow more than
> two drives in RAID 1, most often called a "mirror set". However, I don't
> know of any hardware RAID controllers that allow more than 2 drives in a
> RAID 1. RAID 10 yields excellent fault tolerance and a substantial boost to
> read and write performance. Anyone considering a 4 disk mirror set should
> do RAID 10 instead.
>


Yeah. All good points.

I never make things clear: the OP was thinking of doing software RAID
with Linux md, and most of what I meant to say was (and is) with that
in mind.
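
With md, both layouts are one command, e.g. (device names assumed):

  mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[b-e]1  # 4-way mirror
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1  # RAID 10 instead

Same four disks either way; the RAID 10 gives twice the usable space
and far better writes, at the cost of surviving fewer failure
combinations.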


MAA



From: Mark Allums on
On 4/26/2010 11:11 PM, Tim Clewlow wrote:
>
>> I don't know what your requirements / levels of paranoia are, but
>> RAID 5 is
>> probably better than RAID 6 until you are up to 6 or 7 drives; the
>> chance of a
>> double failure in a 5 (or less) drive array is minuscule.
>>
> .
> I currently have 3 TB of data with another 1 TB on its way fairly
> soon, so 4 drives will become 5 quite soon. Also, I have read that a
> common drive rating is an unrecoverable read error rate of 1 bit in
> 10^14 - that is, roughly one bad bit per 12.5 TB read. While doing a
> rebuild across 4 or 5 drives, that means it is likely to hit an
> unrecoverable read. With RAID 5 (no redundancy during the rebuild,
> because of the failed drive) that would be game over. Is this correct?


Uh. Well, I guess I would ballpark it similarly. Large arrays are
asking for trouble. For serious work, separate out the storage system
from the application as much as possible. Be prepared to spend money.

For DIY, always pair those drives. Consider RAID 10, RAID 50, RAID
60, etc. Alas, pairing doubles the number of drives, and more drives
sharply decreases the time until the first failure in the box - the
very outcome you want to avoid.
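
Back of the envelope (assuming a 1,000,000-hour MTBF per drive and
independent failures - real drives are worse, and failures correlate):

  # expected hours until the first of n drives fails scales as 1/n
  awk 'BEGIN { mtbf = 1e6; n = 8
               printf("first failure among %d drives: ~%.0f hours\n", n, mtbf/n) }'

Eight drives get you an expected first failure in about 125,000 hours
(14 years or so), versus a million hours for one drive - still fine,
but that trend is what bites as arrays grow.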

But you have to start somewhere.

For offline storage, tape is still around...


MAA


I get the feeling some of this is overthinking. Plan ahead, but don't
spend money until you need to.

