From: Canuck57 on
On 26/03/2010 3:33 PM, Ian Collins wrote:
> On 03/27/10 02:55 AM, Michael Laajanen wrote:
>> Hi,
>>
>> I have read that a ZFS vdev should have at most 9 disks. What is the
>> disadvantage of having 16 disks in one vdev, compared to two vdevs
>> with 8 disks each, all set to raidz2?
>
> Two important things will suffer:
>
> Performance: will suck compared to a stripe of smaller vdevs.
>
> Reliability: you expose yourself to a much greater risk of multiple
> drive failures. Resilver times will also be slower, compounding the risk.

How would the risk of drive failure be an issue? Say one raidz2 of 16
versus two of 8 each?

I would really hate to think that if one drive failed, its mirror is
spread across all the others; if so, it is inferior to traditional RAID
0+1. For any mirror to totally fail, its opposing mirror must also
fail. Spreading the risk across 15 other disks seems foolish, since any
1 of the 15 could fail and it is dead, whereas the probability of the
exact opposite is 1/15.

I don't see the risk. And larger stripes should also help.

But I admit, I am no ZFS expert. If I were doing this with SVM, I would
RAID 0+1 the 16 disks on opposing disks, assuming I needed all the space
as one volume.
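
For comparison, here is roughly what the layouts under discussion look
like in ZFS terms. This is an untested sketch; the c1t*/c2t* disk names
are placeholders, assuming one controller per HBA:

# one 16-disk raidz2 vdev
zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

# two 8-disk raidz2 vdevs, striped together in one pool
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

# closest ZFS equivalent of mirrored pairs: a stripe of 8 two-way
# mirrors (strictly RAID 1+0 rather than 0+1), each mirror pairing a
# disk from each controller
zpool create tank \
    mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0 mirror c1t3d0 c2t3d0 \
    mirror c1t4d0 c2t4d0 mirror c1t5d0 c2t5d0 \
    mirror c1t6d0 c2t6d0 mirror c1t7d0 c2t7d0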

--
--------------
Politicians don't provide anything, the tax payers do.
From: Canuck57 on
On 27/03/2010 1:17 PM, Ian Collins wrote:
> On 03/28/10 07:55 AM, Michael Laajanen wrote:
>> Hi,
>>
>> Ian Collins wrote:
>>> On 03/27/10 02:55 AM, Michael Laajanen wrote:
>>>> Hi,
>>>>
>>>> I have read that a ZFS vdev should have at most 9 disks. What is the
>>>> disadvantage of having 16 disks in one vdev, compared to two vdevs
>>>> with 8 disks each, all set to raidz2?
>>>
>>> Two important things will suffer:
>>>
>>> Performance: will suck compared to a stripe of smaller vdevs.
>
>> What is the optimal number of disks in a vdev?
>
> It depends on what you want to do with the pool. I never go beyond 8
> in a raidz2 configuration.
>
>>> Reliability: you expose yourself to a much greater risk of multiple
>>> drive failures. Resilver times will also be slower, compounding the
>>> risk.
>>>
>> Then the HBAs must also be considered, I guess.
>
> Not really, they fail way less often than drives.

It would still be good to balance the load so both HBAs are used as
equally as possible. The system can write to two HBAs faster than to one.
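
For example (sketch only; assuming c1 sits on one HBA and c2 on the
other), each vdev could take half its disks from each controller, so
writes to either vdev are spread across both HBAs:

zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
    raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0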

--
--------------
Politicians don't provide anything, the tax payers do.
From: Ian Collins on
On 03/29/10 06:01 AM, Canuck57 wrote:
> On 26/03/2010 3:33 PM, Ian Collins wrote:
>> On 03/27/10 02:55 AM, Michael Laajanen wrote:
>>> Hi,
>>>
>>> I have read that a ZFS vdev should have at most 9 disks. What is the
>>> disadvantage of having 16 disks in one vdev, compared to two vdevs
>>> with 8 disks each, all set to raidz2?
>>
>> Two important things will suffer:
>>
>> Performance: will suck compared to a stripe of smaller vdevs.
>>
>> Reliability: you expose yourself to a much greater risk of multiple
>> drive failures. Resilver times will also be slower, compounding the risk.
>
> How would the risk of drive failure be an issue? Say one raidz2 of 16
> versus two of 8 each?

All your eggs are in one basket. If you have 2 vdevs, you stand a
reasonable chance of surviving 3 failures and a smaller, but real,
chance of surviving 4.
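
Rough numbers, assuming failures land on random disks: with two 8-disk
raidz2 vdevs, the pool only dies when three or more failures hit the
same vdev, so

P(pool dies | 3 failures)     = 2 * C(8,3) / C(16,3) = 112/560  = 20%
P(pool survives | 4 failures) = C(8,2)^2  / C(16,4)  = 784/1820 ~ 43%

With a single 16-disk raidz2, the third failure always kills the pool.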

--
Ian Collins
From: Michael Laajanen on
Hi,

Canuck57 wrote:
> On 26/03/2010 3:33 PM, Ian Collins wrote:
>> On 03/27/10 02:55 AM, Michael Laajanen wrote:
>>> Hi,
>>>
>>> I have read that a ZFS vdev should have at most 9 disks. What is the
>>> disadvantage of having 16 disks in one vdev, compared to two vdevs
>>> with 8 disks each, all set to raidz2?
>>
>> Two important things will suffer:
>>
>> Performance: will suck compared to a stripe of smaller vdevs.
>>
>> Reliability: you expose yourself to a much greater risk of multiple
>> drive failures. Resilver times will also be slower, compounding the risk.
>
> How would the risk of drive failure be an issue? Say one raidz2 of 16
> versus two of 8 each?
>
> I would really hate to think that if one drive failed, its mirror is
> spread across all the others; if so, it is inferior to traditional RAID
> 0+1. For any mirror to totally fail, its opposing mirror must also
> fail. Spreading the risk across 15 other disks seems foolish, since any
> 1 of the 15 could fail and it is dead, whereas the probability of the
> exact opposite is 1/15.
>
> I don't see the risk. And larger stripes should also help.
>
> But I admit, I am no ZFS expert. If I were doing this with SVM, I would
> RAID 0+1 the 16 disks on opposing disks, assuming I needed all the space
> as one volume.
>
Below are my write tests using two HBAs, each with two channels of 4
drives, all on a standard E450 with 4 x 400MHz CPUs and 4GB RAM.

This is only write, of course.

The same command was run for each configuration:

bash-3.00# time mkfile 1000G /pool00/foo

non-raidz:

real 368m55.000s
user 6m1.477s
sys 329m37.046s

raidz2, 1 vdev:

real 475m48.784s
user 6m6.856s
sys 338m45.906s

raidz2, 2 vdevs:

real 485m18.255s
user 5m57.202s
sys 346m38.191s


/michael
From: Ian Collins on
On 03/30/10 04:15 AM, Michael Laajanen wrote:
> Below are my write tests using two HBAs, each with two channels of 4
> drives, all on a standard E450 with 4 x 400MHz CPUs and 4GB RAM.
>
> This is only write, of course.

Try a tool like bonnie++ to get a better idea of the all-round performance.
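
Something along these lines (a sketch; pick a file size of at least
twice your RAM so the cache doesn't mask the disks):

bonnie++ -d /pool00 -s 8g -u root

-d is the directory to test in, -s the file size, and -u the user to
run as (bonnie++ insists on one when run as root).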

--
Ian Collins