From: Roger Leigh on
On Mon, Jul 12, 2010 at 05:13:16PM -0500, Stan Hoeppner wrote:
> Arcady Genkin put forth on 7/12/2010 11:52 AM:
> > On Mon, Jul 12, 2010 at 02:05, Stan Hoeppner <stan(a)hardwarefreak.com> wrote:
> >
> >> lvcreate -i 10 -I [stripe_size] -l 102389 vg0
> >>
> >> I believe you're losing 10x performance because you have a 10 "disk" mdadm
> >> stripe but you didn't inform lvcreate about this fact.
> >
> > Hi, Stan:
> >
> > I believe that the -i and -I options are for using *LVM* to do the
> > striping, am I wrong?
>
> If this were the case, lvcreate would require the set of physical or pseudo
> (mdadm) device IDs to stripe across, wouldn't it? There are no options in
> lvcreate to specify physical or pseudo devices. The only input to lvcreate is
> a volume group ID. Therefore, lvcreate is ignorant of the physical devices
> underlying it, is it not?

Have a closer look at lvcreate(8). The last arguments are:

[-Z|--zero y|n] VolumeGroupName [PhysicalVolumePath[:PE[-PE]]...]

So after the VG name you can explicitly specify the PVs (and even
PE ranges within them) that the striping configured with -i/-I will
be spread across.
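
For instance (a sketch only, not tested here; the device names and
sizes are purely illustrative), a 2-way striped LV across two
specific PVs in the VG could be created with:

  lvcreate -i 2 -I 64 -L 100G -n lv_striped vg0 /dev/sdb1 /dev/sdc1

where -i gives the number of stripes and -I the stripe size in KiB.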

I'm unsure why one would necessarily /want/ to do that. I run LVM
on top of md RAID1. Here, I have a single PV on top of the RAID
array, and I can't see that adding additional striping on top of
that would benefit performance in any way. I can only assume it
makes sense if you /don't/ have underlying RAID and want to tell
LVM to stripe over multiple PVs on different physical discs,
which /would/ have some performance impact since you spread the
I/O over multiple discs.

AFAICT the striping options are entirely pointless when layered on
RAID, and they could even be responsible for the performance issues
if they have a negative impact (such as thrashing the discs by
telling LVM to write multiple stripes to a single disc).


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' Printing on GNU/Linux? http://gutenprint.sourceforge.net/
`- GPG Public Key: 0x25BFB848 Please GPG sign your mail.
From: Mike Bird on
On Mon July 12 2010 15:16:47 Aaron Toponce wrote:
> Incorrect. The Linux RAID implementation can do level 10 across 3 disks.
> In fact, it can even do it across 2 disks.
>
> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10

Thanks, I learned something new today.

Now I guess the question is: does LVM understand the performance
implications of 10 RAID-1E PVs, or would the OP be better off
assigning his 30 devices as 15 RAID-1 PVs?
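
Purely as a sketch of that second alternative (untested, and all
device names and stripe sizes below are made up for illustration),
it would look something like:

  # fifteen 2-way mirrors instead of ten 3-disk md "RAID 10" sets
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  # ...repeat for /dev/md1 .. /dev/md14 with the remaining pairs...
  pvcreate /dev/md{0..14}
  vgcreate vg0 /dev/md{0..14}
  lvcreate -i 15 -I 64 -l 100%FREE -n data vg0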

--Mike Bird


From: Aaron Toponce on
On 7/12/2010 5:52 PM, Stan Hoeppner wrote:
> Aaron Toponce put forth on 7/12/2010 5:16 PM:
>> On 7/12/2010 4:13 PM, Stan Hoeppner wrote:
>>> Is that a typo, or are you turning those 3 disk mdadm sets into RAID10 as
>>> shown above, instead of the 3-way mirror sets you stated previously? RAID 10
>>> requires a minimum of 4 disks, you have 3. Something isn't right here...
>>
>> Incorrect. The Linux RAID implementation can do level 10 across 3 disks.
>> In fact, it can even do it across 2 disks.
>
> Only throw the bold "incorrect" or "correct" statements around when you really
> know the subject material. You don't. Linux md RAID 10 is not standard RAID
> 10 when used on 2 and 3 drives. When used on 3 drives it's actually RAID 1E,
> and on two drives it's the same as RAID1. Another Wikipedia article linked
> within the one you quoted demonstrates this. Note the page title
> "Non-standard_RAID_levels".

The argument is not whether Linux software RAID 10 is standard or not,
but the minimum number of disks that Linux software RAID 10
supports. In this case, it supports 2 or more disks, regardless of
what its "effectiveness" is.

Try again.

--
. O . O . O . . O O . . . O .
. . O . O O O . O . O O . . O
O O O . O . . O O O O . O O O

From: Stan Hoeppner on
Aaron Toponce put forth on 7/12/2010 5:16 PM:
> On 7/12/2010 4:13 PM, Stan Hoeppner wrote:
>> Is that a typo, or are you turning those 3 disk mdadm sets into RAID10 as
>> shown above, instead of the 3-way mirror sets you stated previously? RAID 10
>> requires a minimum of 4 disks, you have 3. Something isn't right here...
>
> Incorrect. The Linux RAID implementation can do level 10 across 3 disks.
> In fact, it can even do it across 2 disks.

Only throw the bold "incorrect" or "correct" statements around when you really
know the subject material. You don't. Linux md RAID 10 is not standard RAID
10 when used on 2 and 3 drives. When used on 3 drives it's actually RAID 1E,
and on two drives it's the same as RAID1. Another Wikipedia article linked
within the one you quoted demonstrates this. Note the page title
"Non-standard_RAID_levels".

http://en.wikipedia.org/wiki/Non-standard_RAID_levels
Linux MD RAID 10

The Linux kernel software RAID driver (called md, for "multiple device") can
be used to build a classic RAID 1+0 array, but also (since version 2.6.9) as a
single level[4] with some interesting extensions[5].

The standard "near" layout, where each chunk is repeated n times in a k-way
stripe array, is equivalent to the standard RAID-10 arrangement, but it does
not require that n divide k. For example an n2 layout on 2, 3 and 4 drives
would look like:

    2 drives          3 drives             4 drives
    --------          ----------           --------------
    A1  A1            A1  A1  A2           A1  A1  A2  A2
    A2  A2            A2  A3  A3           A3  A3  A4  A4
    A3  A3            A4  A4  A5           A5  A5  A6  A6
    A4  A4            A5  A6  A6           A7  A7  A8  A8
    ..  ..            ..  ..  ..           ..  ..  ..  ..

*The 4-drive example is identical to a standard RAID-1+0 array, while the
3-drive example is a software implementation of RAID-1E. The 2-drive example
is equivalent to RAID 1.*
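
If anyone wants to verify this themselves, something along these
lines should do it (untested as written; use scratch or loop devices,
the names below are only placeholders):

  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 \
        /dev/loop0 /dev/loop1 /dev/loop2
  mdadm --detail /dev/md0    # reports "Layout : near=2" over 3 devices
  cat /proc/mdstat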

--
Stan





From: Stan Hoeppner on
Mike Bird put forth on 7/12/2010 4:00 PM:
> On Mon July 12 2010 12:45:57 Arcady Genkin wrote:
>> Creating the ten 3-way RAID1 triplets - for N in 0 through 9:
>> mdadm --create /dev/mdN -v --raid-devices=3 --level=raid10 \
>> --layout=n3 --metadata=0 --bitmap=internal --bitmap-chunk=2048 \
>> --chunk=1024 /dev/sdX /dev/sdY /dev/sdZ
>
> RAID 10 with three devices?

I had the same reaction, Mike. Turns out mdadm actually performs RAID 1E with
3 disks when you specify RAID 10. I'm not sure what, if any, benefit RAID 1E
yields here--almost nobody uses it.

RAID 0 over (10 * RAID 1E) over 6 iSCSI targets isn't something I've ever seen
anyone do. Not saying it's bad, just...unique.

I just hope the OP gets prompt and concise drive failure information the
instant one goes down, and has a tested array rebuild procedure in place.
Rebuilding a failed drive in this kind of setup may get a bit hairy.
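
On Debian that mostly comes down to making sure mdadm's monitor
daemon is running and mails an address somebody actually reads (a
sketch, assuming the stock Debian packaging paths; the address is a
placeholder):

  # /etc/mdadm/mdadm.conf
  MAILADDR admin@example.org

  # confirm the monitor is running (the init script normally starts it):
  mdadm --monitor --scan --daemonise

  # send a test alert for every array to prove mail delivery works:
  mdadm --monitor --scan --oneshot --test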

--
Stan


