From: Mark Allums on
On 4/27/2010 9:56 PM, Mark Allums wrote:
> On 4/26/2010 1:37 PM, Mike Bird wrote:
>> On Mon April 26 2010 10:51:38 Mark Allums wrote:
>>> RAID 6 (and 5) perform well when less than approximately 1/3 full.
>>> After that, even reads suffer.
>>
>> Mark,
>>
>> I've been using various kinds of RAID for many many years and
>> was not aware of that. Do you have a link to an explanation?
>>
>> Thanks,
>>
>> --Mike Bird
>>
>>
>
>
> YMMV.



I should explain better. In my experience, three-drive motherboard fake-RAID 5
arrays perform abysmally once they start to fill up.

I do not have a link handy. I could try to find some, if you like.

In my experience, real-world performance is always less than theoretical
or reported performance. That is, I've never had a RAID 5 work as well
as advertised. If you have had better experiences than I have, more power
to you.

MAA



--
To UNSUBSCRIBE, email to debian-user-REQUEST(a)lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster(a)lists.debian.org
Archive: http://lists.debian.org/4BD7AE8B.30608(a)allums.com
From: Stan Hoeppner on
Mike Bird put forth on 4/26/2010 3:04 PM:
> On Mon April 26 2010 12:29:43 Stan Hoeppner wrote:
>> Mark Allums put forth on 4/26/2010 12:51 PM:
>>> Put four drives in a RAID 1, you can suffer a loss of three drives.
>>
>> And you'll suffer pretty abysmal write performance as well.
>
> Write performance of RAID-1 is approximately as good as a simple drive,
> which is good enough for many applications.

That's simply not correct. The number of operations the software RAID
driver performs per application I/O operation is equal to the number of
drives in the RAID 1 set. If the application writes one sector, 512 bytes,
then the RAID driver is going to write 2048 bytes in 4 I/O transfers, one
512 byte write to each disk. The RAID driver code will loop 4 times instead
of once as on a single-drive setup, once per physical I/O, quadrupling the
amount of code executed in addition to quadrupling the physical I/O
transfers to the platters.

On a sufficiently fast system that is not loaded, the user will likely see
no performance degradation, especially given Linux' buffered I/O
architecture. However, on a loaded system, such as a transactional database
server or busy ftp upload server, such a RAID setup will bring the system to
its knees in short order as the CPU overhead for each 'real' disk I/O is now
increased 4x and the physical I/O bandwidth is increased 4x.

>> Also keep in mind that some software RAID implementations allow more than
>> two drives in RAID 1, most often called a "mirror set". However, I don't
>> know of any hardware RAID controllers that allow more than 2 drives in a
>> RAID 1. RAID 10 yields excellent fault tolerance and a substantial boost
>> to read and write performance. Anyone considering a 4 disk mirror set
>> should do RAID 10 instead.
>
> Some of my RAIDs are N-way RAID-1 because of the superior read performance.

If you perform some actual benchmarks you'll notice that the Linux RAID1
read performance boost is negligible, and very highly application dependent,
regardless of the number of drives. The RAID1 driver code isn't optimized
for parallel reads. It's mildly opportunistic at best.

Don't take my word for it; Google for "Linux software RAID performance" or
similar. What you find should be eye-opening.
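
If you want a quick sanity check of your own before diving into those results,
a crude sequential-read timing with dd will do. This is only a sketch: the
scratch file here stands in for a real array device (substitute something like
/dev/md0 on your own box), and a freshly written file may be served from the
page cache, so treat the number as an upper bound, not a disk measurement.

```shell
# Crude sequential-read timing. /tmp/rtest.img is a stand-in for your
# real md device; reads of a just-written file may come from the page
# cache, so the reported rate is an optimistic upper bound.
dd if=/dev/zero of=/tmp/rtest.img bs=1M count=64 conv=fsync 2>/dev/null
dd if=/tmp/rtest.img of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/rtest.img
```

For honest numbers, run a real benchmark tool against the raw device with the
caches dropped first.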

--
Stan


Archive: http://lists.debian.org/4BD7F575.6010102(a)hardwarefreak.com
From: Stan Hoeppner on
Mark Allums put forth on 4/27/2010 10:31 PM:

> For DIY, always pair those drives. Consider RAID 10, RAID 50, RAID 60,
> etc. Alas, that doubles the number of drives, and intensely decreases
> the MTBF, which is the whole outcome you want to avoid.

This is my preferred 4-drive mdadm setup for a light office server or a home
media/vanity server. Some minor setup details are omitted from the diagram
to keep it simple, such as the fact that /boot is a mirrored 100MB partition
set and that there are two non-mirrored 1GB swap partitions. / and /var are
mirrored partitions in the remaining first 30GB. These sizes are arbitrary
and can be seasoned to taste; I find they work fairly well for a non-GUI
Debian server.

md raid, 4 x 500GB 7.2K rpm SATAII drives:

       mirror                      mirror
      /      \                    /      \
  --------  3  --------     --------  3  --------
 | /boot  | 0 | /boot  |   | swap1  | 0 | swap2  |
 | /      | G | /      |   | /var   | G | /var   |
 |--------|   |--------|   |--------|   |--------|
 | /home  |   | /home  |   | /home  |   | /home  |
 | /samba |   | /samba |   | /samba |   | /samba |
 | other  |   | other  |   | other  |   | other  |
 |        |   |        |   |        |   |        |
 |        |   |        |   |        |   |        |
  --------     --------     --------     --------
      \            \           /           /
       -----------------------------------
                     RAID 10
                    940 GB NET
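
The layout above translates into mdadm roughly as follows. This is a sketch
only: the device names (sda..sdd) and partition numbers are illustrative
assumptions, not a recipe, and presume each drive carries an identical
partition table (p1 small, p2 ~30GB, p3 the remainder).

```shell
# Sketch only -- adjust device and partition names to your own disks.
# Assumed layout on each of sda..sdd:
#   p1 = small partition (/boot on the first pair, swap on the second)
#   p2 = ~30GB            (/ on the first pair, /var on the second)
#   p3 = remainder        (joins the 4-drive RAID 10)

# /boot: 2-way mirror on the first pair
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# / and /var: a mirror within each pair
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2

# The two swap partitions (sdc1, sdd1) stay outside md entirely and are
# listed individually in /etc/fstab.

# Bulk space (/home, /samba, other): RAID 10 across all four drives
mdadm --create /dev/md3 --level=10 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
```

Note that md's raid10 personality builds this in one step; there is no need
to layer a RAID 0 over two RAID 1 arrays by hand.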

For approximately the same $$ outlay one could simply mirror two 1TB 7.2K
rpm drives and have the same usable space and a little less power draw. The
4 drive RAID 10 setup will yield better read and write performance due to
the striping, especially under a multiuser workload, and especially for IMAP
serving of large mailboxen. For a small/medium office server running say
Postfix/Dovecot/Samba/lighty+Roundcube webmail, a small intranet etc, the 4
drive setup would yield significantly better performance than the higher
capacity 2 drive setup. Using Newegg's prices, each solution will run a
little below or above $200.

This 4 drive RAID 10 makes for a nice little inexpensive and speedy setup.
1TB of user space may not seem like much given the capacity of today's
drives, but most small/medium offices won't come close to using that much
space for a number of years, assuming you have sane email attachment policies.

--
Stan


Archive: http://lists.debian.org/4BD80B6E.2010401(a)hardwarefreak.com
From: Mark Allums on
Stan,
We are on the same wavelength, I do the same thing myself. (Except that
I go ahead and mirror swap.) I love RAID 10.

MAA


On 4/28/2010 5:18 AM, Stan Hoeppner wrote:
> Mark Allums put forth on 4/27/2010 10:31 PM:
>
>> For DIY, always pair those drives. Consider RAID 10, RAID 50, RAID 60,
>> etc. Alas, that doubles the number of drives, and intensely decreases
>> the MTBF, which is the whole outcome you want to avoid.
>
> This is my preferred 4-drive mdadm setup for a light office server or a home
> media/vanity server. Some minor setup details are omitted from the diagram
> to keep it simple, such as the fact that /boot is a mirrored 100MB partition
> set and that there are two non-mirrored 1GB swap partitions. / and /var are
> mirrored partitions in the remaining first 30GB. These sizes are arbitrary
> and can be seasoned to taste; I find they work fairly well for a non-GUI
> Debian server.
>
> md raid, 4 x 500GB 7.2K rpm SATAII drives:
>
>        mirror                      mirror
>       /      \                    /      \
>   --------  3  --------     --------  3  --------
>  | /boot  | 0 | /boot  |   | swap1  | 0 | swap2  |
>  | /      | G | /      |   | /var   | G | /var   |
>  |--------|   |--------|   |--------|   |--------|
>  | /home  |   | /home  |   | /home  |   | /home  |
>  | /samba |   | /samba |   | /samba |   | /samba |
>  | other  |   | other  |   | other  |   | other  |
>  |        |   |        |   |        |   |        |
>  |        |   |        |   |        |   |        |
>   --------     --------     --------     --------
>       \            \           /           /
>        -----------------------------------
>                      RAID 10
>                     940 GB NET
>
> For approximately the same $$ outlay one could simply mirror two 1TB 7.2K
> rpm drives and have the same usable space and a little less power draw. The
> 4 drive RAID 10 setup will yield better read and write performance due to
> the striping, especially under a multiuser workload, and especially for IMAP
> serving of large mailboxen. For a small/medium office server running say
> Postfix/Dovecot/Samba/lighty+Roundcube webmail, a small intranet etc, the 4
> drive setup would yield significantly better performance than the higher
> capacity 2 drive setup. Using Newegg's prices, each solution will run a
> little below or above $200.
>
> This 4 drive RAID 10 makes for a nice little inexpensive and speedy setup.
> 1TB of user space may not seem like much given the capacity of today's
> drives, but most small/medium offices won't come close to using that much
> space for a number of years, assuming you have sane email attachment policies.
>


Archive: http://lists.debian.org/4BD81A5E.4040107(a)allums.com
From: Mike Bird on
On Wed April 28 2010 01:44:37 Stan Hoeppner wrote:
> On a sufficiently fast system that is not loaded, the user will likely see
> no performance degradation, especially given Linux' buffered I/O
> architecture. However, on a loaded system, such as a transactional
> database server or busy ftp upload server, such a RAID setup will bring the
> system to its knees in short order as the CPU overhead for each 'real' disk
> I/O is now increased 4x and the physical I/O bandwidth is increased 4x.

I've designed commercial database managers and OLTP systems.

If CPU usage had ever become a factor in anything I designed,
I would have been fired. If such systems aren't I/O bound, they're useless.

With a few exceptions such as physical backups, any I/O-bound
application is going to be seek bound, not bandwidth bound.
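
Back-of-envelope arithmetic makes the point. The figures below are rough
assumptions (roughly 100 random seeks/s for a 7.2K rpm drive, ~100 MiB/s
sequential), not measurements, but the two-orders-of-magnitude gap is what
makes such workloads seek bound long before bandwidth matters:

```shell
# A 7.2K rpm drive manages roughly 100 random seeks per second.
# At 4 KiB per random I/O that is ~400 KiB/s, versus ~100 MiB/s
# streaming sequentially from the same drive.
iops=100
io_kib=4
random_kib_s=$((iops * io_kib))    # ~400 KiB/s random
seq_kib_s=$((100 * 1024))          # ~102400 KiB/s sequential
echo "random ~${random_kib_s} KiB/s vs sequential ~${seq_kib_s} KiB/s"
```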

--Mike Bird


Archive: http://lists.debian.org/201004281148.16763.mgb-debian(a)yosemite.net