From: Rahul on
Aragorn <aragorn(a)chatfactory.invalid> wrote in news:i3fbr4$dvf$2
@news.eternal-september.org:

> 1 GHz is impossible. :-) It's 1 MHz - based upon what you've pasted -
> but that applies for each CPU in your system.

Stupid me. Of course, you are right. :)

gcc timer.c
./a.out
>>>kernel timer interrupt frequency is approx. 1001 Hz<<<
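(The timer.c source wasn't posted; for the curious, a minimal sketch of
one way to measure this - counting local-timer interrupts from
/proc/interrupts over one second - might look like the program below.
The "LOC:" line name is an assumption; on some systems the count sits
on the IRQ 0 "timer" line instead.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Return the first per-CPU count from the "LOC:" (local timer) line
 * of /proc/interrupts, or -1 on failure. */
static long read_loc_count(void)
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[1024];
    char *p;
    long count = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        p = line;
        while (*p == ' ' || *p == '\t')
            p++;
        if (strncmp(p, "LOC:", 4) == 0) {
            count = strtol(p + 4, NULL, 10);
            break;
        }
    }
    fclose(f);
    return count;
}

int main(void)
{
    long before, after;

    before = read_loc_count();
    sleep(1);                 /* one second between samples */
    after = read_loc_count();

    if (before < 0 || after < 0) {
        fprintf(stderr, "could not find a LOC: line in /proc/interrupts\n");
        return 1;
    }
    /* interrupts per second on one CPU ~= the kernel HZ value */
    printf(">>>kernel timer interrupt frequency is approx. %ld Hz<<<\n",
           after - before);
    return 0;
}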

--
Rahul
From: Pascal Hambourg on
Hello,

Rahul wrote:
> Aragorn <aragorn(a)chatfactory.invalid> wrote in news:i3fbr4$dvf$2
> @news.eternal-september.org:
>
>> 1 GHz is impossible. :-) It's 1 MHz - based upon what you've pasted -
>> but that applies for each CPU in your system.
>
> Stupid me. Of course, you are right. :)
>
> gcc timer.c
> ./a.out
>>>> kernel timer interrupt frequency is approx. 1001 Hz<<<

That's 1 kHz. Even 1 MHz (1000000 Hz) would be very high.
From: Pascal Hambourg on
Rahul wrote:
>
> I am running a 2.6 kernel (2.6.18-164.el51). This is supposed to have
> preemption. But is there a way to find out if this feature was compiled
> into my current kernel?

grep PREEMPT /path/to/config
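
On most systems the build config is installed as
/boot/config-$(uname -r); if the kernel was built with
CONFIG_IKCONFIG_PROC it is also exposed as /proc/config.gz. For
example (paths assumed, adjust to your system):

grep PREEMPT /boot/config-$(uname -r)
zgrep PREEMPT /proc/config.gz

A fully preemptible kernel shows CONFIG_PREEMPT=y;
CONFIG_PREEMPT_VOLUNTARY=y means voluntary preemption only.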
From: David Brown on
On 05/08/2010 23:22, Rahul wrote:
> Aragorn<aragorn(a)chatfactory.invalid> wrote in
> news:i3f31g$jum$1(a)news.eternal-september.org:
>
>> On Thursday 05 August 2010 18:49 in comp.os.linux.hardware, somebody
>> identifying as Rahul wrote...

>>> The reason I'm switching over from RAID5 to RAID6 is that this time I
>>> am using eight 1-Terabyte drives in the RAID. And at this capacity I
>>> am scared by the horror stories about "unrecoverable read errors"
>>
>> Why not use RAID 10? It's reliable and fast, and depending on what
>> disk fails, you may be able to sustain a second disk failure in the
>> same array before the first faulty disk was replaced. And with an
>> array of eight disks, you definitely want to be having a couple of
>> spares as well.
>
> RAID10 could work as well. But is that going to be faster on mdadm? I
> have heard reports that RAID10, RAID5 and RAID6 are where the HWRAID
> really wins over SWRAID. So not sure how to decide between RAID10 and
> RAID6.
>
> This is the pipeline I aim to use:
>
> [ primary server ] -> rsync -> ethernet -> [ storage server ] ->
> rsnapshot -> LVM -> mdadm -> SATA disk
>

Raid10 is easy, and thus fast - it's just stripes of mirrors and there
are no calculations needed. It's fast with either hardware or software
raid.

It /may/ be faster with mdadm raid10 than hardware raid10 - this depends
on the particular hardware in question. mdadm will often be faster
since you are cutting out a layer of indirection, and there are no
calculations to offload. However, if you are doing a lot of writes, the
host does twice the IO with software raid10 (it must explicitly write
everything to both mirrors), while with hardware raid10 the host writes
once and the raid card doubles it up. But if you have plenty of RAM the
writes will be cached and you will see little difference.

With software mdadm raid10 you can do things like different layouts
(such as -p f2 for "far" layout) that can speed up many types of access.
You can also do weird things if you want, such as making a three-way
mirror covering all your 8 disks.
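
For instance (device names assumed, not from the original post), a
far-layout array or, alternatively, a three-way mirror over eight disks
could be created along these lines:

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=8 /dev/sd[b-i]1
mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=8 /dev/sd[b-i]1

(The n3 layout keeps three copies of every block, so eight 1 TB disks
yield roughly 2.7 TB.)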


There are a number of differences between raid6 and raid10.

First, with 8 disks in use, raid6 gives you 6 TB while raid10 gives you
4 TB.

Theoretically, raid6 can survive two dead disks. But with even a single
dead disk, you've got a very long re-build time which involves reading
all the data from all the other disks - performance is trashed and you
risk provoking another disk failure. With raid10, you can survive
between 1 and 4 dead disks, as long as only one disk per pair goes bad.
Rebuilds are simple copies, which are fast and only run through a
single disk. You can always triple-mirror if you want extra redundancy.

Performance differences will vary according to the load. Small writes
are terrible with raid6, but no problem with raid10. Large writes will
be better with raid6 compared to standard layout raid10 ("near" layout
in mdadm terms, or hardware raid10), but there will be less difference
if you use "far" layout mdadm raid10. Large reads are similar, except
that mdadm "far" raid10 should be faster than raid6.
From: Keith Keller on
["Followup-To:" header set to comp.os.linux.misc.]
On 2010-08-05, David Brown <david(a)westcontrol.removethisbit.com> wrote:
>
> Theoretically, raid6 can survive two dead disks. But with even a single
> dead disk, you've got a very long re-build time which involves reading
> all the data from all the other disks - performance is trashed

I've had actual disk failures on a hardware RAID6, and performance
wasn't as badly trashed as I thought it'd be. It wasn't fabulous, but
it was certainly usable.

> and you risk provoking another disk failure.

All too true. I've been lucky so far on this front.

> With raid10, you can survive
> between 1 and 4 dead disks, as long as only one disk per pair goes bad.
> Rebuilds are simple copies, which are fast and only run through a
> single disk. You can always triple-mirror if you want extra redundancy.

...but with a triple-mirror you've now taken 9 1TB disks to 3TB usable,
which doesn't sound very appetizing to me. The suggestion to use a
RAID51 seems more reasonable: for 8 1TB disks, you get 3TB usable and
can tolerate any three disks failing, and up to five failed disks in the
right situation (e.g., four on one RAID5, one on the other). With
RAID10 in simple two-disk mirrors, as you point out, if you lose two
disks from the same RAID1 the array is lost.
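
For what it's worth, a layered RAID51 of that shape can be built by
nesting mdadm arrays - two 4-disk RAID5s with a RAID1 on top (device
names here are assumed):

mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[b-e]1
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[f-i]1
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/md1 /dev/md2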

Unfortunately the only real way to know what performance will be like is
to do realistic benchmarking with the hardware you plan to use.
Obviously this isn't easy; who has a spare 8TB fileserver around for
testing?
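
At minimum, a crude sequential check with dd (mount point assumed)
gives a first-order number:

dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192 oflag=direct
dd if=/mnt/array/testfile of=/dev/null bs=1M iflag=direct

though that says nothing about the small-random-write case where raid6
hurts most.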

--keith

--
kkeller-usenet(a)wombat.san-francisco.ca.us
(try just my userid to email me)
AOLSFAQ=http://www.therockgarden.ca/aolsfaq.txt
see X- headers for PGP signature information