From: Aragorn on 5 Aug 2010 19:47

On Friday 06 August 2010 00:35 in comp.os.linux.hardware, somebody
identifying as Pascal Hambourg wrote...

> Hello,
>
> Rahul wrote:
>> Aragorn <aragorn(a)chatfactory.invalid> wrote in news:i3fbr4$dvf$2
>> @news.eternal-september.org:
>>
>>> 1 GHz is impossible. :-) It's 1 MHz - based upon what you've pasted
>>> - but that applies for each CPU in your system.
>>
>> Stupid me. Of course, you are right. :)
>>
>> gcc timer.c
>> ./a.out
>> >>> kernel timer interrupt frequency is approx. 1001 Hz <<<
>
> That's 1 kHz. Even 1 MHz (1000000 Hz) would be very high.

Then stupid me as well, because I apparently mixed up "kilo" and
"mega". :p

--
*Aragorn*
(registered GNU/Linux user #223157)
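The timer.c program itself isn't reproduced in the thread, but a
comparable estimate can be made from the shell by watching how fast the
tick counter advances. A minimal sketch, assuming a non-tickless kernel
of this era where IRQ 0 is the periodic timer interrupt (on a modern
NOHZ kernel the result will come out far below CONFIG_HZ):

# Estimate the tick rate from how fast the IRQ 0 ("timer") counter in
# /proc/interrupts advances over ten seconds. The second field is the
# CPU0 count; on SMP machines IRQ 0 is normally serviced by CPU0.
t1=$(awk '$1 == "0:" { print $2 }' /proc/interrupts)
sleep 10
t2=$(awk '$1 == "0:" { print $2 }' /proc/interrupts)
echo "kernel timer interrupt frequency is approx. $(( (t2 - t1) / 10 )) Hz"

On a CONFIG_HZ=1000 kernel this should print roughly 1000 Hz, i.e. the
same 1 kHz figure Pascal points out above.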
From: Robert Heller on 5 Aug 2010 19:55

At Thu, 5 Aug 2010 21:22:19 +0000 (UTC) Rahul <nospam(a)nospam.invalid>
wrote:

> Aragorn <aragorn(a)chatfactory.invalid> wrote in
> news:i3f31g$jum$1(a)news.eternal-september.org:
>
> > On Thursday 05 August 2010 18:49 in comp.os.linux.hardware, somebody
> > identifying as Rahul wrote...
> >
> > The Linux kernel supports *many* hardware RAID controllers out of
> > the box, and most of those would fall under the "tried and tested"
> > category. ;-)
>
> True. But I find the user base of any HW controller is always smaller
> than that of something like mdadm.

This should not really be surprising, and it is really a meaningless
comparison. It is like saying the user base of 'generic car' is larger
than the user base of 'Ford F100 Pickup Truck'. That does not mean the
'Ford F100 Pickup Truck' is any less supported, or works poorly
relative to 'generic car'.

In the case of mdadm, just about *anyone* can use it -- all they need
are two disk drives (actually not even that: mdadm will be perfectly
happy to mirror two partitions on ONE disk, although that is probably
not a useful thing to do). HW RAID controllers (good ones, anyway) cost
money, and for many users software RAID is good enough: the performance
gain of a HW RAID controller is not worth the added cost. Mostly people
with higher-end servers bother with HW RAID controllers, since they are
the sort of people who need the performance.

> > Another consideration with regard to a software implementation of
> > RAID 6 is that the speed will also be impacted by whether your
> > kernel is preemptible or not and what interrupt timer frequency it
> > was built with.
>
> I am running a 2.6 kernel (2.6.18-164.el51). This is supposed to have
> preemption. But is there a way to find out if this feature was
> compiled into my current kernel? Same for the interrupt timer freq.
> How do I find it out?
>
> Based on some googling I did:
>
> cat /boot/config-`uname -r` | grep HZ
> # CONFIG_HZ_100 is not set
> # CONFIG_HZ_250 is not set
> CONFIG_HZ_1000=y
> CONFIG_HZ=1000
> CONFIG_MACHZ_WDT=m
> [rpnabar(a)eu001 july]$ cat /boot/config-`uname -r` | grep HIGH_RES_TIMERS
> [no output]
>
> But I am not sure how to interpret this.
>
> Based on some C test code that I found online, I estimate that my
> interrupt freq. is about 1GHz.
> (http://www.advenage.com/topics/linux-timer-interrupt-frequency.php)
>
> >> The reason I'm switching over from RAID5 to RAID6 is that this
> >> time I am using eight 1-Terabyte drives in the RAID. And at this
> >> capacity I am scared by the horror stories about "unrecoverable
> >> read errors".
> >
> > Why not use RAID 10? It's reliable and fast, and depending on what
> > disk fails, you may be able to sustain a second disk failure in the
> > same array before the first faulty disk is replaced. And with an
> > array of eight disks, you definitely want to have a couple of
> > spares as well.
>
> RAID10 could work as well. But is that going to be faster on mdadm? I
> have heard reports that RAID10, RAID5 and RAID6 are where HW RAID
> really wins over SW RAID. So I am not sure how to decide between
> RAID10 and RAID6.
>
> This is the pipeline I aim to use:
>
> [ primary server ] -> rsync -> ethernet -> [ storage server ] ->
> rsnapshot -> LVM -> mdadm -> SATA disk

--
Robert Heller             -- 978-544-6933
Deepwoods Software        -- Download the Model Railroad System
http://www.deepsoft.com/  -- Binaries for Linux and MS-Windows
heller(a)deepsoft.com      -- http://www.deepsoft.com/ModelRailroadSystem/
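Mapped onto commands, the storage-server side of that pipeline stacks
up roughly as follows. This is a hypothetical sketch: every device,
volume, host and path name here is invented for illustration, none come
from the thread.

# mdadm at the bottom: eight SATA disks in a RAID 6 array.
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# LVM on top of the md device.
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -L 4T -n snapshots backupvg
mkfs.ext3 /dev/backupvg/snapshots
mount /dev/backupvg/snapshots /backup

# rsnapshot on top of that, pulling from the primary server with rsync
# over ssh; the relevant /etc/rsnapshot.conf lines (fields are
# TAB-separated) would be something like:
#
#   snapshot_root   /backup/
#   backup          root@primary.example.com:/data/    primary/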
From: Trevor Hemsley on 5 Aug 2010 20:15

On Thu, 5 Aug 2010 21:22:19 UTC in comp.os.linux.hardware, Rahul
<nospam(a)nospam.invalid> wrote:

> I am running a 2.6 kernel (2.6.18-164.el51).

That sounds like a standard RHEL5/CentOS5 kernel (and quite an old one,
too, since the latest is 2.6.18-194.8.1.el5). If so, then it won't be a
preemptible kernel. And there's at least one bug in it that affects
software RAID - and if memory serves correctly, it was to do with
RAID 6.

--
Trevor Hemsley, Brighton, UK
Trevor dot Hemsley at ntlworld dot com
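Both points - the 1 kHz tick that Rahul's grep already showed, and the
lack of preemption - can be confirmed from the config file shipped in
/boot. A minimal sketch, assuming a stock RHEL5/CentOS5 install (note
that CONFIG_MACHZ_WDT in Rahul's output is just a watchdog driver that
happens to match his broad grep pattern):

# Tick rate: CONFIG_HZ=1000 means a 1000 Hz (1 kHz) timer interrupt,
# not 1 MHz or 1 GHz.
grep '^CONFIG_HZ=' /boot/config-$(uname -r)

# Preemption model: stock RHEL5 kernels are built with
# CONFIG_PREEMPT_NONE=y (server class, not preemptible); a preemptible
# build would show CONFIG_PREEMPT=y or CONFIG_PREEMPT_VOLUNTARY=y.
grep '^CONFIG_PREEMPT' /boot/config-$(uname -r)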
From: Trevor Hemsley on 5 Aug 2010 20:23

On Thu, 5 Aug 2010 21:22:19 UTC in comp.os.linux.hardware, Rahul
<nospam(a)nospam.invalid> wrote:

> I am running a 2.6 kernel (2.6.18-164.el51).

I'm thinking about the problem described here:
http://www.issociate.de/board/post/455533/RAID6_mdadm_--grow_bug?.html

--
Trevor Hemsley, Brighton, UK
Trevor dot Hemsley at ntlworld dot com
From: David Brown on 6 Aug 2010 03:45

On 06/08/2010 01:09, Keith Keller wrote:
> ["Followup-To:" header set to comp.os.linux.misc.]
> On 2010-08-05, David Brown <david(a)westcontrol.removethisbit.com> wrote:
>>
>> Theoretically, raid6 can survive two dead disks. But with even a
>> single dead disk, you've got a very long rebuild time which involves
>> reading all the data from all the other disks - performance is
>> trashed
>
> I've had actual disk failures on a hardware RAID6, and performance
> wasn't as badly trashed as I thought it'd be. It wasn't fabulous, but
> it was certainly usable.
>
>> and you risk provoking another disk failure.
>
> All too true. I've been lucky so far on this front.
>
>> With raid10, you can survive between 1 and 4 dead disks, as long as
>> only one disk per pair goes bad. Rebuilds are simple copies, which
>> are fast and only run through a single disk. You can always
>> triple-mirror if you want extra redundancy.
>
> ...but with a triple-mirror you've now taken 9 1TB disks to 3TB
> usable,

Or 8 x 1TB disks to get 2.66 TB usable using mdadm raid10 - I wonder
how the stripes and mirrors are arranged in that case?

I don't know if anyone does much triple mirroring in practice - it's
not very space efficient, though it is fast and gives a lot of failure
tolerance.

One other advantage of mdadm that I don't think you can get with
hardware RAID is that you can do it at the partition level, not just
the disk level. So if you've got a small amount of very critical data,
you could partition your disks: a small partition on each disk could be
used for a 4-way raid10 mirror for the vital data, and the rest could
be set up as a more space-efficient raid10 or raid51.

I actually did something similar with a three-disk server - I had a
small three-way raid1 set for /boot, and the rest of each disk was used
for a large raid10 partition holding an LVM physical volume. But the
choice of a triple-mirror raid1 was not about extra tolerance - it's
just that grub can't boot from an LVM partition on an mdadm raid10
device.

> which doesn't sound very appetizing to me. The suggestion to use a
> RAID51 seems more reasonable: for 8 1TB disks, you get 3TB usable and
> can tolerate any three disks failing, and up to five failed disks in
> the right situation (e.g., four on one RAID5, one on the other). With
> RAID10 in simple two-disk mirrors, as you point out, if you lose two
> disks from the same RAID1 the array is lost.
>
> Unfortunately the only real way to know what performance will be like
> is to do realistic benchmarking with the hardware you plan to use.
> Obviously this isn't easy; who has a spare 8TB fileserver around for
> testing?
>
> --keith
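Incidentally, the 2.66 TB figure corresponds to mdadm raid10 with three
copies (--layout=n3): 8 TB raw divided by three. The partition-level
layout David describes might be set up roughly like this - a
hypothetical sketch, with all device and volume group names invented:

# Small first partitions in a three-way RAID 1 for /boot; 0.90 metadata
# lives at the end of each member, so GRUB legacy can still read the
# filesystem inside.
mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# Large second partitions in a RAID 10 used as the LVM physical volume;
# mdadm raid10 accepts an odd member count because it stripes mirrored
# chunks across the set rather than forming fixed pairs.
mdadm --create /dev/md1 --level=10 --raid-devices=3 \
    /dev/sda2 /dev/sdb2 /dev/sdc2
pvcreate /dev/md1
vgcreate vg0 /dev/md1

# The eight-disk, three-copy variant (2.66 TB usable from 8 x 1TB):
mdadm --create /dev/md2 --level=10 --layout=n3 --raid-devices=8 /dev/sd[b-i]

As for how the stripes and mirrors are arranged: with the default
"near" layout and a member count that is not a multiple of the copy
count, the copies are staggered across consecutive devices from stripe
to stripe rather than bound to fixed disk pairs.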