From: thanat0s on 4 May 2010 14:53

Hi all,

I've got the following problem. I have 3 SATA drives used as a RAID 5 array
on a SUSE Linux 2.6.31.12-0.2-xen 64-bit kernel.

The raw RAID 5 device seems to perform well:

Fuel:~ # dd if=/dev/md5 of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 41.3477 s, 254 MB/s

I set up an ext4 partition on it through an LVM volume. On it, performance
drops dramatically:

Fuel:~ # dd if=/dev/vg/data of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 320.733 s, 32.7 MB/s

I already tried the "sudo blockdev --setra 8192 /dev/vg/data" tip. Sequential
reads improve (150 MB/s), but still not as fast as the raw RAID 5, and it does
nothing for write speed (near 30 MB/s).

I really doubt that partition data alignment alone could do that.

Does anybody have a good idea? Google didn't help me with this case.

Thanks

------------------------------------------------------------------------------------
Here is some additional information:

vmstat during dd on LVM:
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free    buff   cache   si   so     bi    bo    in    cs us sy id wa st
 0  1      0  17688 1536992  443172    0    0  32085     0  2618  2854  0  0 75 25  0
 2  0      0  18896 1535712  443228    0    0  32085     0  2872  2878  0  0 75 25  0
 0  1      0  19960 1534704  443188    0    0  32000     9  2884  2869  0  0 75 25  0
 0  1      0  19056 1535728  443116    0    0  32085     0  2887  2872  0  0 75 25  0
 1  0      0  18000 1536752  443144    0    0  32043     0  2881  2876  0  0 75 25  0
 0  1      0  19064 1535728  443124    0    0  32000     0  2914  2872  0  0 75 25  0

vmstat during dd on the raw RAID:
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free    buff   cache   si   so     bi    bo    in    cs us sy id wa st
 2  0      0  18736 1536276  443168    0    0 242005     0  5625  3953  0  7 81 12  0
 1  0      0  19968 1534740  443224    0    0 222720     0  5845  4749  0  6 84  9  1
 1  0      0  18296 1536576  442996    0    0 234667     0  5628  4229  0  6 85  9  0
 0  1      0  19064 1535816  443148    0    0 250027     4  6491  5182  0  7 85  7  1
 1  0      0  18912 1536072  443048    0    0 240469     0  5594  4126  0  6 86  7  0
 1  0      0  19072 1535952  443352    0    0 239360     0  6163  4892  0  7 84  9  1

Disk information:
=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Deskstar 7K160
Device Model:     Hitachi HDS721616PLA380
Serial Number:    PVF904ZHUV9ATN
Firmware Version: P22OABEA
User Capacity:    160,041,885,696 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   7
ATA Standard is:  ATA/ATAPI-7 T13 1532D revision 1
Local Time is:    Tue May  4 20:44:08 2010 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Fuel:~ # vgdisplay
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               4.00 MB
  Total PE              476933
  Alloc PE / Size       476928 / 1.82 TB
  Free  PE / Size       5 / 20.00 MB
  VG UUID               kxW00F-7c4A-ptLI-IADl-ovT5-gb4k-jNByvn

00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [IDE mode] (prog-if 01 [AHCI 1.0])
        Subsystem: ASUSTeK Computer Inc. Device 8389
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 22
        I/O ports at c000 [size=8]
        I/O ports at b000 [size=4]
        I/O ports at a000 [size=8]
        I/O ports at 9000 [size=4]
        I/O ports at 8000 [size=16]
        Memory at fe8ffc00 (32-bit, non-prefetchable) [size=1K]
        Capabilities: [60] Power Management version 2
        Capabilities: [70] SATA HBA <?>
        Kernel driver in use: ahci

Fuel:~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdb1[0] sdc1[2] sdd1[1]
      1953519872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

Fuel:~ # fdisk -l /dev/sdc

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbcc29a4a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121601   976760001   fd  Linux raid autodetect

Fuel:~ # fdisk -l /dev/md5

Disk /dev/md5: 2000.4 GB, 2000404348928 bytes
2 heads, 4 sectors/track, 488379968 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x76ddbf37

     Device Boot      Start         End      Blocks   Id  System
/dev/md5p1              1   488379968  1953519870   8e  Linux LVM

Fuel:~ # lvm version
  LVM version:     2.02.45 (2009-03-03)
  Library version: 1.02.31 (2009-03-03)
  Driver version:  4.15.0
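The alignment suspicion at the end of this post can be checked directly before rebuilding anything. A minimal sketch, assuming the LVM2 tools are installed and using this thread's /dev/md5 PV and the 64 KiB chunk size shown in /proc/mdstat: the PV's data start offset (`pe_start`) should fall on a RAID-chunk boundary.

```shell
# Sketch: does the LVM data area start on a RAID-chunk boundary?
# CHUNK_KB comes from /proc/mdstat above (64k chunk); the device path
# reflects this thread's setup.
CHUNK_KB=64
PE_START_KB=$(pvs --noheadings --units k -o pe_start /dev/md5 | tr -dc '0-9.' | cut -d. -f1)
if [ $(( PE_START_KB % CHUNK_KB )) -eq 0 ]; then
    echo "pe_start=${PE_START_KB}K is chunk-aligned"
else
    echo "pe_start=${PE_START_KB}K is NOT aligned to ${CHUNK_KB}K"
fi
```

If the offset is not a multiple of the chunk size, every full-stripe write on the LV straddles two stripes, which forces read-modify-write cycles on the RAID 5 parity.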
From: habibielwa7id on 5 May 2010 03:43

On May 4, 9:53 pm, thanat0s <thans...(a)trollprod.org> wrote:
> [original post quoted in full]

I am not sure, but I think you should examine the hard disks with smartctl;
you may find one whose performance is poor or that has problems. You can also
measure the speed of every hard disk, since a single slow disk can slow down
the whole array. Something like:

/sbin/hdparm -tT /dev/sda

Run that once for every hard disk you have. smartmontools can also test every
hard disk:

smartctl -t short /dev/sda

After it finishes, issue:

smartctl -l selftest /dev/sda
smartctl -A /dev/sda

All of this will give you more detail about each hard disk, and you may find
a problem somewhere.
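The per-disk checks suggested above can be wrapped in one small loop. A sketch, assuming the sdb/sdc/sdd member names taken from /proc/mdstat earlier in the thread (run as root; the loop only queues the SMART self-tests, which finish in the background):

```shell
# Sketch: sequential-read timing and a short SMART self-test per RAID member.
for d in sdb sdc sdd; do
    echo "== /dev/$d =="
    hdparm -tT "/dev/$d"           # cached + buffered read timings
    smartctl -t short "/dev/$d"    # queue a short self-test (~2 min)
done
# Once the self-tests finish, inspect each disk, e.g.:
#   smartctl -l selftest /dev/sdb
#   smartctl -A /dev/sdb
```

If one member reads markedly slower than the others, the whole RAID 5 stripe is held to its pace.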
From: thanat0s on 5 May 2010 18:51

Thanks for your help, but it really was a misalignment problem. Here is what
I have done now:

Fuel:/tmp/LVM2.2.02.62 # pvcreate -M2 --dataalignment 64K /dev/md5
  Physical volume "/dev/md5" successfully created
Fuel:/tmp/LVM2.2.02.62 # vgcreate vg /dev/md5
  Volume group "vg" successfully created
Fuel:/tmp/LVM2.2.02.62 # vgdisplay
  --- Volume group ---
  VG Name               vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               4.00 MB
  Total PE              476933
  Alloc PE / Size       0 / 0
  Free  PE / Size       476933 / 1.82 TB
  VG UUID               ZwUMYP-vV7U-4qfQ-v7xb-m57U-pAWj-hHD7Dt
Fuel:/tmp/LVM2.2.02.62 # lvcreate -l 476933 -n md5 vg
  Logical volume "md5" created

Then it works normally.

Write, 840 Mbit/s:
Fuel:/mnt # dd if=/dev/zero of=./file.dump bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 99.7505 s, 105 MB/s

Read, 2 Gbit/s:
Fuel:/mnt # dd if=./file.dump of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 41.0315 s, 256 MB/s

On 05/05/2010 09:43 AM, habibielwa7id wrote:
> [previous posts quoted in full]
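One related point the fix above does not show: ext4 itself can also be told about the RAID geometry at mkfs time via stride and stripe-width. A sketch, assuming the filesystem was recreated after the aligned pvcreate; the numbers follow from the 64 KiB chunk and the 3-disk RAID 5 in /proc/mdstat (the LV path matches the lvcreate above):

```shell
# ext4 stride/stripe-width derived from this thread's RAID geometry.
# stride       = chunk size / filesystem block size
# stripe-width = stride * number of data disks (3-disk RAID5 -> 2 data disks)
CHUNK_KB=64
BLOCK_KB=4
NDISKS=3
STRIDE=$(( CHUNK_KB / BLOCK_KB ))             # 16 blocks per chunk
STRIPE_WIDTH=$(( STRIDE * (NDISKS - 1) ))     # 32 blocks per full stripe
echo "mkfs.ext4 -b 4096 -E stride=${STRIDE},stripe-width=${STRIPE_WIDTH} /dev/vg/md5"
```

With these hints the allocator tries to place data in full stripes, avoiding RAID 5 read-modify-write on large sequential writes.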