From: NeilBrown on 3 Nov 2009 05:50

On Tue, November 3, 2009 9:06 pm, Christoph Hellwig wrote:
> On Mon, Nov 02, 2009 at 01:55:54PM -0800, Andrew Morton wrote:
>> On Tue, 13 Oct 2009 12:09:55 +0200
>> Laurent CORBES <laurent.corbes(a)smartjog.com> wrote:
>>
>> > Hi all,
>> >
>> > While benchmarking some systems I discovered a big sequential read
>> > performance drop using ext3 on ~ big files. The drop seems to be
>> > introduced in 2.6.30. I'm testing with 2.6.28.6 -> 2.6.29.6 ->
>> > 2.6.30.4 -> 2.6.31.3.
>>
>> Seems that large performance regressions aren't of interest to this
>> list :(
>
> Not sure which list you mean, but dm-devel is for dm, not md. We're also
> seeing similarly massive performance drops with md and ext3/xfs as
> already reported on the list. Someone tracked it down to writeback
> changes as usual, but there it got stuck.

I'm still looking - running some basic tests on 4 filesystems over
half a dozen recent kernels to see what has been happening.

I have a suspicion that there are multiple problems.
In particular, XFS has a strange degradation which was papered over
by commit c8a4051c3731b.
I'm beginning to wonder if it was caused by commit 17bc6c30cf6bf,
but I haven't actually tested that yet.

NeilBrown
From: Laurent CORBES on 3 Nov 2009 06:00

Hi all,

> >> > Hi all,
> >> >
> >> > While benchmarking some systems I discovered a big sequential read
> >> > performance drop using ext3 on ~ big files. The drop seems to be
> >> > introduced in 2.6.30. I'm testing with 2.6.28.6 -> 2.6.29.6 ->
> >> > 2.6.30.4 -> 2.6.31.3.
> >>
> >> Seems that large performance regressions aren't of interest to this
> >> list :(

Or +200MB/s is enough for a lot of people :)

> > Not sure which list you mean, but dm-devel is for dm, not md. We're also
> > seeing similarly massive performance drops with md and ext3/xfs as
> > already reported on the list. Someone tracked it down to writeback
> > changes as usual, but there it got stuck.
>
> I'm still looking - running some basic tests on 4 filesystems over
> half a dozen recent kernels to see what has been happening.
>
> I have a suspicion that there are multiple problems.
> In particular, XFS has a strange degradation which was papered over
> by commit c8a4051c3731b.
> I'm beginning to wonder if it was caused by commit 17bc6c30cf6bf,
> but I haven't actually tested that yet.

What is really strange is that in all the tests I did, the raw md
performance never dropped - only a few MB/s of difference between
kernels (~2%). This may be related to the way the upper filesystem
writes data to the md layer. I'll run the same tests on raw disks to
see whether there is trouble there as well. I can also test with
other RAID levels.

Is there any tuning or debugging I can do for you? I can also set up
remote access to this system if needed.

Thanks.

--
Laurent Corbes - laurent.corbes(a)smartjog.com
SmartJog SAS | Phone: +33 1 5868 6225 | Fax: +33 1 5868 6255 | www.smartjog.com
27 Blvd Hippolyte Marqués, 94200 Ivry-sur-Seine, France
A TDF Group company
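[For reference, a minimal sketch of the kind of raw-device sequential read
test being described - not code from this thread. It reads with O_DIRECT so
the page cache and readahead are bypassed, which is what separates a "raw
md" number from a filesystem number. The device path, 1 MiB read size, and
1 GiB total are placeholder assumptions, not values from Laurent's setup.]

/* seqread.c - rough sequential-read throughput test for a block device
 * (or large file), using O_DIRECT to bypass the page cache.
 *
 * Build: gcc -O2 -o seqread seqread.c  (add -lrt on older glibc
 *        for clock_gettime)
 * Run:   ./seqread /dev/md0            (device path is hypothetical)
 */
#define _GNU_SOURCE  /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE (1024 * 1024)          /* 1 MiB per read (assumption) */
#define TOTAL_BYTES (1024LL * BLOCK_SIZE) /* stop after 1 GiB (assumption) */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* O_DIRECT requires an aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK_SIZE)) {
        perror("posix_memalign");
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long long done = 0;
    while (done < TOTAL_BYTES) {
        ssize_t n = read(fd, buf, BLOCK_SIZE);
        if (n <= 0)
            break; /* EOF or error */
        done += n;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%lld bytes in %.2f s = %.1f MB/s\n", done, secs, done / secs / 1e6);

    free(buf);
    close(fd);
    return 0;
}

[Running it against /dev/md0 and then against a large file on the mounted
filesystem isolates the md layer from the filesystem/VM path above it.]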
From: Andrew Morton on 3 Nov 2009 12:00

On Tue, 3 Nov 2009 21:42:30 +1100 "NeilBrown" <neilb(a)suse.de> wrote:

> On Tue, November 3, 2009 9:06 pm, Christoph Hellwig wrote:
> > On Mon, Nov 02, 2009 at 01:55:54PM -0800, Andrew Morton wrote:
> >> On Tue, 13 Oct 2009 12:09:55 +0200
> >> Laurent CORBES <laurent.corbes(a)smartjog.com> wrote:
> >>
> >> > Hi all,
> >> >
> >> > While benchmarking some systems I discovered a big sequential read
> >> > performance drop using ext3 on ~ big files. The drop seems to be
> >> > introduced in 2.6.30. I'm testing with 2.6.28.6 -> 2.6.29.6 ->
> >> > 2.6.30.4 -> 2.6.31.3.
> >>
> >> Seems that large performance regressions aren't of interest to this
> >> list :(
> >
> > Not sure which list you mean, but dm-devel is for dm, not md.

bah.

> > We're also
> > seeing similarly massive performance drops with md and ext3/xfs as
> > already reported on the list. Someone tracked it down to writeback
> > changes as usual, but there it got stuck.
>
> I'm still looking - running some basic tests on 4 filesystems over
> half a dozen recent kernels to see what has been happening.
>
> I have a suspicion that there are multiple problems.
> In particular, XFS has a strange degradation which was papered over
> by commit c8a4051c3731b.
> I'm beginning to wonder if it was caused by commit 17bc6c30cf6bf,
> but I haven't actually tested that yet.

I think Laurent's workload involves only reads, with no writes.
From: Neil Brown on 4 Nov 2009 02:20

On Tuesday November 3, laurent.corbes(a)smartjog.com wrote:
>
> What is really strange is that in all the tests I did, the raw md
> performance never dropped - only a few MB/s of difference between
> kernels (~2%). This may be related to the way the upper filesystem
> writes data to the md layer.

That isn't all that strange. It just says that the problem isn't with
MD, but is in some other part of Linux closer to the filesystem.

I did some tests with a range of kernels (all 'mainline', not the
'stable' versions that you used) and while I do see a noticeable dip
at 2.6.30 (except with ext3), I see improved performance in 2.6.31 and
even greater improvements with 2.6.32-rc5.

So while I confirm that 2.6.30 is worse than earlier kernels, and that
there was a general decline leading to that point, things have become
dramatically better. So I don't think it is worth exploring very
deeply.

All the numbers in the graph come from 'bonnie' over the various
filesystems on a 5-drive RAID6.

NeilBrown
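[As a companion to the O_DIRECT sketch earlier, here is the buffered side -
the path a bonnie-style streaming read exercises, going through the page
cache and readahead where the suspected writeback/readahead changes would
show up. Again an illustration, not code from the thread; the fadvise hint
and 1 MiB read size are arbitrary choices, and caches should be dropped
between runs (echo 3 > /proc/sys/vm/drop_caches) so repeat reads are not
served from memory.]

/* fsread.c - buffered sequential read of a large file through the
 * page cache, i.e. the filesystem/readahead path.
 *
 * Build: gcc -O2 -o fsread fsread.c  (add -lrt on older glibc)
 * Run:   ./fsread /mnt/test/bigfile  (path is hypothetical)
 */
#define _POSIX_C_SOURCE 200112L /* posix_fadvise, clock_gettime */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Hint that access is sequential, as a streaming read effectively is. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    static char buf[1 << 20]; /* 1 MiB per read (assumption) */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    long long done = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        done += n;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%lld bytes in %.2f s = %.1f MB/s\n", done, secs, done / secs / 1e6);

    close(fd);
    return 0;
}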