From: Andi Kleen on 2 Jan 2010 15:40

On Sat, Jan 02, 2010 at 09:11:39PM +0100, Frederic Weisbecker wrote:
> I've never lost any data since I began this work, and
> I run it every day. While I have experienced lock inversions,
> and sometimes soft lockups, I have never experienced serious
> damage. It's a journaled filesystem that can fix things up
> pretty well.

So are you confident that 2.6.33 will not have regular soft-lockups
for reiserfs users?

> Also we are talking about potential lock inversions, in potentially
> rare paths, that could potentially raise soft lockups. That makes
> a lot of potentials, for things that are going to be fixed and
> for which I've never seen serious damage.

A soft lockup is a problem. Perhaps not totally serious, but a user
who experiences them regularly would rightly consider such a release
very broken.

> We could make a new reiserfs version by duplicating the code
> base. But nobody will test it. That would require patching
> mkreiserfs, waiting for distros to ship it, and waiting for
> users to deploy those distros, assuming that by then there
> are any remaining users setting up new reiserfs partitions.

I suspect your estimates of how widely reiserfs is used are quite
off. However, as usual, a large part of the user base simply uses
what their distribution ships.

> We could also have a reiserfs-no-bkl config option that
> would pick the duplicated code base. Again I fear few people
> will test it.

That sounds reasonable; at least there would be a workaround if there
are too many problems.

> Sometimes I do. Sometimes it's just wasteful. We don't want to relax
> the lock just because of a kmalloc(__GFP_NOFS).

If that's the problem you can always split the allocation: first try
it with __GFP_NOWAIT without dropping the lock, then if that fails do
it again with full __GFP_NOFS and the lock dropped.

However, it's hard to believe that a few instructions more or less
would make much difference; I would normally expect any larger
changes to come from changed IO patterns or cache line bouncing.

> > Better some mildew than a seriously-broken-for-enough-people
> > release (although I have my doubts that's the right metaphor
> > for the BKL anyway)
> >
> > Having stable releases is an important part of getting enough
> > testers (we already have too few). And if we start breaking
> > their $HOMEs they might become even fewer.
>
> This is very unlikely to break their $HOME.

Well, it could break access to their $HOME.

-Andi

-- 
ak(a)linux.intel.com -- Speaking for myself only.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Frederic Weisbecker on 2 Jan 2010 16:00

On Sat, Jan 02, 2010 at 09:33:05PM +0100, Andi Kleen wrote:
> On Sat, Jan 02, 2010 at 09:11:39PM +0100, Frederic Weisbecker wrote:
> > I've never lost any data since I began this work, and
> > I run it every day. While I have experienced lock inversions,
> > and sometimes soft lockups, I have never experienced serious
> > damage. It's a journaled filesystem that can fix things up
> > pretty well.
>
> So are you confident that 2.6.33 will not have regular soft-lockups
> for reiserfs users?

Yep.

> > Also we are talking about potential lock inversions, in potentially
> > rare paths, that could potentially raise soft lockups. That makes
> > a lot of potentials, for things that are going to be fixed and
> > for which I've never seen serious damage.
>
> A soft lockup is a problem. Perhaps not totally serious,
> but a user who experiences them regularly would rightly consider
> such a release very broken.

If there are soft lockups, they will be rare. The same risk applies
to every new feature in Linux (which includes improvements to
existing features).

> > We could make a new reiserfs version by duplicating the code
> > base. But nobody will test it. That would require patching
> > mkreiserfs, waiting for distros to ship it, and waiting for
> > users to deploy those distros, assuming that by then there
> > are any remaining users setting up new reiserfs partitions.
>
> I suspect your estimates of how widely reiserfs is used are quite
> off. However, as usual, a large part of the user base simply uses
> what their distribution ships.

Sure, but few (none?) distributions still offer reiserfs as a
default. I'm not saying there are no users anymore, I'm just saying
there won't be new users anymore.

> > We could also have a reiserfs-no-bkl config option that
> > would pick the duplicated code base. Again I fear few people
> > will test it.
>
> That sounds reasonable, at least have a workaround if there
> are too many problems.

No, I'd be the only guy testing it. That would basically paralyze
progress, and once we merge it back as the reiserfs 3 mainline, we'd
face the same issues. The only thing that keeps this work progressing
is the reports from testers. Review of such a huge and complicated
code base is useful, but it also quickly hits its limits.

> > Sometimes I do. Sometimes it's just wasteful. We don't want to relax
> > the lock just because of a kmalloc(__GFP_NOFS).
>
> If that's the problem you can always split the allocation:
> first try it with __GFP_NOWAIT without dropping the lock, then
> if that fails do it again with full __GFP_NOFS and the lock dropped.
>
> However it's hard to believe that a few instructions
> more or less would make much difference; I would normally
> expect any larger changes to come from changed IO
> patterns or cache line bouncing.

The problem is not there. Each site where we may sleep has to be
treated as an isolated case: should we relax here or not? Depending
on the case, it's sometimes wasteful to relax even if we are going to
block, and sometimes useful to relax even where it doesn't appear to
be needed. Only review backed by benchmarks can help, which is what I
did. Blindly applying a "relax whenever we might block on IO or
something" rule is not always right.

> > > Better some mildew than a seriously-broken-for-enough-people
> > > release (although I have my doubts that's the right metaphor
> > > for the BKL anyway)
> > >
> > > Having stable releases is an important part of getting enough
> > > testers (we already have too few). And if we start breaking
> > > their $HOMEs they might become even fewer.
> >
> > This is very unlikely to break their $HOME.
>
> Well, it could break access to their $HOME.

Never experienced that.
From: Frederic Weisbecker on 2 Jan 2010 16:10

On Sat, Jan 02, 2010 at 04:01:01PM -0500, tytso(a)mit.edu wrote:
> On Sat, Jan 02, 2010 at 09:11:39PM +0100, Frederic Weisbecker wrote:
> >
> > I've never lost any data since I began this work, and
> > I run it every day. While I have experienced lock inversions,
> > and sometimes soft lockups, I have never experienced serious
> > damage. It's a journaled filesystem that can fix things up
> > pretty well.
>
> Have you tried using the xfsqa regression test suite? Despite the
> name, it will work on non-XFS filesystems (although there are some
> XFS-specific tests in the suite). Both the btrfs and ext4
> developers use it to debug their file systems, and it's a good way of
> stressing a file system in all sorts of different ways that might
> not be seen during normal desktop usage. I suspect it would be a good
> way of flushing out potential problems in reiserfs as well.
>
> Regards,

Thanks! I'm going to test it now. I've been running a stress test
from Chris Mason which basically checks for races between parallel
writes and reads. If this test suite includes more checks, like xattr
and some other things, then that's exactly what I was looking for.

I guess this is the right place to get it?

git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

Thanks.
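[Editor's note: a hedged sketch of a 2010-era xfstests run against reiserfs. The device and mount-point names below are placeholders for disposable scratch partitions, not defaults shipped by the suite.]

```shell
# Fetch and build the suite from the URL quoted above.
git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
cd xfstests-dev
make

# The suite is driven by environment variables; point TEST_* and
# SCRATCH_* at two disposable reiserfs partitions (placeholders here).
export FSTYP=reiserfs
export TEST_DEV=/dev/sdb1  TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/sdb2 SCRATCH_MNT=/mnt/scratch

sudo ./check            # run the whole suite
sudo ./check 001 002    # or run individual tests by number
```

Anything writable by the suite should be considered expendable: the scratch device in particular is mkfs'ed by many tests.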
From: tytso on 2 Jan 2010 16:10

On Sat, Jan 02, 2010 at 09:11:39PM +0100, Frederic Weisbecker wrote:
>
> I've never lost any data since I began this work, and
> I run it every day. While I have experienced lock inversions,
> and sometimes soft lockups, I have never experienced serious
> damage. It's a journaled filesystem that can fix things up
> pretty well.

Have you tried using the xfsqa regression test suite? Despite the
name, it will work on non-XFS filesystems (although there are some
XFS-specific tests in the suite). Both the btrfs and ext4 developers
use it to debug their file systems, and it's a good way of stressing
a file system in all sorts of different ways that might not be seen
during normal desktop usage. I suspect it would be a good way of
flushing out potential problems in reiserfs as well.

Regards,

- Ted
From: Ingo Molnar on 2 Jan 2010 16:20

* Andi Kleen <andi(a)firstfloor.org> wrote:

> > We could also have a reiserfs-no-bkl config option that would pick the
> > duplicated code base. Again I fear few people will test it.
>
> That sounds reasonable, at least have a workaround if there are too many
> problems.

Uhm, I think Frederic meant that a bit mockingly. Judging from past
experience with the BKL, introducing such a config option is one of
the worst options possible. IMHO you are giving pretty bad unsolicited
advice here.

Thanks,

	Ingo