From: Dave Chinner on 29 Mar 2010 18:30

On Mon, Mar 29, 2010 at 06:29:00PM +0200, Peter Zijlstra wrote:
> On Mon, 2010-03-29 at 12:11 -0400, Rik van Riel wrote:
> > On 03/27/2010 08:43 AM, Kent Overstreet wrote:
> > > commit 5beb49305251e5669852ed541e8e2f2f7696c53e
> > > Author: Rik van Riel <riel(a)redhat.com>
> > > Date: Fri Mar 5 13:42:07 2010 -0800
> > >
> > >     mm: change anon_vma linking to fix multi-process server
> > >     scalability issue
> > >
> > > I get this when starting kvm. The warning hasn't caused me problems,
> > > but I've also been getting a scheduling-while-atomic panic when I
> > > start kvm that I can only reproduce when I don't want to. It's
> > > definitely config dependent; I'd guess preempt might have something
> > > to do with it.
> >
> > From your trace, it looks like mm_take_all_locks is taking close
> > to 256 locks, which is where the preempt_count could overflow into
> > the softirq count.
> >
> > Since kvm-qemu is exec'd, I am guessing you have a very large
> > number of VMAs in your qemu process. Is that correct?
> >
> > Peter, would it be safe to increase PREEMPT_BITS to e.g. 10?
>
> Possibly, but who's to say the thing won't bloat to 65k, at which point
> it'll hit the vma limit; and even that can be grown beyond 65k.

This issue came up a few years ago with the per-cpu superblock counters
in XFS, which used one spinlock per CPU, all of which had to be held at
synchronisation/rebalance time. A 256p machine would fall over doing
this, and there was great resistance to increasing the preempt count
field size.

Instead, I changed the spinlocks to use a bit in a flag word in the
per-cpu structure and used a test_and_set_bit() loop to emulate a
spinlock. Then, by adding an external preempt_disable()/enable() for
the fast and slow paths, they ultimately behave like spinlocks but
without causing preempt count windup.

I'm not suggesting that this is the solution to the current problem
case, just indicating that we've been here before and that there are
ways of avoiding preempt count windup in cases where lots of critical
areas need to be locked out simultaneously....

Cheers,

Dave.
--
Dave Chinner
david(a)fromorbit.com
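For illustration, a minimal sketch of the scheme Dave describes
(hypothetical names, not the actual XFS code): the lock is a single bit
in a per-cpu flags word, taken with a test_and_set_bit() loop, and one
external preempt_disable() covers the whole locking pass. Locking N
CPUs then raises the preempt count by one rather than by N, which is
exactly what overflows the 8-bit preempt field when close to 256 real
spinlocks are nested.

    /*
     * Sketch only: emulate per-cpu spinlocks with a flag bit so that
     * the slow path can lock every CPU without preempt count windup.
     */
    #include <linux/bitops.h>
    #include <linux/percpu.h>
    #include <linux/preempt.h>
    #include <asm/processor.h>          /* cpu_relax() */

    #define PCPU_LOCKED     0           /* lock bit in ->flags */

    struct pcpu_thing {                 /* hypothetical per-cpu data */
            unsigned long   flags;
            long            count;
    };

    static DEFINE_PER_CPU(struct pcpu_thing, pcpu_things);

    static inline void pcpu_lock(struct pcpu_thing *p)
    {
            /*
             * Spin on one bit instead of a spinlock_t; taking it does
             * not touch the preempt count. The caller must already
             * have disabled preemption.
             */
            while (test_and_set_bit_lock(PCPU_LOCKED, &p->flags))
                    cpu_relax();
    }

    static inline void pcpu_unlock(struct pcpu_thing *p)
    {
            clear_bit_unlock(PCPU_LOCKED, &p->flags);
    }

    /*
     * Slow path: lock out every CPU. A single preempt_disable()
     * covers all of them, so the preempt count goes up by exactly one
     * no matter how many CPUs there are.
     */
    static void pcpu_lock_all(void)
    {
            int cpu;

            preempt_disable();
            for_each_possible_cpu(cpu)
                    pcpu_lock(&per_cpu(pcpu_things, cpu));
    }

    static void pcpu_unlock_all(void)
    {
            int cpu;

            for_each_possible_cpu(cpu)
                    pcpu_unlock(&per_cpu(pcpu_things, cpu));
            preempt_enable();
    }

The fast path would likewise wrap a single pcpu_lock()/pcpu_unlock()
pair in preempt_disable()/preempt_enable(), so each per-cpu "lock"
keeps ordinary spinlock semantics while the preempt count never rises
by more than one.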
From: KOSAKI Motohiro on 29 Mar 2010 20:40

> On Mon, 2010-03-29 at 18:29 +0200, Peter Zijlstra wrote:
> >
> > Peter, would it be safe to increase PREEMPT_BITS to e.g. 10?
>
> One reason this all sucks massively is that nesting that many spinlocks
> creates a terribly large !preempt section.
>
> There is nothing that stops someone from creating 64k vmas (or more,
> when someone raises that sysctl) and trying this; that's just utter
> suckage.

Off-topic: we plan to raise the default maximum VMA limit above 64k on
64-bit within the next year or two. The limit was originally needed
because gdb couldn't parse more than 64k VMAs (i.e. it mainly matters
for core dumps), but that problem has since been solved.
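To make Peter's point concrete, a small hypothetical test program (not
from this thread) can run straight into that sysctl, vm.max_map_count,
which defaults to roughly 64k, as long as it defeats VMA merging:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
            long pagesz = sysconf(_SC_PAGESIZE);
            long n = 0;

            for (;;) {
                    /*
                     * Alternate protections so adjacent anonymous
                     * mappings cannot be merged into a single VMA.
                     */
                    int prot = (n & 1) ? PROT_READ
                                       : PROT_READ | PROT_WRITE;
                    void *p = mmap(NULL, pagesz, prot,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                    if (p == MAP_FAILED)  /* ENOMEM at the limit */
                            break;
                    n++;
            }
            printf("created %ld mappings before mmap() failed\n", n);
            return 0;
    }

Each mapping stays a separate VMA only because its neighbours differ
in protection; with identical flags the kernel would merge adjacent
mappings and the loop would never approach the limit.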
From: Maciej Rutecki on 1 Apr 2010 15:00

On Saturday, 27 March 2010 at 13:43:32, Kent Overstreet wrote:
> commit 5beb49305251e5669852ed541e8e2f2f7696c53e
> Author: Rik van Riel <riel(a)redhat.com>
> Date: Fri Mar 5 13:42:07 2010 -0800
>
>     mm: change anon_vma linking to fix multi-process server
>     scalability issue

I created a Bugzilla entry at
https://bugzilla.kernel.org/show_bug.cgi?id=15672
for your bug report. Please add your address to the CC list there,
thanks!

--
Maciej Rutecki
http://www.maciek.unixy.pl
From: Peter Zijlstra on 21 Apr 2010 12:10
On Wed, 2010-04-21 at 17:57 +0200, Rafael J. Wysocki wrote:
>
> OK to close as "will fix later"?

Sure