From: Sachin Sant on 13 Nov 2009 04:10

Peter Zijlstra wrote:
> So what we need to do is make the whole of select_task_rq_fair()
> cpu_online/active_mask aware, or give up and simply punt:
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 1f2e99d..62df61c 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -2377,6 +2377,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
>  	task_rq_unlock(rq, &flags);
>
>  	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
> +	if (!cpu_active(cpu))
> +		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
> +
>  	if (cpu != orig_cpu) {
>  		local_irq_save(flags);
>  		rq = cpu_rq(cpu);
>
>
> Something I think Mike also tried and didn't deadlock for him..
>
> Sachin, Mike, could you try the above snippet and verify if it does
> indeed solve your respective issues?
>
Unfortunately the above patch made things worse. With this patch the
machine failed to boot with the following oops:

CPU0: Dual-Core AMD Opteron(tm) Processor 2218 stepping 02
BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
IP: [<ffffffff81061f17>] set_task_cpu+0x189/0x1ed
PGD 0
Oops: 0000 [#1] SMP
last sysfs file:
CPU 0
Modules linked in:
Pid: 3, comm: kthreadd Not tainted 2.6.32-rc7-next-20091113 #1 BladeCenter LS21 -[79716AA]-
RIP: 0010:[<ffffffff81061f17>]  [<ffffffff81061f17>] set_task_cpu+0x189/0x1ed
RSP: 0018:ffff88012b357dd0  EFLAGS: 00010046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000004
RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88012b340000
RBP: ffff88012b357e10 R08: 0000000000000004 R09: ffff88012b3401f8
R10: 00000000000cffa7 R11: 0000000000000000 R12: ffff88012b340000
R13: 000000000c28ccf6 R14: 0000000000000004 R15: ffff880028214cc0
FS:  0000000000000000(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 0000000000000020 CR3: 000000000174e000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kthreadd (pid: 3, threadinfo ffff88012b356000, task ffff88012b3431c0)
Stack:
 ffff880028214d20 0000000000000000 0000000028215640 0000000000000000
<0> ffff88012b340000 0000000000000001 ffff880028214cc0 0000000000000000
<0> ffff88012b357e60 ffffffff81063a75 0000000000000000 0000000000000000
Call Trace:
 [<ffffffff81063a75>] try_to_wake_up+0x103/0x31f
 [<ffffffff81063c9e>] default_wake_function+0xd/0xf
 [<ffffffff810519a7>] __wake_up_common+0x46/0x76
 [<ffffffff810648ae>] ? migration_thread+0x0/0x285
 [<ffffffff810577c8>] complete+0x38/0x4b
 [<ffffffff8108040a>] kthread+0x67/0x85
 [<ffffffff810298fa>] child_rip+0xa/0x20
 [<ffffffff810803a3>] ? kthread+0x0/0x85
 [<ffffffff810298f0>] ? child_rip+0x0/0x20
Code: 00 8b 05 dd d7 df 04 85 c0 74 19 45 31 c0 31 c9 ba 01 00 00 00 be 01 00 00 00 bf 04 00 00 00 e8 79 02 07 00 48 8b 55 c8 44 89 f1 <48> 8b 42 20 48 8b 55 c0 49 03 84 24 88 00 00 00 48 2b 42 20 49
RIP  [<ffffffff81061f17>] set_task_cpu+0x189/0x1ed
 RSP <ffff88012b357dd0>
CR2: 0000000000000020
---[ end trace 4eaa2a86a8e2da22 ]---

I tried this with today's next (2.6.32-rc7-20091113) + the above patch.
Here is how the code looks after applying the patch:

	task_rq_unlock(rq, &flags);

	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
	if (!cpu_active(cpu))
		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
	if (cpu != orig_cpu)
		set_task_cpu(p, cpu);

Thanks
-Sachin

--
---------------------------------
Sachin Sant
IBM Linux Technology Center
India Systems and Technology Labs
Bangalore, India
---------------------------------
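One failure mode the snippet as tested leaves open (an assumption worth checking, not something the trace above proves): cpumask_any_and() signals "no CPU found" by returning a value >= nr_cpu_ids, and nothing here validates the fallback result before it is used as a CPU index. A minimal sketch of that gap, mirroring the tested code:

	/*
	 * Sketch only: this mirrors the snippet under test to show the
	 * unchecked case, it is not a proposed fix.
	 */
	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
	if (!cpu_active(cpu))
		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
	/*
	 * If p->cpus_allowed and cpu_active_mask share no bits (e.g. during
	 * early boot or a hotplug transition), cpu is now nr_cpu_ids, which
	 * is not a valid CPU number...
	 */
	if (cpu != orig_cpu)
		set_task_cpu(p, cpu);	/* ...but it is still passed on as a CPU index here */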
From: Peter Zijlstra on 13 Nov 2009 04:10

On Fri, 2009-11-13 at 14:30 +0530, Sachin Sant wrote:
> Peter Zijlstra wrote:
> > So what we need to do is make the whole of select_task_rq_fair()
> > cpu_online/active_mask aware, or give up and simply punt:
> >
> > diff --git a/kernel/sched.c b/kernel/sched.c
> > index 1f2e99d..62df61c 100644
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -2377,6 +2377,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
> >  	task_rq_unlock(rq, &flags);
> >
> >  	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
> > +	if (!cpu_active(cpu))
> > +		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
> > +
> >  	if (cpu != orig_cpu) {
> >  		local_irq_save(flags);
> >  		rq = cpu_rq(cpu);
> >
> >
> > Something I think Mike also tried and didn't deadlock for him..
> >
> > Sachin, Mike, could you try the above snippet and verify if it does
> > indeed solve your respective issues?
> >
> Unfortunately the above patch made things worse. With this patch
> the machine failed to boot with following oops.

Ugh, more head scratching for me then.. Thanks for testing.
From: Gautham R Shenoy on 13 Nov 2009 05:00

On Thu, Nov 12, 2009 at 06:10:31PM +0100, Peter Zijlstra wrote:
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 1f2e99d..62df61c 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -2377,6 +2377,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
>  	task_rq_unlock(rq, &flags);
>

How about this?

again:
	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
	if (!cpu_online(cpu))
		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
	if (!cpu) {
		set_task_affinity();
		goto again;
	}

> +
>  	if (cpu != orig_cpu) {
>  		local_irq_save(flags);
>  		rq = cpu_rq(cpu);

Will it help further narrow down the window?

>
> Something I think Mike also tried and didn't deadlock for him..
>
> Sachin, Mike, could you try the above snippet and verify if it does
> indeed solve your respective issues?
>
> /me prays it does, because otherwise I'm fresh out of clue...

--
Thanks and Regards
gautham
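A note on the empty-mask case in the pseudo-code above: cpumask_any_and() reports "nothing found" by returning a CPU number >= nr_cpu_ids, never 0, so a !cpu test would also fire for a perfectly valid CPU 0. An illustrative helper showing the convention (pick_fallback_cpu() is hypothetical, not from any tree); the patch below tests cpu >= nr_cpu_ids for the same reason:

/* Illustrative only: a hypothetical wrapper around the fallback pick. */
static int pick_fallback_cpu(struct task_struct *p)
{
	int cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);

	if (cpu >= nr_cpu_ids)
		return -1;	/* empty intersection: caller must widen the affinity and retry */

	return cpu;
}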
From: Peter Zijlstra on 13 Nov 2009 05:20

On Fri, 2009-11-13 at 15:28 +0530, Gautham R Shenoy wrote:
> On Thu, Nov 12, 2009 at 06:10:31PM +0100, Peter Zijlstra wrote:
> >
> > diff --git a/kernel/sched.c b/kernel/sched.c
> > index 1f2e99d..62df61c 100644
> > --- a/kernel/sched.c
> > +++ b/kernel/sched.c
> > @@ -2377,6 +2377,9 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
> >  	task_rq_unlock(rq, &flags);
> >
>
> How about this?
>
> again:
> 	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
> 	if (!cpu_online(cpu))
> 		cpu = cpumask_any_and(&p->cpus_allowed, cpu_active_mask);
> 	if (!cpu) {
> 		set_task_affinity();
> 		goto again;
> 	}
>
> > +
> >  	if (cpu != orig_cpu) {
> >  		local_irq_save(flags);
> >  		rq = cpu_rq(cpu);

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -2376,7 +2376,15 @@ static int try_to_wake_up(struct task_st
 	p->state = TASK_WAKING;
 	__task_rq_unlock(rq);
 
+again:
 	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
+	if (!cpu_online(cpu))
+		cpu = cpumask_any_and(&p->cpus_allowed, cpu_online_mask);
+	if (cpu >= nr_cpu_ids) {
+		cpuset_cpus_allowed_locked(p, &p->cpus_allowed);
+		goto again;
+	}
+
 	if (cpu != orig_cpu) {
 		rq = cpu_rq(cpu);
 		update_rq_clock(rq);

is what I stuck in and am compiling now.. we'll see what that does.
From: Peter Zijlstra on 13 Nov 2009 05:40

On Fri, 2009-11-13 at 11:16 +0100, Peter Zijlstra wrote:
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -2376,7 +2376,15 @@ static int try_to_wake_up(struct task_st
>  	p->state = TASK_WAKING;
>  	__task_rq_unlock(rq);
>
> +again:
>  	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
> +	if (!cpu_online(cpu))
> +		cpu = cpumask_any_and(&p->cpus_allowed, cpu_online_mask);
> +	if (cpu >= nr_cpu_ids) {
> +		cpuset_cpus_allowed_locked(p, &p->cpus_allowed);
> +		goto again;
> +	}
> +
>  	if (cpu != orig_cpu) {
>  		rq = cpu_rq(cpu);
>  		update_rq_clock(rq);
>
> is what I stuck in and am compiling now.. we'll see what that does.

Well, it boots for me, but then, I've not been able to reproduce any
issues anyway :/

/me goes try a PREEMPT=n kernel, since that is what Mike reports boot
funnies with..

Full running diff against -tip:

---
diff --git a/kernel/sched.c b/kernel/sched.c
index 1f2e99d..7089063 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2374,17 +2374,24 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state,
 	if (task_contributes_to_load(p))
 		rq->nr_uninterruptible--;
 	p->state = TASK_WAKING;
-	task_rq_unlock(rq, &flags);
+	__task_rq_unlock(rq);
 
+again:
 	cpu = p->sched_class->select_task_rq(p, SD_BALANCE_WAKE, wake_flags);
+	if (!cpu_online(cpu))
+		cpu = cpumask_any_and(&p->cpus_allowed, cpu_online_mask);
+	if (cpu >= nr_cpu_ids) {
+		printk(KERN_ERR "Breaking affinity on %d/%s\n", p->pid, p->comm);
+		cpuset_cpus_allowed_locked(p, &p->cpus_allowed);
+		goto again;
+	}
+
 	if (cpu != orig_cpu) {
-		local_irq_save(flags);
 		rq = cpu_rq(cpu);
 		update_rq_clock(rq);
 		set_task_cpu(p, cpu);
-		local_irq_restore(flags);
 	}
-	rq = task_rq_lock(p, &flags);
+	rq = __task_rq_lock(p);
 
 	WARN_ON(p->state != TASK_WAKING);
 	cpu = task_cpu(p);
@@ -7620,6 +7627,8 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	unsigned long flags;
 	struct rq *rq;
 
+	printk(KERN_ERR "migration call\n");
+
 	switch (action) {
 
 	case CPU_UP_PREPARE:
@@ -9186,6 +9195,8 @@ int __init sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
 static int update_sched_domains(struct notifier_block *nfb,
 				unsigned long action, void *hcpu)
 {
+	printk(KERN_ERR "update_sched_domains\n");
+
 	switch (action) {
 	case CPU_ONLINE:
 	case CPU_ONLINE_FROZEN:
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 5488a5d..0ff21af 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1345,6 +1345,37 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 }
 
 /*
+ * Try and locate an idle CPU in the sched_domain.
+ */
+static int
+select_idle_sibling(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	int cpu = smp_processor_id();
+	int prev_cpu = task_cpu(p);
+	int i;
+
+	/*
+	 * If this domain spans both cpu and prev_cpu (see the SD_WAKE_AFFINE
+	 * test in select_task_rq_fair) and the prev_cpu is idle then that's
+	 * always a better target than the current cpu.
+	 */
+	if (target == cpu && !cpu_rq(prev_cpu)->cfs.nr_running)
+		return prev_cpu;
+
+	/*
+	 * Otherwise, iterate the domain and find an elegible idle cpu.
+	 */
+	for_each_cpu_and(i, sched_domain_span(sd), &p->cpus_allowed) {
+		if (!cpu_rq(i)->cfs.nr_running) {
+			target = i;
+			break;
+		}
+	}
+
+	return target;
+}
+
+/*
  * sched_balance_self: balance the current task (running on cpu) in domains
  * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
  * SD_BALANCE_EXEC.
@@ -1398,37 +1429,34 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 			want_sd = 0;
 		}
 
-		if (want_affine && (tmp->flags & SD_WAKE_AFFINE)) {
-			int candidate = -1, i;
+		/*
+		 * While iterating the domains looking for a spanning
+		 * WAKE_AFFINE domain, adjust the affine target to any idle cpu
+		 * in cache sharing domains along the way.
+		 */
+		if (want_affine) {
+			int target = -1;
 
+			/*
+			 * If both cpu and prev_cpu are part of this domain,
+			 * cpu is a valid SD_WAKE_AFFINE target.
+			 */
 			if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
-				candidate = cpu;
+				target = cpu;
 
 			/*
-			 * Check for an idle shared cache.
+			 * If there's an idle sibling in this domain, make that
+			 * the wake_affine target instead of the current cpu.
 			 */
-			if (tmp->flags & SD_PREFER_SIBLING) {
-				if (candidate == cpu) {
-					if (!cpu_rq(prev_cpu)->cfs.nr_running)
-						candidate = prev_cpu;
-				}
+			if (tmp->flags & SD_PREFER_SIBLING)
+				target = select_idle_sibling(p, tmp, target);
 
-				if (candidate == -1 || candidate == cpu) {
-					for_each_cpu(i, sched_domain_span(tmp)) {
-						if (!cpumask_test_cpu(i, &p->cpus_allowed))
-							continue;
-						if (!cpu_rq(i)->cfs.nr_running) {
-							candidate = i;
-							break;
-						}
-					}
+			if (target >= 0) {
+				if (tmp->flags & SD_WAKE_AFFINE) {
+					affine_sd = tmp;
+					want_affine = 0;
 				}
-			}
-
-			if (candidate >= 0) {
-				affine_sd = tmp;
-				want_affine = 0;
-				cpu = candidate;
+				cpu = target;
 			}
 		}
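For readability, this is how the reworked want_affine block would read with the sched_fair.c hunk applied, reconstructed from the '+' lines above (whitespace approximate):

		/*
		 * While iterating the domains looking for a spanning
		 * WAKE_AFFINE domain, adjust the affine target to any idle cpu
		 * in cache sharing domains along the way.
		 */
		if (want_affine) {
			int target = -1;

			/*
			 * If both cpu and prev_cpu are part of this domain,
			 * cpu is a valid SD_WAKE_AFFINE target.
			 */
			if (cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
				target = cpu;

			/*
			 * If there's an idle sibling in this domain, make that
			 * the wake_affine target instead of the current cpu.
			 */
			if (tmp->flags & SD_PREFER_SIBLING)
				target = select_idle_sibling(p, tmp, target);

			if (target >= 0) {
				if (tmp->flags & SD_WAKE_AFFINE) {
					affine_sd = tmp;
					want_affine = 0;
				}
				cpu = target;
			}
		}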