From: Lai Jiangshan on 7 Apr 2010 09:40

Oleg Nesterov wrote:
> On 04/05, Oleg Nesterov wrote:
>> On 04/05, Lai Jiangshan wrote:
>>> 1) get_online_cpus() must be allowed to be called recursively, so I added
>>> get_online_cpus_nest for every task for new code.
>>
>> Well, iirc one of the goals of
>>
>> cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus()
>> 86ef5c9a8edd78e6bf92879f32329d89b2d55b5a
>>
>> was avoiding the new members in task_struct. I leave this up to you
>> and Gautham.

The old get_online_cpus() is read-preference; I think the goal of this
ability is to allow get_online_cpus()/put_online_cpus() to be called
nested. But a read-preference RWL may cause write-side starvation, so I
abandoned this ability and use a per-task counter to allow
get_online_cpus()/put_online_cpus() to be called nested. I think this
trade-off is absolutely worth it.

>>
>> Lai, I didn't read this patch carefully yet (and I can't apply it to
>> Linus's tree). But at first glance,
>
> because I tried to apply it without 1/2 ;)
>
>>> void put_online_cpus(void)
>>> {
>>> 	...
>>> +	if (!--current->get_online_cpus_nest) {
>>> +		preempt_disable();
>>> +		__get_cpu_var(refcount)--;
>>> +		if (cpu_hotplug_task)
>>> +			wake_up_process(cpu_hotplug_task);
>>
>> This looks unsafe. In theory nothing protects cpu_hotplug_task from
>> exiting if refcount_sum() becomes zero, which means wake_up_process()
>> can hit a freed/reused/unmapped task_struct. Probably cpu_hotplug_done()
>> needs another synchronize_sched() before returning.
>
> Yes, I think this is true, at least in theory.

preempt_disable() prevents cpu_hotplug_task from exiting.

Thanks,
Lai

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Lai Jiangshan on 12 Apr 2010 05:30

Oleg Nesterov wrote:
> On 04/07, Oleg Nesterov wrote:
>> On 04/07, Lai Jiangshan wrote:
>>> The old get_online_cpus() is read-preference; I think the goal of this ability
>>> is to allow get_online_cpus()/put_online_cpus() to be called nested.
>>
>> Sure, I understand why you added task_struct->get_online_cpus_nest.
>>
>>> and use a per-task counter for allowing get_online_cpus()/put_online_cpus()
>>> to be called nested, I think this trade-off is absolutely worth it.
>>
>> As I said, I am not going to argue. I can't justify this tradeoff.
>
> But, I must admit, I'd like to avoid adding the new member to task_struct.
>
> What do you think about the code below?
>
> I didn't even try to compile it, just to explain what I mean.
>
> In short: we have the per-cpu fast counters, plus the slow counter
> which is only used when cpu_hotplug_begin() is in progress.
>
> Oleg.

get_online_cpus() in your code is still read-preference.
I wish we could drop this ability of get_online_cpus().

Lai.
From: Peter Zijlstra on 12 Apr 2010 05:40

On Mon, 2010-04-12 at 17:24 +0800, Lai Jiangshan wrote:
> Oleg Nesterov wrote:
> > On 04/07, Oleg Nesterov wrote:
> >> On 04/07, Lai Jiangshan wrote:
> >>> The old get_online_cpus() is read-preference; I think the goal of this ability
> >>> is to allow get_online_cpus()/put_online_cpus() to be called nested.
> >>
> >> Sure, I understand why you added task_struct->get_online_cpus_nest.
> >>
> >>> and use a per-task counter for allowing get_online_cpus()/put_online_cpus()
> >>> to be called nested, I think this trade-off is absolutely worth it.
> >>
> >> As I said, I am not going to argue. I can't justify this tradeoff.
> >
> > But, I must admit, I'd like to avoid adding the new member to task_struct.
> >
> > What do you think about the code below?
> >
> > I didn't even try to compile it, just to explain what I mean.
> >
> > In short: we have the per-cpu fast counters, plus the slow counter
> > which is only used when cpu_hotplug_begin() is in progress.
> >
> > Oleg.
>
> get_online_cpus() in your code is still read-preference.
> I wish we could drop this ability of get_online_cpus().

Why?
From: Lai Jiangshan on 12 Apr 2010 08:40

Peter Zijlstra wrote:
> On Mon, 2010-04-12 at 17:24 +0800, Lai Jiangshan wrote:
>> Oleg Nesterov wrote:
>>> On 04/07, Oleg Nesterov wrote:
>>>> On 04/07, Lai Jiangshan wrote:
>>>>> The old get_online_cpus() is read-preference; I think the goal of this ability
>>>>> is to allow get_online_cpus()/put_online_cpus() to be called nested.
>>>>
>>>> Sure, I understand why you added task_struct->get_online_cpus_nest.
>>>>
>>>>> and use a per-task counter for allowing get_online_cpus()/put_online_cpus()
>>>>> to be called nested, I think this trade-off is absolutely worth it.
>>>>
>>>> As I said, I am not going to argue. I can't justify this tradeoff.
>>>
>>> But, I must admit, I'd like to avoid adding the new member to task_struct.
>>>
>>> What do you think about the code below?
>>>
>>> I didn't even try to compile it, just to explain what I mean.
>>>
>>> In short: we have the per-cpu fast counters, plus the slow counter
>>> which is only used when cpu_hotplug_begin() is in progress.
>>>
>>> Oleg.
>>>
>> get_online_cpus() in your code is still read-preference.
>> I wish we could drop this ability of get_online_cpus().
>
> Why?

Because a read-preference RWL will cause write-side starvation.
A user running the following code will starve CPU hotplug
(100 processes hammering sched_setaffinity()):
Lai

#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>

#define NCPU 4
#define NPROCESS 100

cpu_set_t set;
pid_t target;

void stress_test(void)
{
	int cpu;

	srand((int)target);
	for (;;) {
		cpu = rand() % NCPU;
		CPU_SET(cpu, &set);
		sched_setaffinity(target, sizeof(set), &set);
		CPU_CLR(cpu, &set);
	}
}

int main(int argc, char *argv[])
{
	pid_t ret;
	int i;

	target = getpid();

	for (i = 1; i < NPROCESS; i++) {
		ret = fork();
		if (ret < 0)
			break;
		else if (ret)
			target = ret;
		else
			stress_test();
	}

	stress_test();
}
From: Peter Zijlstra on 12 Apr 2010 08:40

On Mon, 2010-04-12 at 20:30 +0800, Lai Jiangshan wrote:
> Because a read-preference RWL will cause write-side starvation.

Sure, but why is that a problem? Hotplug should be a rare event; who
cares if it takes a while to come through.