From: Peter Zijlstra on 18 Feb 2010 08:20

On Thu, 2010-02-18 at 09:20 +1100, Michael Neuling wrote:
> > Suppose for a moment we have 2 threads (hot-unplugged thread 1 and 3, we
> > can construct an equivalent but more complex example for 4 threads), and
> > we have 4 tasks, 3 SCHED_OTHER of equal nice level and 1 SCHED_FIFO, the
> > SCHED_FIFO task will consume exactly 50% walltime of whatever cpu it
> > ends up on.
> >
> > In that situation, provided that each cpu's cpu_power is of equal
> > measure, scale_rt_power() ensures that we run 2 SCHED_OTHER tasks on the
> > cpu that doesn't run the RT task, and 1 SCHED_OTHER task next to the RT
> > task, so that each task consumes 50%, which is all fair and proper.
> >
> > However, if you do the above, thread 0 will have +75% = 1.75 and thread
> > 2 will have -75% = 0.25, then if the RT task will land on thread 0,
> > we'll be having: 0.875 vs 0.25, or on thread 3, 1.75 vs 0.125. In either
> > case thread 0 will receive too many (if not all) SCHED_OTHER tasks.
> >
> > That is, unless these threads 2 and 3 really are _that_ weak, at which
> > point one wonders why IBM bothered with the silicon ;-)
>
> Peter,
>
> 2 & 3 aren't weaker than 0 & 1 but....
>
> The core has dynamic SMT mode switching which is controlled by the
> hypervisor (IBM's PHYP). There are 3 SMT modes:
>   SMT1 uses thread 0
>   SMT2 uses threads 0 & 1
>   SMT4 uses threads 0, 1, 2 & 3
> When in any particular SMT mode, all threads have the same performance
> as each other (ie. at any moment in time, all threads perform the same).
>
> The SMT mode switching works such that when linux has threads 2 & 3 idle
> and 0 & 1 active, it will cede (H_CEDE hypercall) threads 2 and 3 in the
> idle loop and the hypervisor will automatically switch to SMT2 for that
> core (independent of other cores). The opposite is not true, so if
> threads 0 & 1 are idle and 2 & 3 are active, we will stay in SMT4 mode.
>
> Similarly if thread 0 is active and threads 1, 2 & 3 are idle, we'll go
> into SMT1 mode.
>
> If we can get the core into a lower SMT mode (SMT1 is best), the threads
> will perform better (since they share less core resources). Hence when
> we have idle threads, we want them to be the higher ones.

Just out of curiosity, is this a hardware constraint or a hypervisor
constraint?

> So to answer your question, threads 2 and 3 aren't weaker than the other
> threads when in SMT4 mode. It's that if we idle threads 2 & 3, threads
> 0 & 1 will speed up since we'll move to SMT2 mode.
>
> I'm pretty vague on linux scheduler details, so I'm a bit at sea as to
> how to solve this. Can you suggest any mechanisms we currently have in
> the kernel to reflect these properties, or do you think we need to
> develop something new? If so, any pointers as to where we should look?

Well there currently isn't one, and I've been telling people to create a
new SD_flag to reflect this and influence the f_b_g() behaviour.

Something like the below perhaps, totally untested and without comments
so that you'll have to reverse engineer and validate my thinking.

There's one fundamental assumption, and one weakness in the implementation.
---
 include/linux/sched.h |    2 +-
 kernel/sched_fair.c   |   61 +++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0eef87b..42fa5c6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -849,7 +849,7 @@ enum cpu_idle_type {
 #define SD_POWERSAVINGS_BALANCE 0x0100  /* Balance for power savings */
 #define SD_SHARE_PKG_RESOURCES  0x0200  /* Domain members share cpu pkg resources */
 #define SD_SERIALIZE            0x0400  /* Only a single load balancing instance */
-
+#define SD_ASYM_PACKING         0x0800
 #define SD_PREFER_SIBLING       0x1000  /* Prefer to place tasks in a sibling domain */
 
 enum powersavings_balance_level {
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index ff7692c..7e42bfe 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2086,6 +2086,7 @@ struct sd_lb_stats {
 	struct sched_group *this;  /* Local group in this sd */
 	unsigned long total_load;  /* Total load of all groups in sd */
 	unsigned long total_pwr;   /* Total power of all groups in sd */
+	unsigned long total_nr_running;
 	unsigned long avg_load;	   /* Average load across all groups in sd */
 
 	/** Statistics of this group */
@@ -2414,10 +2415,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 			int *balance, struct sg_lb_stats *sgs)
 {
 	unsigned long load, max_cpu_load, min_cpu_load;
-	int i;
 	unsigned int balance_cpu = -1, first_idle_cpu = 0;
 	unsigned long sum_avg_load_per_task;
 	unsigned long avg_load_per_task;
+	int i;
 
 	if (local_group)
 		balance_cpu = group_first_cpu(group);
@@ -2493,6 +2494,28 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 		DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE);
 }
 
+static int update_sd_pick_busiest(struct sched_domain *sd,
+				  struct sd_lb_stats *sds,
+				  struct sched_group *sg,
+				  struct sg_lb_stats *sgs)
+{
+	if (sgs->sum_nr_running > sgs->group_capacity)
+		return 1;
+
+	if (sgs->group_imb)
+		return 1;
+
+	if ((sd->flags & SD_ASYM_PACKING) && sgs->sum_nr_running) {
+		if (!sds->busiest)
+			return 1;
+
+		if (group_first_cpu(sds->busiest) < group_first_cpu(group))
+			return 1;
+	}
+
+	return 0;
+}
+
 /**
  * update_sd_lb_stats - Update sched_group's statistics for load balancing.
  * @sd: sched_domain whose statistics are to be updated.
@@ -2533,6 +2556,7 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 
 		sds->total_load += sgs.group_load;
 		sds->total_pwr += group->cpu_power;
+		sds->total_nr_running += sgs.sum_nr_running;
 
 		/*
 		 * In case the child domain prefers tasks go to siblings
@@ -2547,9 +2571,8 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 			sds->this = group;
 			sds->this_nr_running = sgs.sum_nr_running;
 			sds->this_load_per_task = sgs.sum_weighted_load;
-		} else if (sgs.avg_load > sds->max_load &&
-			   (sgs.sum_nr_running > sgs.group_capacity ||
-			    sgs.group_imb)) {
+		} else if (sgs.avg_load >= sds->max_load &&
+			   update_sd_pick_busiest(sd, sds, group, &sgs)) {
 			sds->max_load = sgs.avg_load;
 			sds->busiest = group;
 			sds->busiest_nr_running = sgs.sum_nr_running;
@@ -2562,6 +2585,33 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 	} while (group != sd->groups);
 }
 
+static int check_asym_packing(struct sched_domain *sd,
+			      struct sd_lb_stats *sds,
+			      int cpu, unsigned long *imbalance)
+{
+	int i, cpu, busiest_cpu;
+
+	if (!(sd->flags & SD_ASYM_PACKING))
+		return 0;
+
+	if (!sds->busiest)
+		return 0;
+
+	i = 0;
+	busiest_cpu = group_first_cpu(sds->busiest);
+	for_each_cpu(cpu, sched_domain_span(sd)) {
+		i++;
+		if (cpu == busiest_cpu)
+			break;
+	}
+
+	if (sds->total_nr_running > i)
+		return 0;
+
+	*imbalance = sds->max_load;
+	return 1;
+}
+
 /**
  * fix_small_imbalance - Calculate the minor imbalance that exists
  *			amongst the groups of a sched_domain, during
@@ -2761,6 +2811,9 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	return sds.busiest;
 
 out_balanced:
+	if (check_asym_packing(sd, &sds, this_cpu, imbalance))
+		return sds.busiest;
+
 	/*
 	 * There is no obvious imbalance. But check if we can do some balancing
 	 * to save power.

From: Peter Zijlstra on 18 Feb 2010 12:10

On Thu, 2010-02-18 at 10:28 -0600, Joel Schopp wrote:
> > There's one fundamental assumption, and one weakness in the
> > implementation.
>
> I'm going to guess the weakness is that it doesn't adjust the cpu power
> so tasks running in SMT1 mode actually get more than they account for?

No, but you're right, if these SMTx modes are running at different
frequencies then yes that needs to happen as well.

The weakness is failing to do the right thing in the presence of a
'strategically' placed RT task. Suppose:

  Sibling0, Sibling1, Sibling2, Sibling3
  idle      OTHER     OTHER     FIFO

it might not manage to migrate a task to 0 because it ends up selecting
3 as busiest. It doesn't at all influence RT placement, but it does look
at nr_running (which does include RT tasks).

> What's the assumption?

That cpu_of(Sibling n) < cpu_of(Sibling n+1)
From: Peter Zijlstra on 19 Feb 2010 05:10

On Fri, 2010-02-19 at 17:05 +1100, Michael Neuling wrote:
> >  include/linux/sched.h |    2 +-
> >  kernel/sched_fair.c   |   61 +++++++++++++++++++++++++++++++++++++++++++++---
> >  2 files changed, 58 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 0eef87b..42fa5c6 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -849,7 +849,7 @@ enum cpu_idle_type {
> >  #define SD_POWERSAVINGS_BALANCE 0x0100  /* Balance for power savings */
> >  #define SD_SHARE_PKG_RESOURCES  0x0200  /* Domain members share cpu pkg resources */
> >  #define SD_SERIALIZE            0x0400  /* Only a single load balancing instance */
> > -
> > +#define SD_ASYM_PACKING         0x0800
>
> Would we eventually add this to SD_SIBLING_INIT in an arch specific hook,
> or is this ok to add it generically?

I'd think we'd want to keep that limited to architectures that actually
need it.

> > +static int update_sd_pick_busiest(struct sched_domain *sd,
> > +				  struct sd_lb_stats *sds,
> > +				  struct sched_group *sg,
> > +				  struct sg_lb_stats *sgs)
> > +{
> > +	if (sgs->sum_nr_running > sgs->group_capacity)
> > +		return 1;
> > +
> > +	if (sgs->group_imb)
> > +		return 1;
> > +
> > +	if ((sd->flags & SD_ASYM_PACKING) && sgs->sum_nr_running) {
> > +		if (!sds->busiest)
> > +			return 1;
> > +
> > +		if (group_first_cpu(sds->busiest) < group_first_cpu(group))
>
> "group" => "sg" here?  (I get a compile error otherwise)

Oh, quite ;-)

> > +static int check_asym_packing(struct sched_domain *sd,
> > +			      struct sd_lb_stats *sds,
> > +			      int cpu, unsigned long *imbalance)
> > +{
> > +	int i, cpu, busiest_cpu;
>
> Redefining cpu here.  Looks like the cpu parameter is not really needed?

Seems that way indeed. I went back and forth a few times on the actual
implementation of this function (which started out life as a copy of
check_power_save_busiest_group), it's amazing there were only these two
compile glitches ;-)

> > +
> > +	if (!(sd->flags & SD_ASYM_PACKING))
> > +		return 0;
> > +
> > +	if (!sds->busiest)
> > +		return 0;
> > +
> > +	i = 0;
> > +	busiest_cpu = group_first_cpu(sds->busiest);
> > +	for_each_cpu(cpu, sched_domain_span(sd)) {
> > +		i++;
> > +		if (cpu == busiest_cpu)
> > +			break;
> > +	}
> > +
> > +	if (sds->total_nr_running > i)
> > +		return 0;
> > +
> > +	*imbalance = sds->max_load;
> > +	return 1;
> > +}
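The arch opt-in being discussed could look roughly like the sketch below.
It is illustrative only: the arch_sd_asym_packing() name follows the
topology.h usage mentioned later in this thread, and the CPU_FTR_ASYM_SMT
feature test is a placeholder rather than anything from the patch.

/* include/linux/topology.h: default when the architecture doesn't care */
#ifndef arch_sd_asym_packing
#define arch_sd_asym_packing() 0
#endif
/* SD_SIBLING_INIT would then OR arch_sd_asym_packing() into its .flags */

/* arch override (e.g. powerpc): request packing only on asymmetric-SMT parts */
#define arch_sd_asym_packing arch_sd_asym_packing
static inline int arch_sd_asym_packing(void)
{
	/* CPU_FTR_ASYM_SMT: placeholder for "lower threads run faster" */
	return cpu_has_feature(CPU_FTR_ASYM_SMT) ? SD_ASYM_PACKING : 0;
}

That way the flag only shows up in the sibling domain of machines that
actually ask for it, and everything else in the generic balancer stays
untouched.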
From: Peter Zijlstra on 23 Feb 2010 11:30

On Tue, 2010-02-23 at 17:08 +1100, Michael Neuling wrote:
> I have some comments on the code inline but...
>
> So when I run this, I don't get processes pulled down to the lower
> threads.  A simple test case of running 1 CPU intensive process at
> SCHED_OTHER on a machine with 2 way SMT system (a POWER6 but enabling
> SD_ASYM_PACKING).  The single processes doesn't move to lower threads as
> I'd hope.
>
> Also, are you sure you want to put this in generic code?  It seem to be
> quite POWER7 specific functionality, so would be logically better in
> arch/powerpc.  I guess some other arch *might* need it, but seems
> unlikely.

Well, there are no arch hooks in the load-balancing (aside from the
recent cpu_power stuff, and that really is the wrong thing to poke at
for this), and I did hear some other people express interest in such a
constraint.

Also, load-balancing is complex enough as it is, so I prefer to keep
everything in the generic code where possible. Clearly things like
sched_domain creation need arch topology bits, and the arch_scale*
things require other arch information like cpu frequency.

> > @@ -2493,6 +2494,28 @@ static inline void update_sg_lb_stats(st
> >  		DIV_ROUND_CLOSEST(group->cpu_power, SCHED_LOAD_SCALE);
> >  }
> >
> > +static int update_sd_pick_busiest(struct sched_domain *sd,
> > +				  struct sd_lb_stats *sds,
> > +				  struct sched_group *sg,
> > +				  struct sg_lb_stats *sgs)
> > +{
> > +	if (sgs->sum_nr_running > sgs->group_capacity)
> > +		return 1;
> > +
> > +	if (sgs->group_imb)
> > +		return 1;
> > +
> > +	if ((sd->flags & SD_ASYM_PACKING) && sgs->sum_nr_running) {
>
> If we are asymetric packing...
>
> > +		if (!sds->busiest)
> > +			return 1;
>
> This just seems to be a null pointer check.
>
> From the tracing I've done, this is always true (always NULL) at this
> point so we return here.

Right, so we need to have a busiest group to take a task from; if there
is no busiest yet, take this group.

And in your scenario, with there being only a single task, we'd only hit
this once at most, so yes it makes sense this is always NULL.

> > +
> > +		if (group_first_cpu(sds->busiest) < group_first_cpu(sg))
> > +			return 1;
>
> I'm a bit lost as to what this is for.  Any clues you could provide
> would be appreciated. :-)
>
> Is the first cpu in this domain's busiest group before the first cpu in
> this group.  If, so pick this as the busiest?
>
> Should this be the other way around if we want to pack the busiest to
> the first cpu?  Mark it as the busiest if it's after (not before).
>
> Is group_first_cpu guaranteed to give us the first physical cpu (ie.
> thread 0 in our case) or are these virtualised at this point?
>
> I'm not seeing this hit anyway due to the null pointer check above.

So this says: if all things are equal and we already have a busiest, but
this candidate (sg) is higher than the current (busiest), take this one.
The idea is to move the highest SMT task down.

> > @@ -2562,6 +2585,38 @@ static inline void update_sd_lb_stats(st
> >  	} while (group != sd->groups);
> >  }
> >
> > +int __weak sd_asym_packing_arch(void)
> > +{
> > +	return 0;
> > +}

arch_sd_asym_packing() is what you used in topology.h

> > +static int check_asym_packing(struct sched_domain *sd,
> > +			      struct sd_lb_stats *sds,
> > +			      unsigned long *imbalance)
> > +{
> > +	int i, cpu, busiest_cpu;
> > +
> > +	if (!(sd->flags & SD_ASYM_PACKING))
> > +		return 0;
> > +
> > +	if (!sds->busiest)
> > +		return 0;
> > +
> > +	i = 0;
> > +	busiest_cpu = group_first_cpu(sds->busiest);
> > +	for_each_cpu(cpu, sched_domain_span(sd)) {
> > +		i++;
> > +		if (cpu == busiest_cpu)
> > +			break;
> > +	}
> > +
> > +	if (sds->total_nr_running > i)
> > +		return 0;
>
> This seems to be the core of the packing logic.
>
> We make sure the busiest_cpu is not past total_nr_running.  If it is we
> mark as imbalanced.  Correct?
>
> It seems if a non zero thread/group had a pile of processes running on
> it and a lower thread had much less, this wouldn't fire, but I'm
> guessing normal load balancing would kick in that case to fix the
> imbalance.
>
> Any corrections to my ramblings appreciated :-)

Right, so we're concerned with the scenario where there are fewer tasks
than SMT siblings; if there are more they should all be running and the
regular load-balancer will deal with it.

If there are fewer, the group will normally be balanced, so we fall out
and end up in check_asym_packing().

So what I tried doing with that loop is detect if there's a hole in the
packing before busiest.

Now that I think about it, what we need to check is whether this_cpu
(the removed cpu argument) is idle and less than busiest.

So something like:

static int check_asym_packing(struct sched_domain *sd,
			      struct sd_lb_stats *sds,
			      int this_cpu, unsigned long *imbalance)
{
	int busiest_cpu;

	if (!(sd->flags & SD_ASYM_PACKING))
		return 0;

	if (!sds->busiest)
		return 0;

	busiest_cpu = group_first_cpu(sds->busiest);
	if (cpu_rq(this_cpu)->nr_running || this_cpu > busiest_cpu)
		return 0;

	*imbalance = (sds->max_load * sds->busiest->cpu_power) /
			SCHED_LOAD_SCALE;
	return 1;
}

Does that make sense?

I still see two problems with this though. Regular load-balancing only
balances on the first cpu of a domain (see the *balance = 0 condition in
update_sg_lb_stats()), which means that if SMT[12] are idle we'll not
pull properly. Also, nohz balancing might mess this up further.

We could maybe play some games with the balance decision in
update_sg_lb_stats() for SD_ASYM_PACKING domains and idle == CPU_IDLE;
no ideas yet on nohz though.
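The kind of tweak meant in that last paragraph would sit around the
balance-eligibility check in update_sg_lb_stats(); roughly like the
following (a paraphrase of that check rather than the literal code, and
just as untested as the rest):

	/*
	 * Normally only the designated balance_cpu of the local group is
	 * allowed to pull at this level and everybody else bails out here.
	 * The idea above: also let an idle CPU in an SD_ASYM_PACKING domain
	 * proceed, so an idle low thread can still pack tasks downwards
	 * even when it is not balance_cpu.
	 */
	if (idle != CPU_NEWLY_IDLE && local_group &&
	    balance_cpu != this_cpu &&
	    !((sd->flags & SD_ASYM_PACKING) && idle == CPU_IDLE)) {
		*balance = 0;
		return;
	}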
From: Peter Zijlstra on 23 Feb 2010 11:40
On Tue, 2010-02-23 at 17:24 +0100, Peter Zijlstra wrote:
>
> 	busiest_cpu = group_first_cpu(sds->busiest);
> 	if (cpu_rq(this_cpu)->nr_running || this_cpu > busiest_cpu)
> 		return 0;

Hmm, we could change the bit in find_busiest_group() to:

	if (idle == CPU_IDLE && check_asym_packing())

and skip the nr_running test.
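Folded into the out_balanced path of the earlier patch, that would read
roughly as follows (untested sketch; check_asym_packing() here is the
variant from the previous message, minus its nr_running test):

out_balanced:
	/*
	 * If we are packing (SD_ASYM_PACKING) and this_cpu is an idle
	 * low-numbered sibling, treat the group holding the task(s) as
	 * busiest even though the domain otherwise looks balanced.
	 */
	if (idle == CPU_IDLE &&
	    check_asym_packing(sd, &sds, this_cpu, imbalance))
		return sds.busiest;

	/*
	 * There is no obvious imbalance. But check if we can do some
	 * balancing to save power.
	 */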