From: Peter Zijlstra on 16 Feb 2010 12:30

On Tue, 2010-02-16 at 21:29 +0530, Vaidyanathan Srinivasan wrote:
> Agreed. Placement control should be handled by SD_PREFER_SIBLING
> and SD_POWER_SAVINGS flags.
>
> --Vaidy
>
> ---
>
> sched_smt_powersavings for threaded systems need this fix for
> consolidation to sibling threads to work. Since threads have
> fractional capacity, group_capacity will turn out to be one
> always and not accommodate another task in the sibling thread.
>
> This fix makes group_capacity a function of cpumask_weight that
> will enable the power saving load balancer to pack tasks among
> sibling threads and keep more cores idle.
>
> Signed-off-by: Vaidyanathan Srinivasan <svaidy(a)linux.vnet.ibm.com>
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 522cf0e..ec3a5c5 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -2538,9 +2538,17 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
>  	 * In case the child domain prefers tasks go to siblings
>  	 * first, lower the group capacity to one so that we'll try
>  	 * and move all the excess tasks away.

I prefer a blank line in between two paragraphs, but even better would
be to place this comment at the else if site.

> +	 * If power savings balance is set at this domain, then
> +	 * make capacity equal to number of hardware threads to
> +	 * accomodate more tasks until capacity is reached. The

my spell checker seems to prefer: accommodate

> +	 * default is fractional capacity for sibling hardware
> +	 * threads for fair use of available hardware resources.
>  	 */
>  	if (prefer_sibling)
>  		sgs.group_capacity = min(sgs.group_capacity, 1UL);
> +	else if (sd->flags & SD_POWERSAVINGS_BALANCE)
> +		sgs.group_capacity =
> +			cpumask_weight(sched_group_cpus(group));

I guess we should apply cpu_active_mask so that we properly deal with
offline siblings, except with cpumasks being the beasts they are I see
no cheap way to do that.

>  	if (local_group) {
>  		sds->this_load = sgs.avg_load;
>
> @@ -2855,7 +2863,8 @@ static int need_active_balance(struct sched_domain *sd, int sd_idle, int idle)
>  	    !test_sd_parent(sd, SD_POWERSAVINGS_BALANCE))
>  		return 0;
>
> -	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
> +	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP &&
> +	    sched_smt_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
>  		return 0;
>  	}

/me still hopes for that unification patch.. :-)
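A minimal sketch (not from the thread) of what applying cpu_active_mask
could look like: intersect the group mask with the active mask in a
scratch cpumask before taking the weight. The helper name and the
caller-supplied tmpmask are hypothetical; having to allocate or pass
around that scratch mask is the cost Peter alludes to.

/*
 * Sketch only: count just the active CPUs in a group. Needs a
 * caller-supplied scratch mask, since a struct cpumask can be too
 * large for the stack with CONFIG_CPUMASK_OFFSTACK.
 */
static unsigned long group_active_weight(struct sched_group *group,
					 struct cpumask *tmpmask)
{
	cpumask_and(tmpmask, sched_group_cpus(group), cpu_active_mask);
	return cpumask_weight(tmpmask);
}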
From: Vaidyanathan Srinivasan on 16 Feb 2010 13:30

* Peter Zijlstra <peterz(a)infradead.org> [2010-02-16 18:28:44]:

> On Tue, 2010-02-16 at 21:29 +0530, Vaidyanathan Srinivasan wrote:
> > Agreed. Placement control should be handled by SD_PREFER_SIBLING
> > and SD_POWER_SAVINGS flags.
> >
> > --Vaidy
> >
> > ---
> >
> > sched_smt_powersavings for threaded systems need this fix for
> > consolidation to sibling threads to work. Since threads have
> > fractional capacity, group_capacity will turn out to be one
> > always and not accommodate another task in the sibling thread.
> >
> > This fix makes group_capacity a function of cpumask_weight that
> > will enable the power saving load balancer to pack tasks among
> > sibling threads and keep more cores idle.
> >
> > Signed-off-by: Vaidyanathan Srinivasan <svaidy(a)linux.vnet.ibm.com>
> >
> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > index 522cf0e..ec3a5c5 100644
> > --- a/kernel/sched_fair.c
> > +++ b/kernel/sched_fair.c
> > @@ -2538,9 +2538,17 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
> >  	 * In case the child domain prefers tasks go to siblings
> >  	 * first, lower the group capacity to one so that we'll try
> >  	 * and move all the excess tasks away.
>
> I prefer a blank line in between two paragraphs, but even better would
> be to place this comment at the else if site.
>
> > +	 * If power savings balance is set at this domain, then
> > +	 * make capacity equal to number of hardware threads to
> > +	 * accomodate more tasks until capacity is reached. The
>
> my spell checker seems to prefer: accommodate

ok, will fix the comment.

> > +	 * default is fractional capacity for sibling hardware
> > +	 * threads for fair use of available hardware resources.
> >  	 */
> >  	if (prefer_sibling)
> >  		sgs.group_capacity = min(sgs.group_capacity, 1UL);
> > +	else if (sd->flags & SD_POWERSAVINGS_BALANCE)
> > +		sgs.group_capacity =
> > +			cpumask_weight(sched_group_cpus(group));
>
> I guess we should apply cpu_active_mask so that we properly deal with
> offline siblings, except with cpumasks being the beasts they are I see
> no cheap way to do that.

The sched_domain will be rebuilt with the sched_group_cpus()
representing only online siblings, right? sched_group_cpus(group) will
always be a subset of cpu_active_mask. Can you please explain your
comment?

> >  	if (local_group) {
> >  		sds->this_load = sgs.avg_load;
> >
> > @@ -2855,7 +2863,8 @@ static int need_active_balance(struct sched_domain *sd, int sd_idle, int idle)
> >  	    !test_sd_parent(sd, SD_POWERSAVINGS_BALANCE))
> >  		return 0;
> >
> > -	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
> > +	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP &&
> > +	    sched_smt_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
> >  		return 0;
> >  	}
>
> /me still hopes for that unification patch.. :-)

I will post an RFC soon. The main challenge has been with the order in
which we should place the SD_POWER_SAVINGS flag at the MC and CPU/NODE
levels depending on the system topology and sched_powersavings
settings.

--Vaidy
From: Vaidyanathan Srinivasan on 16 Feb 2010 13:50

* Vaidyanathan Srinivasan <svaidy(a)linux.vnet.ibm.com> [2010-02-16 23:55:30]:

> * Peter Zijlstra <peterz(a)infradead.org> [2010-02-16 18:28:44]:
>
> > On Tue, 2010-02-16 at 21:29 +0530, Vaidyanathan Srinivasan wrote:
> > > Agreed. Placement control should be handled by SD_PREFER_SIBLING
> > > and SD_POWER_SAVINGS flags.
> > >
> > > --Vaidy
> > >
> > > ---
> > >
> > > sched_smt_powersavings for threaded systems need this fix for
> > > consolidation to sibling threads to work. Since threads have
> > > fractional capacity, group_capacity will turn out to be one
> > > always and not accommodate another task in the sibling thread.
> > >
> > > This fix makes group_capacity a function of cpumask_weight that
> > > will enable the power saving load balancer to pack tasks among
> > > sibling threads and keep more cores idle.
> > >
> > > Signed-off-by: Vaidyanathan Srinivasan <svaidy(a)linux.vnet.ibm.com>
> > >
> > > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > > index 522cf0e..ec3a5c5 100644
> > > --- a/kernel/sched_fair.c
> > > +++ b/kernel/sched_fair.c
> > > @@ -2538,9 +2538,17 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
> > >  	 * In case the child domain prefers tasks go to siblings
> > >  	 * first, lower the group capacity to one so that we'll try
> > >  	 * and move all the excess tasks away.
> >
> > I prefer a blank line in between two paragraphs, but even better would
> > be to place this comment at the else if site.
> >
> > > +	 * If power savings balance is set at this domain, then
> > > +	 * make capacity equal to number of hardware threads to
> > > +	 * accomodate more tasks until capacity is reached. The
> >
> > my spell checker seems to prefer: accommodate
>
> ok, will fix the comment.

Thanks for the review, here is the updated patch:

---

sched: Fix group_capacity for sched_smt_powersavings

sched_smt_powersavings for threaded systems need this fix for
consolidation to sibling threads to work. Since threads have
fractional capacity, group_capacity will turn out to be one
always and not accommodate another task in the sibling thread.

This fix makes group_capacity a function of cpumask_weight that
will enable the power saving load balancer to pack tasks among
sibling threads and keep more cores idle.

Signed-off-by: Vaidyanathan Srinivasan <svaidy(a)linux.vnet.ibm.com>

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 522cf0e..4466144 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2541,6 +2541,21 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
 	 */
 	if (prefer_sibling)
 		sgs.group_capacity = min(sgs.group_capacity, 1UL);
+	/*
+	 * If power savings balance is set at this domain, then
+	 * make capacity equal to number of hardware threads to
+	 * accommodate more tasks until capacity is reached.
+	 */
+	else if (sd->flags & SD_POWERSAVINGS_BALANCE)
+		sgs.group_capacity =
+			cpumask_weight(sched_group_cpus(group));
+
+	/*
+	 * The default group_capacity is rounded from sum of
+	 * fractional cpu_powers of sibling hardware threads
+	 * in order to enable fair use of available hardware
+	 * resources.
+	 */
 
 	if (local_group) {
 		sds->this_load = sgs.avg_load;
@@ -2855,7 +2870,8 @@ static int need_active_balance(struct sched_domain *sd, int sd_idle, int idle)
 	    !test_sd_parent(sd, SD_POWERSAVINGS_BALANCE))
 		return 0;
 
-	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
+	if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP &&
+	    sched_smt_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
 		return 0;
 	}
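To make the fractional-capacity rounding concrete, here is an
illustrative standalone calculation (not part of the thread). It
assumes the defaults of that kernel generation, SCHED_LOAD_SCALE of
1024 and an smt_gain of 1178 split across two siblings; the exact
numbers are assumptions for illustration only.

#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL
#define SMT_GAIN		1178UL	/* assumed default sd->smt_gain */
#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	unsigned long threads = 2;				/* siblings per core */
	unsigned long thread_power = SMT_GAIN / threads;	/* ~589 each */
	unsigned long group_power = thread_power * threads;	/* ~1178 per core */

	/* Default: 1178/1024 rounds to 1, so the core takes one task. */
	printf("rounded group_capacity: %lu\n",
	       DIV_ROUND_CLOSEST(group_power, SCHED_LOAD_SCALE));

	/* With the patch under SD_POWERSAVINGS_BALANCE: one per thread. */
	printf("cpumask_weight capacity: %lu\n", threads);
	return 0;
}

With a capacity of 1, a second runnable task gets spread to another
core even when consolidation is desired; with a capacity of 2, the
power saving balancer can pack both siblings and leave other cores
idle.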
From: Peter Zijlstra on 16 Feb 2010 13:50

On Tue, 2010-02-16 at 23:55 +0530, Vaidyanathan Srinivasan wrote:
> The sched_domain will be rebuilt with the sched_group_cpus()
> representing only online siblings, right? sched_group_cpus(group)
> will always be a subset of cpu_active_mask. Can you please explain
> your comment?

__build_*_sched_domain() seems to only rebuild the sd->span, not the
sched_group's mask; cpu_to_*_group() only picks an existing group based
on the cpumask passed in, it doesn't change sg->cpumask afaict.

That is also the reason we drag load_balance_tmpmask all through
load_balance() afaict.
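For completeness, a sketch (not from the thread) of the per-cpu
scratch-mask pattern Peter refers to with load_balance_tmpmask, reused
here for the hypothetical active-sibling count from the earlier sketch;
lb_tmpmask and the helper name are invented, and the per-cpu allocation
at boot is omitted.

/* Sketch: a per-cpu scratch mask, mirroring load_balance_tmpmask. */
static DEFINE_PER_CPU(cpumask_var_t, lb_tmpmask);

static unsigned long group_active_weight(struct sched_group *group)
{
	/* Caller runs with preemption disabled during load balancing. */
	struct cpumask *tmp = __get_cpu_var(lb_tmpmask);

	cpumask_and(tmp, sched_group_cpus(group), cpu_active_mask);
	return cpumask_weight(tmp);
}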