From: Peter Zijlstra on 8 Jul 2010 14:20

On Thu, 2010-07-08 at 10:45 -0700, Suresh Siddha wrote:
> > @@ -2433,7 +2433,8 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
> >                 return;
> >         }
> >
> > -       update_group_power(sd, this_cpu);
> > +       if (local_group)
> > +               update_group_power(sd, this_cpu);
>
> if IDLE == CPU_NEWLY_IDLE, then all the cpu's in the local group will do
> this. Also update_group_power() can be done only on the local cpu,
> i.e., when this_cpu == smp_processor_id(), right?

It might make sense to only update_group_power() on !CPU_NEWLY_IDLE and
rely on the tick-driven cpu_power updates.

No sense in updating them in finer slices, I guess.

So how about something like:

---
 kernel/sched_fair.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 9910e1b..2f05679 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2427,14 +2427,14 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
         * domains. In the newly idle case, we will allow all the cpu's
         * to do the newly idle load balance.
         */
-       if (idle != CPU_NEWLY_IDLE && local_group &&
-           balance_cpu != this_cpu) {
-               *balance = 0;
-               return;
+       if (idle != CPU_NEWLY_IDLE && local_group) {
+               if (balance_cpu != this_cpu) {
+                       *balance = 0;
+                       return;
+               }
+               update_group_power(sd, this_cpu);
        }

-       update_group_power(sd, this_cpu);
-
        /* Adjust by relative CPU power of the group */
        sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
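[Editorial note: the net effect of the hunk above is that only the group's designated balance CPU, and only outside the newly-idle path, refreshes cpu_power. The following is a small standalone C model of just that decision; struct decision, sg_lb_check() and the local enum are toy stand-ins for the real scheduler state, not kernel code.]

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the scheduler's idle types and the outcome of the check. */
enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

struct decision {
        bool bail_out;      /* corresponds to "*balance = 0; return;" */
        bool update_power;  /* corresponds to calling update_group_power() */
};

static struct decision sg_lb_check(enum cpu_idle_type idle, bool local_group,
                                   int balance_cpu, int this_cpu)
{
        struct decision d = { false, false };

        if (idle != CPU_NEWLY_IDLE && local_group) {
                if (balance_cpu != this_cpu) {
                        d.bail_out = true;      /* defer to the designated CPU */
                        return d;
                }
                d.update_power = true;          /* only the balancing CPU refreshes power */
        }
        /* CPU_NEWLY_IDLE: never refreshes power here; relies on the tick updates. */
        return d;
}

int main(void)
{
        struct decision d;

        d = sg_lb_check(CPU_NOT_IDLE, true, 0, 1);
        printf("periodic, non-designated cpu: bail=%d update_power=%d\n",
               d.bail_out, d.update_power);

        d = sg_lb_check(CPU_NOT_IDLE, true, 1, 1);
        printf("periodic, designated cpu:     bail=%d update_power=%d\n",
               d.bail_out, d.update_power);

        d = sg_lb_check(CPU_NEWLY_IDLE, true, 0, 1);
        printf("newly idle, any cpu:          bail=%d update_power=%d\n",
               d.bail_out, d.update_power);
        return 0;
}

[Compiled on its own, it shows that only the designated CPU in a periodic balance ends up with update_power=1; every newly-idle caller skips the refresh.]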
From: Suresh Siddha on 8 Jul 2010 18:00

On Thu, 2010-07-08 at 11:16 -0700, Peter Zijlstra wrote:
> On Thu, 2010-07-08 at 10:45 -0700, Suresh Siddha wrote:
> > > @@ -2433,7 +2433,8 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
> > >                 return;
> > >         }
> > >
> > > -       update_group_power(sd, this_cpu);
> > > +       if (local_group)
> > > +               update_group_power(sd, this_cpu);
> >
> > if IDLE == CPU_NEWLY_IDLE, then all the cpu's in the local group will do
> > this. Also update_group_power() can be done only on the local cpu,
> > i.e., when this_cpu == smp_processor_id(), right?
>
> It might make sense to only update_group_power() on !CPU_NEWLY_IDLE and
> rely on the tick-driven cpu_power updates.
>
> No sense in updating them in finer slices, I guess.
>
> So how about something like:
>
> ---
>  kernel/sched_fair.c |   12 ++++++------
>  1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 9910e1b..2f05679 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -2427,14 +2427,14 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>          * domains. In the newly idle case, we will allow all the cpu's
>          * to do the newly idle load balance.
>          */
> -       if (idle != CPU_NEWLY_IDLE && local_group &&
> -           balance_cpu != this_cpu) {
> -               *balance = 0;
> -               return;
> +       if (idle != CPU_NEWLY_IDLE && local_group) {
> +               if (balance_cpu != this_cpu) {
> +                       *balance = 0;
> +                       return;
> +               }
> +               update_group_power(sd, this_cpu);
>         }
>
> -       update_group_power(sd, this_cpu);
> -
>         /* Adjust by relative CPU power of the group */
>         sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;

I am ok with this patch (barring the currently broken aperf/mperf part).

Acked-by: Suresh Siddha <suresh.b.siddha(a)intel.com>

Also, looking at all this, don't we need to do something like this in
the nohz load balance?

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 9910e1b..ae750e9 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3598,6 +3598,7 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
        }

        raw_spin_lock_irq(&this_rq->lock);
+       update_rq_clock(this_rq);
        update_cpu_load(this_rq);
        raw_spin_unlock_irq(&this_rq->lock);
From: Peter Zijlstra on 9 Jul 2010 09:20

On Thu, 2010-07-08 at 14:53 -0700, Suresh Siddha wrote:
> Also, looking at all this, don't we need to do something like this in
> the nohz load balance?

Yes, I think you're right, thanks!

> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 9910e1b..ae750e9 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -3598,6 +3598,7 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
>         }
>
>         raw_spin_lock_irq(&this_rq->lock);
> +       update_rq_clock(this_rq);
>         update_cpu_load(this_rq);
>         raw_spin_unlock_irq(&this_rq->lock);
From: Venkatesh Pallipadi on 12 Jul 2010 13:20
On Thu, Jul 8, 2010 at 11:16 AM, Peter Zijlstra <peterz(a)infradead.org> wrote:
> On Thu, 2010-07-08 at 10:45 -0700, Suresh Siddha wrote:
>> > @@ -2433,7 +2433,8 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>> >                 return;
>> >         }
>> >
>> > -       update_group_power(sd, this_cpu);
>> > +       if (local_group)
>> > +               update_group_power(sd, this_cpu);
>>
>> if IDLE == CPU_NEWLY_IDLE, then all the cpu's in the local group will do
>> this. Also update_group_power() can be done only on the local cpu,
>> i.e., when this_cpu == smp_processor_id(), right?
>
> It might make sense to only update_group_power() on !CPU_NEWLY_IDLE and
> rely on the tick-driven cpu_power updates.
>
> No sense in updating them in finer slices, I guess.
>
> So how about something like:

Yes. This looks good.

Acked-by: Venkatesh Pallipadi <venki(a)google.com>

> ---
>  kernel/sched_fair.c |   12 ++++++------
>  1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 9910e1b..2f05679 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -2427,14 +2427,14 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>          * domains. In the newly idle case, we will allow all the cpu's
>          * to do the newly idle load balance.
>          */
> -       if (idle != CPU_NEWLY_IDLE && local_group &&
> -           balance_cpu != this_cpu) {
> -               *balance = 0;
> -               return;
> +       if (idle != CPU_NEWLY_IDLE && local_group) {
> +               if (balance_cpu != this_cpu) {
> +                       *balance = 0;
> +                       return;
> +               }
> +               update_group_power(sd, this_cpu);
>         }
>
> -       update_group_power(sd, this_cpu);
> -
>         /* Adjust by relative CPU power of the group */
>         sgs->avg_load = (sgs->group_load * SCHED_LOAD_SCALE) / group->cpu_power;
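[Editorial note: why the thread cares about when update_group_power() runs is visible in the last quoted line of the hunk: a group's raw load is divided by its cpu_power, so a stale cpu_power skews the comparison between groups. The following is a tiny standalone illustration with made-up numbers; TOY_LOAD_SCALE and toy_avg_load() are stand-ins, not kernel symbols.]

#include <stdio.h>

#define TOY_LOAD_SCALE 1024UL   /* stand-in for SCHED_LOAD_SCALE */

/* Same normalisation as the quoted line:
 * avg_load = group_load * SCHED_LOAD_SCALE / cpu_power */
static unsigned long toy_avg_load(unsigned long group_load, unsigned long cpu_power)
{
        return group_load * TOY_LOAD_SCALE / cpu_power;
}

int main(void)
{
        unsigned long group_load = 2048;

        /* identical raw load, evaluated against a fresh vs. a stale cpu_power */
        printf("cpu_power=2048 (fresh): avg_load=%lu\n", toy_avg_load(group_load, 2048));
        printf("cpu_power=1024 (stale): avg_load=%lu\n", toy_avg_load(group_load, 1024));
        return 0;
}

[With a stale cpu_power the same group appears twice as loaded, which is why refreshing it on the CPU that actually balances, rather than on every newly-idle CPU, is the point of the accepted patch.]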