Prev: Bug in current -git tree causing dbus and gnome to chew up cpu time
Next: Introduce O_CLOEXEC (take >2)
From: Gene Heskett on 2 May 2007 04:50

On Wednesday 02 May 2007, Mike Galbraith wrote:
>On Wed, 2007-05-02 at 04:03 -0400, Gene Heskett wrote:
>> I just checked my logs, and it appears my workload didn't trigger this one
>> Mike.
>
>It's just a build time compiler warning.

Duh. I have a couple of pages of "may be used uninitialized" warnings, including one in serial.c for the raw channel. And I have a bit of trouble there too. Related? Dunno.

>> Ingo asked for a 0-100 rating, where 0 is mainline as I recall it, and 100
>> is the best of the breed. I'll give this one a 100 till something better
>> shows up.
>
>Ditto. (so far... ya never know ;)
>
> -Mike

Yup, this is sweet so far. :-)

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
The vulcan-death-grip ping has been applied.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Gene Heskett on 2 May 2007 05:00

On Wednesday 02 May 2007, Ingo Molnar wrote:
>* Gene Heskett <gene.heskett(a)gmail.com> wrote:
>> > I noticed a (harmless) bounds warning triggered by the reduction in
>> > size of array->bitmap. Patchlet below.
>>
>> I just checked my logs, and it appears my workload didn't trigger this
>> one Mike. [...]
>
>yeah: this is a build-time warning and it needs a newer/smarter gcc to
>notice that provably redundant piece of code. It's a harmless thing -
>but nevertheless Mike's fix is a nice little micro-optimization as well:
>it always bothered me a bit that at 140 priority levels we were _just_
>past the 128 bits boundary by 12 bits. Now on 64-bit boxes it's just two
>64-bit words to cover all 100 priority levels of RT tasks.
>
>> [...] And so far, v8 is working great here. And that great is in my
>> best "Tony the Tiger" voice, stolen shamelessly from the breakfast
>> cereal tv commercial of 30+ years ago. :)
>
>heh :-)
>
>> Ingo asked for a 0-100 rating, where 0 is mainline as I recall it, and
>> 100 is the best of the breed. I'll give this one a 100 till something
>> better shows up.
>
>nice - and you arent even using any OpenGL games ;)
>
>The 0-100 rating is really useful to me so that i can see the impact of
>regressions (if any) and it's also one single number representing the
>subjective impression - that way it's easier to keep tab of things.
>
>btw., do you still renice kmail slightly, or does it now work out of box
>with default nice 0?
>
>	Ingo

For this last couple of boots, it's "right out of the box" and isn't getting under my skin. A make -j4 didn't bother it either.

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
The goys have proven the following theorem...
-- Physicist John von Neumann, at the start of a classroom lecture.
From: Balbir Singh on 2 May 2007 05:10

Ingo Molnar wrote:
> Changes since -v7:
>
>  - powerpc debug output and build warning fixes (Balbir Singh)
>
>  - documentation fixes (Zach Carter)
>
>  - interactivity: precise load calculation and load smoothing
>
> As usual, any sort of feedback, bugreport, fix and suggestion is more
> than welcome,
>
> 	Ingo

Hi, Ingo,

I would like to report what I think is a regression with -v8.

With -v7 I would run the n/n+1 test: on a system with n cpus, I run
n+1 tasks and see how their load is distributed. I usually find that
the last two tasks get stuck on one CPU and get half the cpu time of
their other peers. I think this issue has been around for a long time,
even before CFS. But while I was investigating that, I found that with
-v8, all the n+1 tasks are stuck on the same cpu.

Output of /proc/sched_debug:

# cat /proc/sched_debug
Sched Debug Version: v0.02
now at 1507287574145 nsecs

cpu: 0
  .nr_running            : 0
  .raw_weighted_load     : 0
  .nr_switches           : 111130
  .nr_load_updates       : 376821
  .nr_uninterruptible    : 18446744073709551550
  .next_balance          : 4295269119
  .curr->pid             : 0
  .clock                 : 7431167968202137
  .prev_clock_raw        : 7431167968202136
  .clock_warps           : 0
  .clock_unstable_events : 0
  .clock_max_delta       : 0
  .fair_clock            : 26969582038
  .prev_fair_clock       : 26969539422
  .exec_clock            : 9881536864
  .prev_exec_clock       : 9881494248
  .wait_runtime          : 116431647
  .cpu_load[0]           : 0
  .cpu_load[1]           : 0
  .cpu_load[2]           : 0
  .cpu_load[3]           : 0
  .cpu_load[4]           : 0

runnable tasks:
  task  PID  tree-key  delta  waiting  switches  prio  wstart-fair  sum-exec  sum-wait
  ------------------------------------------------------------------------------------

cpu: 1
  .nr_running            : 0
  .raw_weighted_load     : 0
  .nr_switches           : 56374
  .nr_load_updates       : 376767
  .nr_uninterruptible    : 156
  .next_balance          : 4295269118
  .curr->pid             : 0
  .clock                 : 7431167857161633
  .prev_clock_raw        : 7431167857161632
  .clock_warps           : 0
  .clock_unstable_events : 0
  .clock_max_delta       : 0
  .fair_clock            : 34038615236
  .prev_fair_clock       : 34038615236
  .exec_clock            : 18764126904
  .prev_exec_clock       : 18764126904
  .wait_runtime          : 132146856
  .cpu_load[0]           : 0
  .cpu_load[1]           : 0
  .cpu_load[2]           : 0
  .cpu_load[3]           : 0
  .cpu_load[4]           : 0

runnable tasks:
  task  PID  tree-key  delta  waiting  switches  prio  wstart-fair  sum-exec  sum-wait
  ------------------------------------------------------------------------------------

cpu: 2
  .nr_running            : 5
  .raw_weighted_load     : 5120
  .nr_switches           : 140351
  .nr_load_updates       : 376767
  .nr_uninterruptible    : 18446744073709551559
  .next_balance          : 4295269128
  .curr->pid             : 6462
  .clock                 : 7431167968695481
  .prev_clock_raw        : 7431167968695480
  .clock_warps           : 0
  .clock_unstable_events : 0
  .clock_max_delta       : 0
  .fair_clock            : 178895812434
  .prev_fair_clock       : 178895727748
  .exec_clock            : 858569069824
  .prev_exec_clock       : 858568528616
  .wait_runtime          : 2643237421
  .cpu_load[0]           : 0
  .cpu_load[1]           : 0
  .cpu_load[2]           : 0
  .cpu_load[3]           : 0
  .cpu_load[4]           : 0

runnable tasks:
  task  PID   tree-key      delta    waiting   switches prio  wstart-fair    sum-exec      sum-wait
  ------------------------------------------------------------------------------------------------
R bash  6462  178897659138  1846704  -1846958  19646    120   -178895812434  169799117688  135410790136
  bash  6461  178897934427  2121993  -7673376  19538    120   -5551118       169989747968  135499300276
  bash  6460  178898353788  2541354  -6492732  19608    120   -3951111       170136703840  135648219117
  bash  6459  178899921997  4109563  -6460948  19747    120   -2351093       170559324432  135812802778
  bash  6458  178901052918  5240484  -5991881  19756    120   -751111        171257975848  135805570391

cpu: 3
  .nr_running            : 1
  .raw_weighted_load     : 1024
  .nr_switches           : 43253
  .nr_load_updates       : 376767
  .nr_uninterruptible    : 18446744073709551583
  .next_balance          : 4295269180
  .curr->pid             : 7524
  .clock                 : 7431167970150081
  .prev_clock_raw        : 7431167970150080
  .clock_warps           : 0
  .clock_unstable_events : 0
  .clock_max_delta       : 0
  .fair_clock            : 24318712701
  .prev_fair_clock       : 24318712701
  .exec_clock            : 20098322728
  .prev_exec_clock       : 20098322728
  .wait_runtime          : 178370619
  .cpu_load[0]           : 0
  .cpu_load[1]           : 0
  .cpu_load[2]           : 0
  .cpu_load[3]           : 0
  .cpu_load[4]           : 0

runnable tasks:
  task  PID   tree-key     delta  waiting  switches prio  wstart-fair   sum-exec  sum-wait
  ---------------------------------------------------------------------------------------
R cat   7524  24318779730  67029  -67029   3        120   -24318712701  1661560   2277

Output of top:

 6459 root  20  0  4912  792  252  R  20  0.0  8:29.33  bash
 6458 root  20  0  4912  792  252  R  20  0.0  8:29.90  bash
 6460 root  20  0  4912  792  252  R  20  0.0  8:28.94  bash
 6461 root  20  0  4912  792  252  R  20  0.0  8:28.88  bash
 6462 root  20  0  4912  792  252  R  20  0.0  8:28.54  bash

-- 
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL
From: Ingo Molnar on 2 May 2007 06:10

* Balbir Singh <balbir(a)linux.vnet.ibm.com> wrote:

> With -v7 I would run the n/n+1 test. Basically on a system with n
> cpus, I would run n+1 tasks and see how their load is distributed. I
> usually find that the last two tasks would get stuck on one CPU on the
> system and would get half the cpu time as their other peers. I think
> this issue has been around for long even before CFS. But while I was
> investigating that, I found that with -v8, all the n+1 tasks are stuck
> on the same cpu.

i believe this problem is specific to powerpc - load is distributed
fine on i686/x86_64 and your sched_debug shows a cpu_load[0] == 0 on
CPU#2 which is 'impossible'. (I sent a few suggestions off-Cc about
how to debug this.)

	Ingo
From: Bill Huey on 2 May 2007 06:10
On Tue, May 01, 2007 at 10:57:14PM -0400, Ting Yang wrote:
> Based on my understanding, adopting something like EEVDF in CFS should
> not be very difficult given their similarities, although I do not have
> any idea how this impacts load balancing for SMP. Is this worth a try?
>
> Sorry for such a long email :-)

An excellent long email. Thanks. Have you looked at Con's SD? What did
you think about it analytically, and do you think these ideas could be
incorporated into that scheduler?

bill