From: Peter Zijlstra on 4 Mar 2010 13:20

On Thu, 2010-03-04 at 09:54 -0800, Stephane Eranian wrote:
> On Thu, Mar 4, 2010 at 12:58 AM, Peter Zijlstra <peterz(a)infradead.org> wrote:
> > On Wed, 2010-03-03 at 22:57 +0100, Stephane Eranian wrote:
> >> I don't understand how LBR state is migrated when a per-thread event is moved
> >> from one CPU to another. It seems LBR is managed per-cpu.
> >>
> >> Can you explain this to me?
> >
> > It is not, it's basically impossible to do given that the TOS doesn't
> > count more bits than is strictly needed.
>
> I don't get that about the TOS.
>
> So you are saying that on context switch out, you drop the current
> content of the LBR. When you are scheduled back in on another CPU,
> you grab whatever is there?

What is currently implemented is that we lose history at the point a
new task schedules in an LBR-using event.

If we had a wider TOS we could try to stitch partial stacks together,
because we could detect overflow. We could also preserve the LBR,
because we would be able to know where a task got scheduled in and not
expose information of the previous task, while still allowing a
cpu-wide user to see everything.

> > Or we should stop supporting cpu and task users at the same time.
>
> Or you should consider LBR as an event which has a constraint that
> it can only run on one pseudo counter (similar to what you do with
> BTS). Scheduling would take care of the mutual exclusion. Multiplexing
> would provide the work-around.

Yes, that's an even more limited case than not sharing it between task
and cpu context, which is basically the strongest exclusion you need.
If you do that, you can store the LBR stack on unschedule and put it
back on schedule (on whichever cpu that may be).

But since we do not support LBR-config, that will be of very limited
use, since there are enough branches between the point where we
schedule the counter and the point we hit userspace to cycle the LBR
several times.
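[To make the TOS point concrete: a minimal sketch of the stitching idea,
assuming a hypothetical TOS that free-runs over more bits than the stack
index needs. Real hardware truncates the TOS to the index width, which
is exactly why overflow cannot be detected; the function name and
parameters here are invented for illustration.]

/*
 * Hypothetical: assumes a TOS that counts branches in more bits than
 * are needed to index the LBR stack.  Returns how many entries of the
 * current stack are new since the previous snapshot; anything below
 * lbr_nr means the two snapshots overlap and can be stitched.
 */
#include <linux/types.h>

static u64 lbr_new_entries(u64 tos_prev, u64 tos_now, unsigned int lbr_nr)
{
	u64 delta = tos_now - tos_prev;	/* branches recorded since snapshot */

	if (delta < lbr_nr)
		return delta;		/* partial overwrite: stitching possible */

	return lbr_nr;			/* stack cycled completely: history lost */
}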
From: Peter Zijlstra on 4 Mar 2010 15:30

On Thu, 2010-03-04 at 19:18 +0100, Peter Zijlstra wrote:
> What is currently implemented is that we lose history at the point a
> new task schedules in an LBR-using event.

This also matches CPU errata AX14, AJ52 and AAK109, which state that a
task switch may produce faulty LBR state, so clearing history after a
task switch seems the best thing to do.
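[For reference, a sketch of what clearing the history amounts to,
assuming the Nehalem-era LBR MSR layout; the stack depth lbr_nr is CPU
dependent and the helper name is invented.]

#include <asm/msr.h>

/* Nehalem-era LBR MSR base addresses, as in asm/msr-index.h. */
#define MSR_LBR_NHM_FROM	0x00000680
#define MSR_LBR_NHM_TO		0x000006c0

/* Zero every from/to pair so no stale branches survive the switch. */
static void lbr_reset(unsigned int lbr_nr)
{
	unsigned int i;

	for (i = 0; i < lbr_nr; i++) {
		wrmsrl(MSR_LBR_NHM_FROM + i, 0);
		wrmsrl(MSR_LBR_NHM_TO + i, 0);
	}
}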
From: Stephane Eranian on 4 Mar 2010 16:00
On Thu, Mar 4, 2010 at 12:23 PM, Peter Zijlstra <peterz(a)infradead.org> wrote:
> On Thu, 2010-03-04 at 19:18 +0100, Peter Zijlstra wrote:
>> What is currently implemented is that we lose history at the point a
>> new task schedules in an LBR-using event.
>
> This also matches CPU errata AX14, AJ52 and AAK109, which state that a
> task switch may produce faulty LBR state, so clearing history after a
> task switch seems the best thing to do.

You would save the LBR before the task switch and restore it after the
task switch, so I don't see how you would be impacted by this: you
would not pick up the bogus LBR content.

Given that you seem to be interested only in LBR at the user level, I
think what you have right now should work. But I don't like a design
that precludes supporting LBR config, regardless of whether the MSR is
shared or not, because that prevents some interesting measurements.
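[The save/restore scheme proposed here might look roughly like the
following sketch, reusing the Nehalem-era MSR bases from the earlier
sketch; the snapshot structure and helper names are invented, and a
real implementation would also have to deal with the errata noted
above, e.g. by discarding entries recorded across the switch itself.]

#include <linux/types.h>
#include <asm/msr.h>

#define MSR_LBR_NHM_FROM	0x00000680
#define MSR_LBR_NHM_TO		0x000006c0
#define LBR_MAX_NR		16	/* illustrative; depth is CPU dependent */

/* Per-task snapshot, filled at switch-out, written back at switch-in. */
struct lbr_snapshot {
	u64 from[LBR_MAX_NR];
	u64 to[LBR_MAX_NR];
};

static void lbr_save(struct lbr_snapshot *s, unsigned int lbr_nr)
{
	unsigned int i;

	for (i = 0; i < lbr_nr; i++) {
		rdmsrl(MSR_LBR_NHM_FROM + i, s->from[i]);
		rdmsrl(MSR_LBR_NHM_TO + i, s->to[i]);
	}
}

/* Works on whichever CPU the task lands on, as long as the LBR layout
 * matches the one the snapshot was taken from. */
static void lbr_restore(const struct lbr_snapshot *s, unsigned int lbr_nr)
{
	unsigned int i;

	for (i = 0; i < lbr_nr; i++) {
		wrmsrl(MSR_LBR_NHM_FROM + i, s->from[i]);
		wrmsrl(MSR_LBR_NHM_TO + i, s->to[i]);
	}
}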