From: Terje Mathisen <terje.mathisen at tmsw.no> on 21 Jan 2010 13:51

MitchAlsup wrote:
> After reading this thread several times, it seems that the timer one
> is looking for has several properties:
>
> A: can be read at least a billion times per second uniformly over a
> whole system of thousands of nodes
> B: always returns a unique number--this number related to time in
> some way
> C: this number is ultimately used to determine order (i.e.
> synchronization winners and losers)
> D: uses all the fast access pathways in the system (i.e. cache
> hierarchy)
> E: but never uses any of the slow parts of the system (i.e. cache
> coherence mechanism, OS calls)
> F: leverages off of fast access techniques (user-mode instructions,
> TLB)
> G: which is safe, secure, fast, and <blah blah>
>
> This reminds me of what the physicists were probably talking about
> just after the turn of the previous century, between the discovery of
> the photoelectric effect and the development of quantum mechanics. :-)

See Andy's posts and mine about the difference between timestamps and
timers; wanting a single mechanism that is perfect for all uses and
which carries nearly zero hw/sw cost is obviously somewhat naive.

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
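Requirements B and E in Mitch's list can in principle be reconciled without touching the coherence fabric, if every node salts a cheap node-local counter with its own ID. A minimal sketch of the idea — the field widths, the `unique_stamp` helper, and the notion of a per-node counter are all assumptions for illustration, not anything proposed in the thread:

```c
/* Sketch: widen a fast node-local counter with a node ID in the
 * low bits.  Two nodes may read identical counter values at the
 * same instant, but the node-ID field keeps the combined stamps
 * distinct with no cross-node communication at all. */
#include <stdint.h>

#define NODE_BITS 12   /* room for 4096 nodes; width chosen arbitrarily */

uint64_t unique_stamp(uint64_t local_counter, uint64_t node_id)
{
    return (local_counter << NODE_BITS) |
           (node_id & ((1ull << NODE_BITS) - 1));
}
```

Of course this only gives uniqueness, not the globally ordered "time" of requirement C — which is exactly the distinction between timestamps and timers the posts below argue about.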
From: Robert Myers on 21 Jan 2010 15:05

On Jan 21, 12:51 pm, MitchAlsup <MitchAl...(a)aol.com> wrote:
> This reminds me of what the physicists were probably talking about
> just after the turn of the previous century between the discovery of
> the photoelectric effect and the development of quantum mechanics.

Not likely. One of the most easily understood results of special
relativity is that a universal clock is not possible even in theory.

Robert.
From: nmm1 on 21 Jan 2010 15:07

In article <4B57BC27.50906(a)patten-glew.net>,
Andy "Krazy" Glew <ag-news(a)patten-glew.net> wrote:
>Terje Mathisen wrote:
>
>>>> It's not so bad as you think. As long as your uncertainty of time is
>>>> smaller than the communication delay between the nodes, you are fine,
>>>> i.e. your values are unique - you only have to make sure that the
>>>> adjustments propagate through the shortest path.
>>>
>>> Er, no. How do you stop two threads delivering the same timestamp
>>> if they execute a 'call' at the same time without having a single
>>> time server? Ensuring global uniqueness is the problem.
>>
>> No!
>>
>> Global uniqueness is a separate, but also quite important, problem.
>>
>> It is NOT fair to saddle every single timestamp call with the overhead
>> required for a globally unique value!
>
>Amen, brother!
>
>Too many timestamp and time related functions are rolled into one.

Actually, I agree.

>There are TIMESTAMPS, e.g. for databases. Wanting global uniqueness
>and monotonicity. They do not even necessarily need time, although
>it is pleasant to be able to compute timestamp deltas and calculate
>time elapsed.

Providing these isn't actually all that much simpler, once you demand
sequential consistency and compatibility with the ordering implied by
other communication channels (e.g. memory visibility). And a lot of
people and algorithms do make those demands.

I don't claim to be entirely consistent in my views[*], because the
best architectural choice is always very dependent on exactly how you
prioritise your requirements. And I certainly vary in that, depending
on how I approach problems at any time.

There certainly are reasonable requirements for global uniqueness and
sequentially consistent monotonicity, but the question is whether the
sacrifices you have to make to provide them are too great. And the
same question applies to requiring the same properties for wall-clock
time.
The same thing applies to demanding predictability for floating-point
calculations, but there I am fairly consistent in thinking that it's
a Bad Idea. But there is a lot more experience there, and almost all
of it leads to that conclusion. Parallel time and timestamps is a
less mature field.

[*] Do I contradict myself?
Very well, then I contradict myself,
I am large, I contain multitudes.
- W. Whitman

Regards,
Nick Maclaren.
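The "ordering consistent with other communication channels" that Nick demands is exactly what Lamport's logical clocks provide in software: every message carries a stamp, and a receiver advances its own clock past it before ticking. A minimal sketch — this is the textbook algorithm, not anything specified in the thread:

```c
/* Lamport logical clock: monotone per node, and consistent with
 * message order (a receive is always stamped later than the send
 * it observed).  It gives ordering, not wall-clock time, and still
 * needs a node-ID tiebreak for global uniqueness. */
#include <stdint.h>

typedef struct { uint64_t t; } lclock;

/* local event, or message send */
uint64_t lclock_tick(lclock *c)
{
    return ++c->t;
}

/* message receive: jump past the sender's stamp, then tick */
uint64_t lclock_recv(lclock *c, uint64_t msg_stamp)
{
    if (msg_stamp > c->t)
        c->t = msg_stamp;
    return ++c->t;
}
```

The cost Nick worries about shows up in the `msg_stamp` exchange: ordering that is consistent with memory visibility requires piggybacking clock information on every communication path you want it to respect.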
From: Robert A Duff on 21 Jan 2010 19:14

Mayan Moudgill <mayan(a)bestweb.net> writes:
> (Going back to the privileged-code-page approach) Alternatively, we
> could guarantee that every instruction on a page was the start of a
> safe code sequence.

What if somebody tries to jump into the middle of an instruction?

>...This could be done trivially by having each of the
> instructions be branches to the actual function. But then the
> function body itself would still need to be guarded somehow. A
> possible solution would be to have instructions that are available
> only in privileged mode, but having a page-protect mode such that, if
> an instruction from that page is executed, the privilege level of the
> executing process is escalated. So the page will have an
> EXECUTE-AND-CHANGE-PRIVILEGE bit set; if any instruction from that
> page is executed, the privilege of the process is increased, and that
> page contains branches to the actual functions.
>
> Of course, the hardware could simply treat such a page as a vector of
> pointers, and the branch-and-change-privilege picks an instruction
> from the page to branch to.
>
> (Going back to the privileged-code-page approach) Another alternative
> is to allow entry to pages with EXECUTE-PRIVILEGED-CODE only at the
> beginning of the page. This has the drawback of requiring an entire
> page to be devoted to what might be a small function, which is not a
> big deal on a desktop processor; there may be a performance penalty
> associated with the additional TLB entries.

Why does it have to be a whole page? Maybe you could say that if you
call/jump to such a page, the address has to be a multiple of (say)
64 bytes. Otherwise, the hardware traps. So you can put a bunch of
64-byte privileged procedures on each such page.

This answers my question about "middle of an instruction".

- Bob
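Bob's 64-byte-slot scheme amounts to a simple alignment check on any control transfer into a privilege-granting page. A toy model of the check the hardware would perform — the function name, page bounds, and trap policy are invented here for illustration:

```c
/* Model of the entry check: a jump into the gated page is legal
 * only at a 64-byte slot boundary; anywhere else would trap, so
 * nobody can land in the middle of a privileged procedure (or in
 * the middle of an instruction). */
#include <stdbool.h>
#include <stdint.h>

#define SLOT_SIZE 64u   /* one privileged entry point per 64 bytes */

bool entry_allowed(uint64_t target, uint64_t page_base, uint64_t page_len)
{
    if (target < page_base || target >= page_base + page_len)
        return true;    /* not a gated page: no constraint here */
    return (target - page_base) % SLOT_SIZE == 0;
}
```

With 4 KiB pages this packs 64 entry points per page, avoiding the one-function-per-page TLB pressure Mayan mentions.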
From: Gavin Scott on 21 Jan 2010 20:10
Robert A Duff <bobduff(a)shell01.theworld.com> wrote:
> Why does it have to be a whole page? Maybe you could say that if
> you call/jump to such a page, the address has to be a multiple
> of (say) 64 bytes. Otherwise, the hardware traps. So you can
> put a bunch of 64-byte privileged procedures on each such page.

PA-RISC has an even more flexible form of this, in that you can
associate a privilege-promotion level with an executable page. The
promotion does not happen when you branch to the page; rather, when
you execute a PC-relative branch instruction on that page that
includes the ,GATE option, the target will execute at the set
privilege level. So you can fill a page up with entry points that
promote themselves.

Promotion can only happen at the points of the GATE instructions, so
branching into the middle of an instruction sequence won't let you do
anything special.

A page marked in this way is called a Gateway page, and this is the
primary mechanism for privilege promotion in the architecture.

G.
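The rule Gavin describes — privilege changes only when a ,GATE branch is executed from a gateway page, never on mere entry to the page — can be modelled in a few lines. The page-attribute struct and privilege encoding below are invented for illustration and are not PA-RISC's actual formats (though PA-RISC does number level 0 as most privileged):

```c
/* Toy model of gateway-page semantics: only a GATE branch executed
 * on a gateway page changes privilege; branching onto the page, or
 * running ordinary instructions there, leaves privilege alone. */
#include <stdbool.h>

typedef struct {
    bool gateway;        /* page carries the gateway attribute */
    unsigned promote_to; /* privilege level the page can confer */
} page_attr;

unsigned priv_after(page_attr page, bool is_gate_branch, unsigned cur)
{
    if (page.gateway && is_gate_branch)
        return page.promote_to;
    return cur;
}
```

Note how this subsumes Bob's alignment trick: instead of constraining where you may *enter* the page, it constrains where promotion may *occur*, which is why jumping into the middle of a sequence gains an attacker nothing.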