From: Anne & Lynn Wheeler on 26 Dec 2009 18:37

Robert Myers <rbmyersusa(a)gmail.com> writes:
> Not having to deal with RJE emulation and HASP was more important to
> bringing computing in-house than was the cost of computation. Even if
> I had to do a computation on a Cray, I wanted the data on my own
> hardware as quickly as possible, to end the back and forth.

re:
http://www.garlic.com/~lynn/2009s.html#34 Larrabee delayed: anyone know what's happening?

businesses didn't mind so much that business critical data was traveling out to somebody's desktop for use in a spreadsheet (as long as it was all on premise and non-authorized people couldn't eavesdrop) .... it was when it disappeared from the datacenter to reside on somebody's desktop ... which then experienced some desktop glitch .... and it was found it wasn't backed up ... and the business found itself w/o some major critical piece of business operational data (putting the business at risk).

in the mid-90s there was some study that half of the businesses that lost a disk with unbacked-up business critical data filed for bankruptcy within 30 days.

business critical datacenters tended to have little things like (at least) daily backups ... along with disaster recovery plans .... contingencies to keep the business (which had become critically dependent on dataprocessing) running.

when we were doing ha/cmp ... some past posts
http://www.garlic.com/~lynn/subtopic.html#hacmp
I coined the terms "disaster survivability" and "geographic survivability" (to differentiate from simple disaster/recovery) .... some past posts
http://www.garlic.com/~lynn/subtopic.html#available

also in that period ... i was asked to write a section for the corporate continuous availability strategy document. unfortunately, both Rochester and POK objected to the section (they couldn't meet the implementation description at the time) ... and it got pulled.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Robert Myers on 26 Dec 2009 19:05

On Dec 26, 6:37 pm, Anne & Lynn Wheeler <l...(a)garlic.com> wrote:
> in the mid-90s there was some study that half of the business that lost
> disk with unbacked up business critical data, filed for bankruptcy
> within 30 days.

Of course, anyone that exposed was probably very poor and doing lots of other things wrong, too.

Robert.
From: Bernd Paysan on 26 Dec 2009 19:36

Mayan Moudgill wrote:
> So: summarizing - I still don't think active messages is the right
> name. I haven't encountered any real-life instances where people
> actually send code to be executed (or even interpreted) at a low-level
> inside the device driver.

I have. Several different ones, I can tell you. One guy (Heinz Schnitter) sends source code around - this is a distributed programming system. Another one (Chuck Moore) sends instructions around - this is a small chip full of tiny CPUs. Neither really generalized, though I know that Chuck Moore knows what Heinz did.

> Even the active message people did not -
> they sent code pointers, rather than code.

Honestly speaking, that's likely caused by the use of C, or another Algol-like programming language. You can't easily generate code, but you can easily generate pointers to code. If they had used Fortran, they would probably have sent numbers around, which would then have been dispatched in a computed goto statement. If they had used Forth, as in the two examples above, they wouldn't have been limited by the restrictions of Algol-like languages.

IMHO "active message" is the most generic of all the terms I've heard. It means little more than "a message that knows what it should do" rather than "a message where the receiver knows what to do with it". The details of what this means, i.e. what kind of machine model you have in mind, don't really matter - it is some sort of executable code. What *kind* of executable code this should be is a comp.arch question ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
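The distinction Paysan draws can be sketched in a few lines of Python (a hypothetical illustration, not from the thread; the function names `receive_passive` and `receive_active` are invented). A "passive" message is just a number that the receiver dispatches on (Paysan's Fortran computed-goto style), while an "active" message carries the code itself as source text:

```python
# Passive style: the message is an opcode plus arguments; the RECEIVER
# owns the dispatch table and decides what each opcode means
# (morally equivalent to a Fortran computed GOTO, or a C code pointer).
DISPATCH = {
    1: lambda x, y: x + y,
    2: lambda x, y: x * y,
}

def receive_passive(msg):
    opcode, args = msg
    return DISPATCH[opcode](*args)

# Active style: the MESSAGE carries its own code as source text and the
# receiver merely interprets it, as in Schnitter's distributed system.
# (A real system would sandbox this; exec() of untrusted text is unsafe.)
def receive_active(msg):
    source, args = msg
    env = {}
    exec(source, env)            # interpret the shipped code
    return env["run"](*args)

print(receive_passive((2, (6, 7))))                                  # 42
print(receive_active(("def run(x, y):\n    return x + y", (6, 7))))  # 13
```

The active variant needs no pre-agreed dispatch table; the receiver only needs an interpreter, which is why languages that can cheaply generate and consume code (Forth, source-shipping systems) make it natural while C makes code pointers the path of least resistance.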
From: "Andy "Krazy" Glew" on 26 Dec 2009 22:40

Terje Mathisen wrote:
> As soon as you let multiple cpus access the same cache line at the same
> time, you have a serious performance problem, which is why I suggested
> that only in the case of atomic/synch primitives should you ever do it.
>
> I accept however that if both you and Andy think this is bad, then it
> probably isn't such a good idea to allow programmers to be surprised by
> the difference between one size of data objects and another, both of
> which can be handled inside a register and with size-specific load/store
> operations available.
> :-(
>
> Terje

I'm not as sure as Nick is. I would *like* what you say to be true. I would much rather implement word-based "coherency" (eventual coherency) than byte-based.

Since this is a somewhat new approach, I tend to think in terms of extremes. Never fear, however: I can see some approaches that, while not as good as arbitrary bytes, have byte semantics without requiring the overhead.

---

By the way, if we do this, then having multiple CPUs access the same cache line is not so bad a problem. Not so bad for ordinary accesses; although still perhaps a problem when the inevitable cache flushes are needed.
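The hazard of word-based rather than byte-based coherency can be shown with a toy simulation (a hypothetical sketch, not Glew's design; `cache_a`/`cache_b` and the dirty-byte masks are invented for illustration). Two CPUs each write a different byte of the same 4-byte word; if writebacks merge at word granularity, the last writeback silently clobbers the other CPU's byte:

```python
memory = bytearray(b"\x00\x00\x00\x00")   # one shared 4-byte word

# Both CPUs pull a private copy of the whole word into their caches.
cache_a = bytearray(memory)
cache_b = bytearray(memory)

cache_a[0] = 0xAA    # CPU A writes byte 0 of the word
cache_b[1] = 0xBB    # CPU B writes byte 1 of the same word

# Word-granularity writeback: whole words flow back, last writer wins.
memory[:] = cache_a
memory[:] = cache_b
print(memory.hex())   # 00bb0000 -- CPU A's update to byte 0 is lost

# Byte-granularity writeback (per-byte dirty mask) preserves both:
memory = bytearray(b"\x00\x00\x00\x00")
for cache, dirty_bytes in ((cache_a, {0}), (cache_b, {1})):
    for i in dirty_bytes:
        memory[i] = cache[i]
print(memory.hex())   # aabb0000 -- both updates survive
```

The dirty-byte mask is one way to get "byte semantics without the overhead" of full byte coherency traffic: the interconnect still moves whole words, but merges only the bytes each writer actually touched.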
From: "Andy "Krazy" Glew" on 26 Dec 2009 22:44
Andrew Reilly wrote:
> On Wed, 23 Dec 2009 21:17:07 -0800, Andy \"Krazy\" Glew wrote:
>
>> And dataflow, no matter how you
>> gloss over it, does not really like stateful memory. Either we hide the
>> fact that there really is memory back there (Haskell monads, anyone?),
>> or there is another level of synchronization relating to when it is okay
>> to overwrite a memory location. I vote for the latter.
>
> Why? I've only been fooling around with functional programming for a
> year or so, and have not graduated to the point where I think that I'm up
> for Haskell (I haven't convinced myself that I can let go of explicit
> execution order, yet.) Compared to all of the Turing-school languages
> (Fortran's descendants) that are all about modifying state, the Church-
> school (Lisp's descendants) that is more about the computations is quite
> a liberating (and initially mind-altering) change.
>
> Why prefer adding layers of protocol and mechanism, so that you can
> coordinate the overwriting of memory locations, instead of just writing
> your results to a different memory location (or none at all, if the
> result is immediately consumed by the next computation?)
>
> [I suspect that the answer has something to do with caching and hit-
> rates, but clearly there are trade-offs and optimizations that can be
> made on both sides of the fence.]
>
> Cheers,

Yep. It's caching. Reuse of memory.

I've run experiments where you never reuse a memory location. (You can do it even without language support, by a mapping in your simulator.) Performance was very bad: 2X-4X worse. You have to somehow reuse memory to benefit from caches.
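The "mapping in your simulator" Glew mentions can be modeled as address renaming: every store is redirected to a fresh simulated location, so program semantics are unchanged but each program address's writes spread over ever-new cells, destroying the locality that caches exploit. Below is a minimal sketch of that idea (hypothetical; the class name and structure are invented, and a real simulator would apply this to a traced instruction stream):

```python
class NeverReuseMemory:
    """Single-assignment memory: no simulated location is ever overwritten."""

    def __init__(self):
        self.next_loc = 0      # next fresh simulated address
        self.current = {}      # program address -> its latest location
        self.store = {}        # simulated location -> value (write-once)

    def write(self, addr, value):
        loc = self.next_loc    # every store gets a brand-new cell
        self.next_loc += 1
        self.current[addr] = loc
        self.store[loc] = value

    def read(self, addr):
        return self.store[self.current[addr]]

mem = NeverReuseMemory()
for i in range(1000):
    mem.write(0, i)            # the SAME program address, 1000 times

print(mem.read(0))             # 999: the program sees normal semantics...
print(mem.next_loc)            # 1000: ...but 1000 distinct cells were touched
```

A loop that would normally hit the same cached line 1000 times instead touches 1000 distinct locations, so a finite cache sees almost no reuse; that is the mechanism behind the 2X-4X slowdown he reports.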