From: jgd on 22 Mar 2010 10:59

In article <ho7h4o$e46$2(a)news.eternal-september.org>, ahlstromc(a)launchmodem.com (Chris Ahlstrom) wrote:

> How is Windows' support for supercomputer clusters?

Well, it exists, a bit. There's a version of Windows Server that's intended to be distributed across many x86-64 nodes. Its main selling point is that you don't need those snobby, awkward Linux/UNIX people to run it; your corporate Windows support people can supposedly handle it.

The general reaction from potential customers has apparently been "Huh?", although a few corporations have signed up.

--
John Dallman, jgd(a)cix.co.uk, HTML mail is treated as probable spam.
From: Anne & Lynn Wheeler on 22 Mar 2010 11:30

jgd(a)cix.compulink.co.uk writes:
> Well, it exists, a bit. There's a version of Windows Server that's
> intended to be distributed across many x86-64 nodes. Its main selling
> point is that you don't need those snobby, awkward Linux/UNIX people to
> run it; your corporate Windows support people can supposedly handle it.
>
> The general reaction from potential customers has apparently been "Huh?"
> although apparently a few corporations have signed up.

re:
http://www.garlic.com/~lynn/2010f.html#50 Handling multicore CPUs; what the competition is thinking

at '91 asilomar acm sigops conference, i had a running argument with jim about whether commodity chips could be used for both high availability and cluster scaleup (he was still at dec at the time ... so there possibly was some bias for vax/clusters). later he went to work for redmond and had to be up on the stage with the ceo for their cluster announcement.
http://research.microsoft.com/en-us/um/people/gray/

--
42yrs virtualization experience (since Jan68), online at home since Mar1970
From: Mike Jr on 23 Mar 2010 00:10

On Mar 22, 9:38 am, Anne & Lynn Wheeler <l...(a)garlic.com> wrote:
> Mike Jr <n00s...(a)comcast.net> writes:
> > Thank you. In the far distant past, IBM had a machine called the SP2
> > that used a shared nothing architecture to get around the SMP shared
> > memory bottleneck. The SP2 was a supercomputer.
>
> before SP2 ... there was SP1 ... some of the genesis mentioned in this
> jan92 meeting in ellison's conference room
> http://www.garlic.com/~lynn/95.html#13
> [snip]

Wow. Back in the '90s I did some high-level consulting for IBM up in Somers. That was just around the time that the mainframe business imploded. I was appalled both by how decisions had been focused through the mainframe lens and by how innovation in other labs, like Toronto, was stifled. A bunch of good people trying to do the right thing and getting nowhere. I heard numerous stories very similar to what you describe. It was disheartening.

------------------
http://www.jyqlv.com/index.php/2010/03/20/intel-hopes-48-core-chip-will-solve-new-challenges/

"The system is different in some ways, though, notably in its lack of cache coherency technology that keeps data stored in each core's high-speed memory bank synchronized with the others on the chip. By contrast, Intel's Larrabee processor, a many-core x86 chip under development for graphics acceleration, is a cache-coherent design that has a large amount of real estate devoted to caching data.

One major feature of the SCC design is a high-speed mesh network that lets each of the 48 cores communicate with others or with the four linked memory controllers. The first-generation Tera-scale chip had such a network, but the second-generation mesh consumes only a third of the power and is accelerated with built-in hardware instructions for minimum communication delays, Rattner said."
-------------------
http://www.physorg.com/news187463445.html

"'With older, single-processor systems, computers behave exactly the same way as long as you give the same commands. Today's computers are non-deterministic. Even if you give the same set of commands, you might get a different result,' Ceze said. He and UW associate professors of computer science and engineering Mark Oskin and Dan Grossman and UW graduate students Owen Anderson, Tom Bergan, Joseph Devietti, Brandon Lucia and Nick Hunt have developed a way to get modern, multiple-processor computers to behave in predictable ways, by automatically parceling sets of commands and assigning them to specific places. Sets of commands get calculated simultaneously, so the well-behaved program still runs faster than it would on a single processor."

--Mike Jr.
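The physorg excerpt describes deterministic multithreading: work still runs in parallel, but updates to shared state are forced into a fixed order so every run produces the same result. Here's a toy sketch of that core idea (this is not the UW group's actual system; all names below are made up for illustration) — threads compute privately in parallel but commit to shared state only in round-robin turn order:

```python
# Toy sketch of deterministic multithreading (illustrative only):
# threads do private work concurrently, but commit to shared state
# in a fixed round-robin order, so the interleaving -- and hence the
# final result -- is identical on every run.
import threading

NUM_THREADS = 4
ROUNDS = 3

class DeterministicCommitter:
    def __init__(self, n):
        self.n = n
        self.turn = 0                      # whose turn it is to commit
        self.cond = threading.Condition()
        self.log = []                      # shared state: commit order

    def commit(self, tid, value):
        with self.cond:
            # Block until it is this thread's turn to touch shared state.
            while self.turn != tid:
                self.cond.wait()
            self.log.append((tid, value))
            self.turn = (self.turn + 1) % self.n
            self.cond.notify_all()

def worker(tid, committer):
    for r in range(ROUNDS):
        private = tid * 100 + r            # private "parallel" computation
        committer.commit(tid, private)     # ordered commit to shared state

def run_once():
    c = DeterministicCommitter(NUM_THREADS)
    threads = [threading.Thread(target=worker, args=(t, c))
               for t in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c.log

# Two independent runs produce identical commit logs.
assert run_once() == run_once()
```

The turn-taking here serializes the commits entirely, which throws away the parallel speedup; the point of the real research is to batch commits coarsely enough that parallelism survives while the ordering stays fixed.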
From: Robert Myers on 23 Mar 2010 00:43

On Mar 22, 7:22 am, Mike Jr <n00s...(a)comcast.net> wrote:
>
> BTW, I am posting this from my Ubuntu home computer running on an
> Intel i7 multi-core CPU.

Don't tell Eugene. He won't be able to hobnob with members of Congress if you can do things on a desktop. Or shoot polar bears, either. Someone in Virginia just got executed for bragging, but I don't think Eugene reads that kind of news. Too busy drinking wine with members of Congress. Anything less than a warehouse need not apply. Nick agrees; his job depends on it. Ancient lawns and other pretentiousness.

Ubuntu? Don't you know that Microsoft controls everything?

Robert.
From: Penang on 23 Mar 2010 01:04

On Mar 22, 1:38 am, Anne & Lynn Wheeler <l...(a)garlic.com> wrote:
> Mike Jr <n00s...(a)comcast.net> writes:
> > Thank you. In the far distant past, IBM had a machine called the SP2
> > that used a shared nothing architecture to get around the SMP shared
> > memory bottleneck. The SP2 was a supercomputer.
>
> before SP2 ... there was SP1 ... some of the genesis mentioned in this
> jan92 meeting in ellison's conference room
> http://www.garlic.com/~lynn/95.html#13
>
> and this old email
> http://www.garlic.com/~lynn/lhwemail.html#medusa
> before it was transferred and positioned as numerical intensive only.
>
> recent thread in c.a.
> http://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
> http://www.garlic.com/~lynn/2010f.html#48 Nonlinear systems and nonlocal supercomputing
> http://www.garlic.com/~lynn/2010f.html#49 Nonlinear systems and nonlocal supercomputing
>
> as mentioned in the above thread ... the reason for doing message
> passing was the rios chip set didn't support cache consistency for
> shared memory (aka it didn't "scale" past one). the engineering manager that we
> reported to when starting the project had only relatively recently moved
> to be head of somerset (joint motorola, ibm, apple, etc) that would do
> single chip 801/risc and eventually support for cache consistency and
> shared memory. as mentioned in the above thread, had also been doing
> some stuff with SCI (which was numa shared memory) ... but until there was a
> chip with cache consistency semantics ... there wasn't much to do.
>
> in any case, within hrs of this email ... the hammer fell, the effort
> transferred, we were told we couldn't work on anything with more than
> four processors
> http://www.garlic.com/~lynn/2006x.html#email920129
>
> it was then announced as product for numerical intensive only ... some
> past press ...
> one from 17feb92
> http://www.garlic.com/~lynn/2001n.html#6000clusters
>
> and another from later that summer
> http://www.garlic.com/~lynn/2001n.html#6000clusters2
>
> and we were gone within weeks of the above (got paid to leave and not
> come back ... extra inducement was structured as sabbatical w/some
> benefits to retirement). recent mention getting letter on the
> last day claiming was promoted the following day ... this was after a
> decade of being told that there were no promotions in my future
> http://www.garlic.com/~lynn/2009r.html#6 Have you ever thought about taking a sabbatical?
> http://www.garlic.com/~lynn/2010f.html#20 Would you fight?
>
> the SCI NUMA (multi-core) flavor from the 90s was multiple (2-4,
> single-core) chips on the same board with shared L2 ... that were then
> interconnected with SCI. sequent and data general both did four intel
> processor boards with SCI & convex did a two hp risc processor board
> (with SCI).
>
> note that some of the same people involved in transferring the project
> and telling us that we couldn't work on anything with more than four
> processors ... had also been involved in blocking our bidding on the NSFNET
> RFP; a couple recent references (i.e. director of NSF even wrote letter
> to company execs ... but that just aggravated the internal politics)
> http://www.garlic.com/~lynn/2010e.html#64 LPARs: More or Less?
> http://www.garlic.com/~lynn/2010e.html#80 Entry point for a Mainframe?
>
> --
> 42yrs virtualization experience (since Jan68), online at home since Mar1970

So what happened next? You guys got the severance checks and just went home? That's it? Wow!
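Lynn's point about the RIOS chipset — message passing was chosen because the hardware offered no cache-coherent shared memory — is the same shared-nothing discipline the SP1/SP2 and the Intel SCC rely on. A minimal sketch of the style (purely illustrative, nothing like IBM's actual code; threads stand in for cluster nodes, and the discipline rather than the mechanism is the point): each "node" owns its chunk of data and exchanges partial results only through explicit messages, never through shared variables.

```python
# Toy sketch of shared-nothing, message-passing parallelism
# (illustrative only). Each "node" computes on private data and
# reports its partial result via an explicit message queue; the
# algorithm assumes no coherent shared memory.
import queue
import threading

def node(rank, chunk, outbox):
    partial = sum(x * x for x in chunk)  # compute on private data only
    outbox.put((rank, partial))          # explicit message, not shared state

def cluster_sum_of_squares(data, nnodes=4):
    outbox = queue.Queue()
    # Deal the data out round-robin, one private chunk per node.
    chunks = [data[i::nnodes] for i in range(nnodes)]
    workers = [threading.Thread(target=node, args=(r, chunks[r], outbox))
               for r in range(nnodes)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Combine the received messages into the global result.
    return sum(p for _, p in (outbox.get() for _ in range(nnodes)))

assert cluster_sum_of_squares(list(range(100))) == sum(x * x for x in range(100))
```

On a real cluster the queue would be a network transport (and the partial sums a reduction step), but the structure — private data, explicit sends, a combining phase — is the same reason the design could scale without coherent caches.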