From: Piotr Wyderski on 20 Oct 2009 08:04

Gavin Scott wrote:

> Well, the flop is still a multi-billion dollar business I think, and
> we support some thousand(s) of them that are actively used by some of
> the world's largest companies for a substantial portion of their
> critical computing needs.

So do we. But the companies do not perform their calculations on Itanics just because they are Itanics, but because Intel and HP hired really good marketing guys who were able to sell that hardware. The very first thing we do is to persuade our clients not to use that platform. A large enough x64 or Sparc-based system is all they should need.

I agree that supporting Itanium-based systems is a business, but it is something like making money on cripples -- not a very ethical thing to do, since there exists a cure.

> There were perhaps more new and interesting ISA features in Itanium
> than in any other I can recall. You may not like them, or may not
> consider them a success, but it's definitely in the realm of "new
> and interesting within the last 20 years" in at least a few respects.

Surely. But a flop is a flop...

Best regards
Piotr Wyderski
From: Bill Todd on 20 Oct 2009 09:41

Terje Mathisen wrote:
....
> OTOH, I really do believe Intel intended to start deliver in 1997, in
> which case it _would_ have been, by far, the fastest cpu on the planet.

I don't care enough at this late date to do more than question the above assertion from what I can remember off the top of my head without attempting to quantify my reservations, but remember that the product that they allegedly expected to deliver in 1997 (or perhaps 1998, depending on how you read their early claims) was Merced, not McKinley. Do you really think that Merced in a then-current process "_would_ have been, by far, the fastest cpu on the planet" - especially for general (rather than heavily FP-specific) code? Considering the competition at about that time (e.g., the Alpha 21264 and the Pentium generation that was giving it a good run at times - and PA-RISC was no slouch either, all of them successful OoO implementations), that seems at least debatable.

> When they finally did deliver, years later, it was still the fastest cpu
> for dense fp kernels like SpecFP.

Fastest by a small margin, once Compaq grudgingly tuned the then-current Alpha products for SpecFP as much as the McKinleys were tuned (had they done this earlier, Merced would not have been at the top of the heap at all).

>
> They delivered too little, too late, but still managed to terminate
> several competing architecture development tracks at other vendors.

Based almost completely on ridiculously overblown hype and perceived market domination rather than on technical merit.

- bill
From: Bernd Paysan on 20 Oct 2009 10:17

nmm1(a)cam.ac.uk wrote:

>> Now, printing books is going to end soon, through e-book readers, lowering
>> the price of a book by another order of magnitude.
>
> That's two extreme speculations in one sentence :-)

I'm not talking about the official sales price of a book ;-). A usable e-book reader is to books what an MP3 player is to music. Did the official sales price of music drop due to the existence of MP3 players and downloaded music? No. It doesn't matter: the practical price of music dropped by an order of magnitude or more. The book publishers are going to repeat all the mistakes the music industry made, plus some new ones (like abusing DRM right from the start, e.g. Amazon erasing "1984").

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: nmm1 on 20 Oct 2009 10:52

In article <D7idnZckOPiTI0DXnZ2dnUVZ_vSdnZ2d(a)metrocastcablevision.com>,
Bill Todd <billtodd(a)metrocast.net> wrote:
>Terje Mathisen wrote:
>
>> OTOH, I really do believe Intel intended to start deliver in 1997, in
>> which case it _would_ have been, by far, the fastest cpu on the planet.
>
>I don't care enough at this late date to do more than question the above
>assertion from what I can remember off the top of my head without
>attempting to quantify my reservations, but remember that the product
>that they allegedly expected to deliver in 1997 (or perhaps 1998,
>depending on how you read their early claims) was Merced, not McKinley.
>Do you really think that Merced in a then-current process "_would_
>have been, by far, the fastest cpu on the planet" - especially for
>general (rather than heavily FP-specific) code? ...

Oh, yes, indeed, it would have been - if they had delivered in 1997 what they were promising in 1995-6. Inter alia, it would have had lazy execution as the norm, compilers that would have optimised arbitrary C spaghetti into excellent IA64 machine code, it would have been cheaper than the competition, and no doubt it would have been announced by a sounder of pigs singing hosannas overhead.

If they had delivered just the hardware, it would have been by far the fastest cpu on the planet for suitable codes (not necessarily floating-point, but definitely only a small subset). The process wouldn't have been a catastrophic problem, though the yield of its pancakes wouldn't have been wonderful.

And they might have pulled that off, without needing to break any of the laws of mathematics or physics.

Regards,
Nick Maclaren.
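[What "optimising arbitrary C spaghetti into excellent IA64 machine code" would have demanded is easier to see with a small example. Below is a minimal sketch in portable C, not taken from the thread and with invented names, standing in for IA-64 assembly. It shows if-conversion, one transformation the promised EPIC compilers had to perform routinely: a data-dependent branch is replaced by predicated, branch-free arithmetic so a wide in-order machine can keep its issue slots full.]

    #include <stddef.h>

    /* Branchy "spaghetti" form: the hard-to-predict branch stalls a wide
     * in-order machine, which cannot reorder around it the way an OoO
     * core can. */
    long sum_positive_branchy(const long *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (a[i] > 0)       /* data-dependent branch */
                sum += a[i];
        }
        return sum;
    }

    /* If-converted form: the condition becomes a predicate value and the
     * conditional update is folded into unconditional arithmetic. */
    long sum_positive_predicated(const long *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            long p = (a[i] > 0);   /* predicate: 1 or 0, no branch */
            sum += p * a[i];       /* adds 0 when the predicate is false */
        }
        return sum;
    }

[On real IA-64 the compiler uses predicate registers rather than a multiply, but the effect is the same: both outcomes are computed and the unwanted one is suppressed, which is exactly the kind of work the 1995-6 compiler promises were about.]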
From: nmm1 on 20 Oct 2009 15:49
In article <60e27937-1063-4241-9b96-88af251ba66b(a)p23g2000vbl.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
>> There are differences. 40 years ago, few applications programmers
>> needed to know about it - virtually the only exceptions were the
>> vector computer (HPC) people and the database people. That is
>> about to change.
>>
>Vector computing was trivial, relatively speaking. I learned Cray
>assembly language before I learned any microprocessor assembly
>language. None of us really knew much of anything about concurrency
>except how to turn loops inside out, to insert $IVDEP directives, to
>use the vector mask register, and to avoid indirect addressing.

Yes, it was. However, that's not what I was talking about. There was significant work done with asynchronicity, parallel databases, parallel I/O, parallel communications and so on. I can't say when it started, because it was before my time.

>> >All that work that was done in the first six days of the history of
>> >computing was aimed at doing the same thing that human "computers"
>> >were doing: calculating the trajectories of artillery shells. Leave
>> >the computer alone, and it can still manage that sort of very
>> >predictable calculation tolerably well.
>>
>> Sorry, but that is total nonsense. I was there, from the late 1960s
>> onwards.
>>
>Where do you think I was, Nick? Do you know? I don't want to make
>this personal, but when you use comp.arch as a forum for your personal
>mythology, I don't know how to avoid it. I stand by my
>characterization.

If you check up on it, the first interactive 'real-time' games date from the 1950s, and they were widespread by the mid-1960s. Again, before I started. If you will stop deprecating the work of the first generation, I will stop correcting you.

>For one thing, the computers that were available, even as late as the
>late sixties, were pathetic in terms of what they could actually do.

As the Wheelers frequently point out, many people did then with those 'pathetic' computers what people are still struggling to do. At the start of this thread, I pointed out that the initial inventions were often no more than proof of concept, but their existence is enough to show that things have NOT changed out of all recognition.

>> >Even though IBM and its camp-followers had to learn early how to cope
>> >with asynchronous events ("transactions"), they generally did so by
>> >putting much of the burden on the user: if you didn't talk to the
>> >computer in just exactly the right way at just exactly the right time,
>> >you were ignored.
>>
>> Ditto.
>>
>Nick. I *know* when time-sharing systems were developed. I wasn't
>involved, but I was *there*, and I know plenty of people who were.

What on earth are you on about? Let's ignore the detail that Cambridge was one of the leading sites in the world in that respect. You claim that the 'IBM' designs involved transactions being ignored - nothing could be further from the truth. That is a design feature of the X Window System (and perhaps Xerox PARC before it), and was NOT a feature of the mainframe designs.

Regards,
Nick Maclaren.
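[For anyone who never touched those machines, here is a minimal sketch of the idioms Robert lists, written in C rather than Cray Fortran; the directive spellings are compiler-specific and assumed here (Cray Fortran's original was CDIR$ IVDEP), and the function names are invented for illustration. It shows three things: asserting that a loop carries no dependence so it may be vectorized, a per-element condition that the hardware handles with a vector mask instead of a branch, and the indirect (gather) addressing pattern that early vector machines handled badly and programmers therefore avoided.]

    #include <stddef.h>

    void saxpy_vectorizable(float *restrict y, const float *restrict x,
                            float a, size_t n)
    {
        /* The $IVDEP idea: promise the compiler there is no loop-carried
         * dependence so it may vectorize.  GCC spells this
         * "#pragma GCC ivdep"; Intel's compiler accepts "#pragma ivdep". */
    #pragma GCC ivdep
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    void masked_update(float *restrict y, const float *x, size_t n)
    {
        /* The vector-mask idea: the condition becomes a per-element mask,
         * so whole vectors are processed with the false lanes suppressed
         * rather than branching on every element. */
        for (size_t i = 0; i < n; i++)
            if (x[i] > 0.0f)
                y[i] = x[i];
    }

    void gather_update(float *restrict y, const float *x,
                       const size_t *idx, size_t n)
    {
        /* The "avoid indirect addressing" idea: x[idx[i]] is a gather,
         * which early vector hardware did slowly or not at all, so loops
         * were restructured to keep accesses at unit or constant stride. */
        for (size_t i = 0; i < n; i++)
            y[i] = x[idx[i]];
    }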