From: nmm1 on 7 Sep 2009 16:51

In article <slrnhaaruk.4c3.jhaynes(a)localhost.localdomain>,
Jim Haynes <jhaynes(a)alumni.uark.edu> wrote:
>
> And what they are doing is very much limited by the manufacturing
> processes, so you can't propose a different architecture without
> considering how it is to be manufactured.

That's true.

> And there is the momentum of what is already being done. You can't
> plan to make an architecture and build one or a few machines to test
> the water - anything you do has to sell thousands and millions to be
> worth the cost of getting ready to produce it.

Sorry, but that isn't. Engineering companies often produce prototypes
in small numbers, fairly often with no intention of selling many. All
that is needed is that the information gained is expected to justify
the investment. That is, after all, what research and development is
all about.

As I have posted before, Intel can easily afford to spend 10 million
on an experimental project that fails, but cannot afford to save the
10 million and miss out on the next revolution. Judging when an
experiment is worth the gamble is what senior vice-presidents are
paid to do.

Regards,
Nick Maclaren.
From: Robert Myers on 7 Sep 2009 16:54

On Sep 7, 3:38 pm, n...(a)cam.ac.uk wrote:
> Only a complete loon would expect current software to do anything
> useful on large numbers of processors, let alone with a new
> architecture!

You've such a way with words, Nick. Browsers, which are probably the
OS of the future, are already multi-threaded or soon to be. No longer
does the browser freeze because of some JavaScript in an open tab.
Browsers that don't seize that advantage will fall by the wayside.
The same will happen all over software, and at increasing levels of
fineness of division of labor.

Robert.
From: Mayan Moudgill on 7 Sep 2009 16:58

Robert Myers wrote:
> On Sep 6, 9:44 pm, Mayan Moudgill <ma...(a)bestweb.net> wrote:
>
>> Basically, I think the field has gotten more complicated and less
>> accessible to the casual reader (or even the gifted, well-read
>> amateur). The knowledge required of a computer architect has
>> increased to the point that it's probably impossible to acquire
>> even a *basic* grounding in computer architecture outside of
>> actually working in the field developing a processor or _possibly_
>> studying with one of a few PhD programs. The field has gotten to
>> the point where it _may_ require architects to specialize in
>> different application areas; a lot of the skills transfer, but it
>> still requires retraining to move from, say, general-purpose
>> processors to GPU design.
>
> I don't know about computer architecture, but the general feeling in
> physics has always been that almost no one (except the speaker and
> his small circle of peers, of course) is smart enough to do physics,
> and you seem to be echoing that unattractive sentiment here.

Unlike physics, you don't have to be smart to do computer
architecture; it's much more of an art form. However, it's informed
by a lot of knowledge. When one makes an architectural trade-off, one
has to evaluate:
- how much will this benefit?
- how will it be implemented?
- how will it fit together with the rest of the design?
- is there some better way of doing things?
- does it really solve the problem you think it's going to solve?
- how will it affect cycle time? area? power? yield?

If you draw a box with a particular feature, you'd better be able to
answer the time/area/power question. That depends on having a good
feel for how it will translate into actual hardware, which in turn
requires you to understand both what you could do if you could do
full-custom and what you could do if you were restricted to libraries.
You have to know the process you're designing in and its restrictions
- in particular, wire delay will get more important. If you're using
dynamic logic, there are even more issues. You have to keep in mind
the limitations of the tools to place/route/time/perform
noise-analysis etc. You have to understand the pipeline you're going
to fit into, and the floorplan of the processor, so that you can
budget for wire delays and chop up the block into appropriate stages.

And this does not even address the issues of coming up with the
features in the first place. That's generally driven by the
application or application mix you are trying to tackle. You have to
be able to understand where the bottlenecks are. Then you have to
come up with ways to remove them. Quite often, this can be done
without changes to the architecture, or with changes to the
architecture that appear to solve a completely different problem.
Also, if you remove a bottleneck, you have to figure out whether
there's going to be a bottleneck just behind it.

Of course, it helps to have an encyclopedic knowledge of what was
done before, both in hardware and in the software that ran on it.
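The bottleneck-behind-the-bottleneck point can be sketched with a toy
model: a pipeline's throughput is set by its slowest stage, so
speeding up the dominant stage merely promotes the next-slowest one.
(The stage names and timings below are illustrative, not from any
design discussed in the thread.)

```python
# Toy model: pipeline throughput is limited by the slowest stage.
# Removing one bottleneck exposes the one "just behind it".
stages = {"fetch": 2.0, "decode": 1.0, "execute": 5.0, "writeback": 1.5}

def bottleneck(stage_times):
    """Return the name of the slowest (throughput-limiting) stage."""
    return max(stage_times, key=stage_times.get)

print(bottleneck(stages))   # "execute" dominates at 5.0
stages["execute"] = 1.2     # redesign the execute stage...
print(bottleneck(stages))   # ...and "fetch" (2.0) is now the limiter
```

The second call shows why a bottleneck fix must be evaluated against
the whole design, not in isolation.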
From: Mayan Moudgill on 7 Sep 2009 17:01

Anne & Lynn Wheeler wrote:
> Mayan Moudgill <mayan(a)bestweb.net> writes:
>
>> Consider this: at one time, IBM had at least 7 teams developing
>> different processors: Rochester, Endicott, Poughkeepsie/Fishkill,
>> Burlington, Raleigh, Austin & Yorktown Heights (R&D).
>
> don't forget los gatos vlsi lab ... did chips for various disk
> division products (like jib prime for 3880 disk controller). also
> put in lots of work on blue iliad (1st 32bit 801 ... never
> completed). then there was stuff going outside the US.

Of course <smack> forgot Böblingen. Hmm... can't think of anywhere
else, though IBM labs at Haifa and Zurich might have done some work.
From: nmm1 on 7 Sep 2009 17:10
In article
<aaf198b8-b33b-4214-a142-b0958f6d99cf(a)m11g2000yqf.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
> On Sep 7, 3:38 pm, n...(a)cam.ac.uk wrote:
>
>> Only a complete loon would expect current software to do anything
>> useful on large numbers of processors, let alone with a new
>> architecture!
>
> You've such a way with words, Nick. Browsers, which are probably the
> OS of the future, are already multi-threaded or soon to be.

So? If you think that making something "multi-threaded" means that it
can make use of large numbers of processors, you have a lot to learn
about developing parallel programs. And, by "large", I don't mean
4-8, I mean a lot more.

> No longer does the browser freeze because of some JavaScript in an
> open tab.

Oh, YEAH. I use a browser that has been multi-threaded for a fair
number of versions, and it STILL does that :-(

> Browsers that don't seize that advantage will fall by the wayside.
> The same will happen all over software, and at increasing levels of
> fineness of division of labor.

Yeah. That's what I was being told over 30 years ago. Making use of
parallelism is HARD - anyone who says it is easy is a loon. Yes,
there are embarrassingly parallel requirements, but there are fewer
than most people think, and even they hit scalability problems if not
carefully designed.

Regards,
Nick Maclaren.
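Nick's scalability point is the classic Amdahl's-law argument: if a
fraction s of a program is inherently serial, p processors can never
speed it up beyond 1/(s + (1-s)/p). A minimal sketch (the 5% serial
fraction is illustrative, not a figure from the thread):

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a program that is 95% parallel stops scaling well past ~8 cores:
for p in (4, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.05, p), 2))
# 4    3.48
# 8    5.93
# 64   15.42
# 1024 19.64   (asymptote: 1/0.05 = 20x, no matter how many processors)
```

This is why "multi-threaded" and "makes use of large numbers of
processors" are very different claims: a handful of coarse threads
leaves the serial fraction, and hence the ceiling, essentially
untouched.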