From: John Larkin on 26 May 2010 22:43

On Wed, 26 May 2010 21:42:18 -0400, Phil Hobbs
<pcdhSpamMeSenseless(a)electrooptical.net> wrote:

>John Larkin wrote:
>> On Wed, 26 May 2010 08:30:38 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless(a)electrooptical.net> wrote:
>>
>>> Jan Panteltje wrote:
>>>> The next step: a way to produce flexible gallium arsenide wafers in
>>>> quantity has been found:
>>>> http://beforeitsnews.com/news/48/149/Semiconductor_Gallium_Arsenide_Twice_As_Efficient_As_Silicon_in_Solar_Power_Applications_Say_Illinois_Researchers.html
>>>>
>>>> Intel considers producing CPUs on the stuff.
>>>>
>>>> Now the THz processor?
>>>> Multicore is dead ;-)
>>>>
>>> Not till someone figures out how to make P-channel GaAs FETs that are
>>> worth anything. Hole mobility in GaAs is pitiful. Building a modern
>>> processor out of NMOS would make for rather interesting power
>>> dissipation densities--i.e. the whole thing would turn to lava.
>>>
>>
>> Easy. Do a 50 GHz, billion-transistor CPU in RTL.
>>
>> John
>>
>
>You might be able to do something with a 3D stack, putting GaAs on top
>of Si. The problem there is that you really need the lower metal layers
>(fine pitch, short lines) to connect the transistors in a gate. My old
>colleague John Bowers of UCSD and his group figured out how to get InP
>on silicon, which would be another approach.
>
>Cheers
>
>Phil Hobbs

So far, the defect density of compound semiconductors kind of makes the
point moot. I'm still astonished that anybody can run a 100M-device
silicon IC through scores of process steps and mask layers and get
anything to work.

Too bad nobody is making opamps out of phemts. I recall that some people
used to make (bad) opamps out of all-NPN transistors.

John
From: Jan Panteltje on 27 May 2010 05:48

On a sunny day (Thu, 27 May 2010 00:00:40 -0700) it happened Kevin
McMurtrie <mcmurtrie(a)pixelmemory.us> wrote in
<4bfe1898$0$22159$742ec2ed(a)news.sonic.net>:

>Software for 8 core systems is my job. There are two problems:
>
>1) Very few developers understand multithreading to a useful degree.
>Usually they know what a semaphore is but they don't know how to manage
>lock contention. The only fix for this is demanding higher standards.
>
>2) Most tasks CAN be broken up. The problem is that classic threading
>tools have gobs of overhead, and that limits multithreading to tasks
>with a low rate of interaction or repetition. Only very recent SDKs
>have addressed this using lightweight task queues and executors.
>
>Apple has some good docs on solving this:
>http://images.apple.com/macosx/technology/docs/GrandCentral_TB_brief_20090903.pdf
>
>Java 1.5 and beyond have a standardized system for lightweight tasks
>too. The Sun implementation could be better but it's easy enough to
>swap in a custom one.

I see it this way:
If I have a render session that takes 300 minutes on a 1 GHz machine
(as an example), then, if memory speed keeps pace and all that, it will
run in 3 minutes on a 300 GHz machine.
I had many of those long render sessions running, usually started late
at night, ready in the morning.

But if somebody came along with a 300-core 1 GHz machine I would see no
advantage; on the contrary, that person would spend the next 300 *days*
rewriting all the software and trying to take advantage of those extra
cores, perhaps getting a 10x (more likely 3x, or even less) speed
improvement.
By that time (300 days later) I would have rendered thousands of
productions, WITHOUT ever having to modify a single program or script,
or even recompile.
The extra clock speed would however allow me to use much better effects
and incredibly nice features, do it all in real time at full resolution,
resulting in a better end result.
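[Editor's note: Jan's pessimism about the 300-core box is essentially Amdahl's law: if only a fraction p of the render pipeline parallelizes, n cores can never give more than 1/((1-p) + p/n) speedup, while a faster clock speeds up everything, serial parts included. A minimal sketch; the fractions below are illustrative, not measurements of any real renderer:]

```python
def amdahl_speedup(p, n):
    """Best-case speedup on n cores when a fraction p of the
    runtime is perfectly parallelizable (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# A 300x clock bump helps serial and parallel code alike: full 300x.
print(amdahl_speedup(1.0, 300))             # -> 300.0 (ideal case)

# If only 90% of the render loop parallelizes, 300 cores top out
# near Jan's "10x" figure; at 70% it is closer to his "3x or less".
print(round(amdahl_speedup(0.9, 300), 1))   # -> 9.7
print(round(amdahl_speedup(0.7, 300), 1))   # -> 3.3
```

[The asymmetry is the whole argument: the serial fraction caps the multicore machine no matter how many cores are added, but never caps the faster clock.]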
From: Tim Williams on 27 May 2010 07:55

*Cough*

How long do you figure the original software took to write? 300 days?
If they had designed it for 300-core operation from the get-go, they
wouldn't have had that problem.

Sounds like a failure of management to me :)

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms

"Jan Panteltje" <pNaonStpealmtje(a)yahoo.com> wrote in message
news:htlf5v$qbe$1(a)news.albasani.net...
> I see it this way:
> If I have a render session running that takes 300 minutes on a 1 GHz (as
> example),
> then, if memory speed keeps track and all that, it will run in 3 minutes
> on a 300 GHz.
> I had many of those long render sessions running, usually start late at
> night, ready in the morning.
>
> But if somebody came with a 300 core 1 GHz I would see no advantage, on
> the contrary,
> that person would spend the next 300 *days* rewriting all soft and trying
> to take advantage of those
> extra cores, perhaps getting a 10x (more likely 3x, or even less) speed
> improvement.
> By that time (300 days later) I would have rendered thousands of
> productions,
> WITHOUT ever having to modify a single program or script, or even
> recompile.
> The extra speed would however allow me to use much better effects and
> incredibly nice features,
> do it all in real time in full resolution, resulting in a better end
> result.
>
From: Jan Panteltje on 27 May 2010 11:12

On a sunny day (Thu, 27 May 2010 06:55:46 -0500) it happened "Tim
Williams" <tmoranwms(a)charter.net> wrote in
<htlmk9$sfc$1(a)news.eternal-september.org>:

>*Cough*
>
>How long do you figure the original software took to write? 300 days? If
>they had designed it for 300-core operation from the get-go, they wouldn't
>have had that problem.
>
>Sounds like a failure of management to me :)
>
>Tim

You are an idiot, I can hardly decrypt your rant.
The original software was written when there WERE no multicores.
And I wrote large parts of it,
AND it cannot be split up into more than, say, 6 threads even if you
wanted to.
But, OK, I guess somebody could use a core for each pixel, plus do
single-instruction multiple-data perhaps; will it be YOU who writes all
that? Intel has a job for you!
64-bit x86 is still around, and one of the reasons AMD was successful
with it is that it would run EXISTING code -- not all of it, but even a
recompile was easy.
But multicore is a totally different beast.
I'd love to see a 300 GHz gallium arsenide x86 :-)
I would buy one.
From: Tim Williams on 27 May 2010 13:00
"Jan Panteltje" <pNaonStpealmtje(a)yahoo.com> wrote in message news:htm259$o5f$1(a)news.albasani.net... >>How long do you figure the original software took to write? 300 days? If >>they had designed it for 300-core operation from the get-go, they wouldn't >>have had that problem. >> >>Sounds like a failure of management to me :) > > You are an idiot, I can hardly decrypt your rant. Strange, it's perfect American English. > The original soft was written when there WERE no multicores. > And I wrote large parts of it, > AND it cannot be split up in more then say 6 threads if you wanted to. Sounds like you aren't trying hard enough. Design constraints chosen early on, like algorithmic methods, can severely impact the final solution. Drawing pixels, sure, put a core on each and let them chug. Embarrassingly parallel applications are trivial to split up. If there's some higher level structure to it that prevents multicore execution, that would be the thing to look at. And yes, it may result in rewriting the whole damn program. Which was my point, it may be necessary to reinvent the entire program, in order to accommodate new design constraints as early as possible. > But, OK, I guess somebody could use a core for each pixel, GPUs have been doing it for decades. > plus do multiple data single instruction perhaps, > will it be YOU who writes all that? Intel has a job for you! No. SIMD is an instruction thing, not a core thing. Not at all parallel. SIMD tends to be cache limited, like any other instruction set, running on any other core. The only way to beat the bottleneck is with more cores running on more cache lines. FWIW, maybe you've noticed here before, I've done a bit of x86 assembly before. I'm not unfamiliar with some of the enhanced instructions that've been added, like SIMD. However, I've never used anything newer than 8086, not directly in assembly. 
More and more, especially with APIs and objects and abstraction and
RAM-limited bandwidth, assembly is less and less practical. Only
compiler designers really need to know about it. The days of
hand-assembled inner loops went away some time after 2000.

> I'd love to see a 300 GHz gallium arsenide x86 :-)
> I would buy one.

If Cray were still around, I bet they would actually be crazy enough to
make John's GaAs RTL monster.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms