From: Dirk Bruere at NeoPax on 1 Jun 2010 09:45

On 01/06/2010 04:07, John Larkin wrote:
>
> http://online.wsj.com/article/SB10001424052748703406604575278671661900004.html
>
>
> John
>
nVidia - 512 cores. I suspect that 512 simple cores will out-compute 50
complex cores.

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.blogtalkradio.com/onetribe - Occult Talk Show
From: JosephKK on 10 Jun 2010 09:21

On Mon, 31 May 2010 20:07:31 -0700, John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote:
>
> http://online.wsj.com/article/SB10001424052748703406604575278671661900004.html
>
>
> John

For good statistics and historical data, try here:
http://www.top500.org/stats/list/30/procfam
From: MooseFET on 10 Jun 2010 09:56

On Jun 1, 11:07 am, John Larkin
<jjlar...(a)highNOTlandTHIStechnologyPART.com> wrote:
> http://online.wsj.com/article/SB1000142405274870340660457527867166190...
>
> John

50 seems an odd number. I would expect a power-of-2 or a power-of-3
number of cores.

The power-of-2 number is just because things tend to be doubled and
doubled again.

The power-of-3 number is because, if you imagine a hypercube-like
arrangement where each edge is a bus for communication directly between
cores, it makes sense to have 3 processors on a bus: while A and B are
talking, C can't be having a conversation with either of them anyway.
This would let the array of cores get information between themselves
quickly. It assumes that each core has a cache that the transfers work
to keep in sync.

At some point, adding more of the same cores stops working as well as
adding some special-purpose hardware to a fraction of the cores.

Not every core needs to be able to do floating point at all. Some would
profit from a complex-number ALU, or perhaps a 3-D ALU.

Chances are one core would get stuck with the disk I/O etc.; that core
would profit from fast interrupt times, the others less so.
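As a rough illustration of the base-3 idea (a sketch of my own; the
3^3 = 27 arrangement and the index-to-bus mapping are assumptions, not
anything from the article): write each core's index in base 3, and the
cores that share one of its buses are the ones whose index differs from
it in exactly one digit.

/* Sketch: enumerate the bus neighbours of one core in a 3^3 array.
 * Cores are numbered 0..26; each base-3 digit selects a position along
 * one dimension, and the three cores that agree in the other two digits
 * sit on the same bus. */
#include <stdio.h>

#define DIMS 3   /* base-3 digits, i.e. dimensions of the array */

static int pow3(int n) { int p = 1; while (n--) p *= 3; return p; }

int main(void)
{
    int core = 13;   /* arbitrary example core, 111 in base 3 */

    for (int d = 0; d < DIMS; d++) {
        int step  = pow3(d);
        int digit = (core / step) % 3;
        int base  = core - digit * step;   /* same index with digit d cleared */
        printf("bus %d: cores %d, %d, %d\n",
               d, base, base + step, base + 2 * step);
    }
    return 0;
}

Each core ends up on DIMS buses with two neighbours per bus, which is
the "while A and B are talking, C waits anyway" grouping described
above.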
From: John Larkin on 10 Jun 2010 11:52

On Thu, 10 Jun 2010 06:56:56 -0700 (PDT), MooseFET <kensmith(a)rahul.net>
wrote:

>On Jun 1, 11:07 am, John Larkin
><jjlar...(a)highNOTlandTHIStechnologyPART.com> wrote:
>> http://online.wsj.com/article/SB1000142405274870340660457527867166190...
>>
>> John
>
>50 seems an odd number. I would expect a power-of-2 or a power-of-3
>number of cores.

Maybe they did 64 and only got 50 to work?

>
>The power-of-2 number is just because things tend to be doubled and
>doubled again.
>
>The power-of-3 number is because, if you imagine a hypercube-like
>arrangement where each edge is a bus for communication directly between
>cores, it makes sense to have 3 processors on a bus: while A and B are
>talking, C can't be having a conversation with either of them anyway.
>This would let the array of cores get information between themselves
>quickly. It assumes that each core has a cache that the transfers work
>to keep in sync.
>
>At some point, adding more of the same cores stops working as well as
>adding some special-purpose hardware to a fraction of the cores.
>
>Not every core needs to be able to do floating point at all. Some would
>profit from a complex-number ALU, or perhaps a 3-D ALU.
>
>Chances are one core would get stuck with the disk I/O etc.; that core
>would profit from fast interrupt times, the others less so.

Eventually we'll have a CPU for every device driver, and a CPU for
every program thread, with real execution protection. No more buffer
overflow exploits, no more crashed OSs, no more memory leaks.

John
From: Vladimir Vassilevsky on 10 Jun 2010 12:03
John Larkin wrote:

> Eventually we'll have a CPU for every device driver, and a CPU for
> every program thread, with real execution protection. No more buffer
> overflow exploits, no more crashed OSs, no more memory leaks.

Instead we will have races, deadlocks, data coherency issues, state
save/restore problems, unpredictable arbitration, and version hell.
Thanks, but no thanks. Development for a single-core system is a heck
of a lot simpler.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
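To make "deadlocks" concrete, here is the classic lock-ordering trap in
pthreads (purely an illustration, nothing from the posts above; the
thread names and one-second sleeps are only there to make the hang
reproducible):

/* Two threads take the same pair of mutexes in opposite order.
 * Once A holds lock1 and B holds lock2, each waits forever for the
 * other's lock and the program hangs. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock1);
    sleep(1);                      /* widen the window so the hang is reliable */
    pthread_mutex_lock(&lock2);    /* blocks forever once B holds lock2 */
    printf("A never gets here\n");
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock2);
    sleep(1);
    pthread_mutex_lock(&lock1);    /* blocks forever once A holds lock1 */
    printf("B never gets here\n");
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);         /* never returns; the fix is one agreed lock order */
    pthread_join(b, NULL);
    return 0;
}

The usual cure is a single agreed lock order (or higher-level message
passing), and that kind of discipline only gets harder to enforce as
the core count goes up.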