From: Nick Maclaren on

In article <PNDmk.8961$Bt6.3201(a)newsfe04.iad>,
"Chris M. Thomasson" <no(a)spam.invalid> writes:
|>
|> FWIW, I have a memory allocation algorithm which can scale because it's based
|> on per-thread/core/node heaps:
|>
|> AFAICT, there is absolutely no need for memory-allocation cores. Each thread
|> can have a private heap such that local allocations do not need any
|> synchronization.

Provided that you can live with the constraints of that approach.
Most applications can, but not all.


Regards,
Nick Maclaren.
From: John Larkin on
On Thu, 07 Aug 2008 14:51:57 GMT, Jan Panteltje
<pNaonStpealmtje(a)yahoo.com> wrote:

>On a sunny day (Thu, 07 Aug 2008 07:08:52 -0700) it happened John Larkin
><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
><d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com>:
>
>>>Been there - done that :-)
>>>
>>>That is precisely how the early SMP systems worked, and it works
>>>for dinky little SMP systems of 4-8 cores. But the kernel becomes
>>>the bottleneck for many workloads even on those, and it doesn't
>>>scale to large numbers of cores. So you HAVE to multi-thread the
>>>kernel.
>>
>>Why? All it has to do is grant run permissions and look at the big
>>picture. It certainly wouldn't do I/O or networking or file
>>management. If memory allocation becomes a burden, it can set up four
>>(or fourteen) memory-allocation cores and let them do the crunching.
>>Why multi-thread *anything* when hundreds or thousands of CPUs are
>>available?
>>
>>Using multicore properly will require undoing about 60 years of
>>thinking, 60 years of believing that CPUs are expensive.
>>
>>John
>
>Ah, and this all reminds me about when 'object oriented programming' was going to
>change everything.
>It did lead to such language disasters as C++ (and of course MS went for it),
>where the compiler writers at one time did not even know how to implement things.
>Now the next big thing is 'think an object for every core' LOL.
>Days of future wasted.
>All the little things have to communicate and deliver data at the right time to the right place.
>Sounds a bit like Intel made a bigger version of Cell.
>And Cell is a beast to program (for optimum speed).

Then stop thinking about optimum speed. Start thinking about a
computer system that doesn't crash, can't get viruses or trojans, is
easy to understand and use, that not even a rogue device driver can
bring down.

Think about how to manage a chip with 1024 CPUs. Hurry, because it
will be reality soon. We have two choices: make existing OS's
unspeakably more tangled, or start over and do something simple.

Speed will be a side effect, almost by accident.


John


From: Jan Panteltje on
On a sunny day (Thu, 07 Aug 2008 08:39:21 -0700) it happened John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
<og5m94d6nniumi8itc03c5ltf60n1sle6c(a)4ax.com>:

>>>Using multicore properly will require undoing about 60 years of
>>>thinking, 60 years of believing that CPUs are expensive.
>>>
>>>John
>>
>>Ah, and this all reminds me about when 'object oriented programming' was going to
>>change everything.
>>It did lead to such language disasters as C++ (and of course MS went for it),
>>where the compiler writers at one time did not even know how to implement things.
>>Now the next big thing is 'think an object for every core' LOL.
>>Days of future wasted.
>>All the little things have to communicate and deliver data at the right time to the right place.
>>Sounds a bit like Intel made a bigger version of Cell.
>>And Cell is a beast to program (for optimum speed).
>
>Then stop thinking about optimum speed. Start thinking about a
>computer system that doesn't crash, can't get viruses or trojans, is
>easy to understand and use, that not even a rogue device driver can
>bring down.

I already have those, they run Linux.
I grant you, though, that a badly behaving module can cause big problems.
Just had to reboot a couple of times to get rid of 'vloopback', which I
wanted to use to interface the Ethernet webcam with Flashplayer.
It works now: http://panteltje.com/panteltje/mcamip/#v4l_and_flash
not with the new adobe flashplayer 10 beta for Linux though....
We will almost always be one step behind I guess..


>Think about how to manage a chip with 1024 CPUs. Hurry, because it
>will be reality soon. We have two choices: make existing OS's
>unspeakably more tangled, or start over and do something simple.

If I understood the Intel press release correctly, the API of Larrabee
will be no different from a normal graphics card's, which would be nice.
They created the problem, let them write the software :-)


>Speed will be a side effect, almost by accident.

One can wonder how important speed really is for the consumer PC.
Sure, HD video, and later maybe 4096xsomething pixels will take more speed.
However, for normal HD, cheap chipsets already provide the power.
For HD video editing the speed can probably never be high enough...
but that is not only a graphics issue.

John, I dunno where it will go, but one thing I know:
It Will Not Become Simpler :-)

There is a tendency toward more and more complex structures in nature.
With us at the top perhaps, little one cell organisms at the bottom, molecules,
atoms, quarks, what not.
Self organising in a way, the best configurations make it - in time -
And what is time, we are but a dash in eternity.


>
>John
>
>
>
From: Dirk Bruere at NeoPax on
NV55 wrote:
> On Aug 5, 5:26 am, Dirk Bruere at NeoPax <dirk.bru...(a)gmail.com>
> wrote:
>> Skybuck Flying wrote:
>>> As the number of cores goes up the watt requirements goes up too ?
>>> Will we need a zillion watts of power soon ?
>>> Bye,
>>> Skybuck.
>> Since the ATI Radeon™ HD 4800 series has 800 cores you work it out.
>>
>> --
>> Dirk
>
>
> Each of the 800 "cores" in the ATI RV770 (Radeon 4800 series),
> which are simple stream processors, is not comparable to the 16,
> 24, 32 or 48 cores that will be in Larrabee. Just like they're not
> comparable to the 240 "cores" in the Nvidia GeForce GTX 280.
> Though I'm not saying you didn't realize that, just for those that
> might not have.

True, but they seem to be positioning Larrabee in the same tech segment
as video cards. Which makes sense since a SIMD system is the easiest to
program. If they want N general purpose cores doing general purpose
computing the whole thing will bog down somewhere between 16 and 32. A
lot of the R&D theory was done 30+ years ago.

Maybe they will try something radical, like an ancient data flow
architecture, but I doubt it.

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.theconsensus.org/ - A UK political party
http://www.onetribe.me.uk/wordpress/?cat=5 - Our podcasts on weird stuff
From: Robert Myers on
On Aug 7, 4:57 pm, Dirk Bruere at NeoPax <dirk.bru...(a)gmail.com>
wrote:

>
> > Each of the 800 "cores" in the ATI RV770 (Radeon 4800 series),
> > which are simple stream processors, is not comparable to the 16,
> > 24, 32 or 48 cores that will be in Larrabee. Just like they're not
> > comparable to the 240 "cores" in the Nvidia GeForce GTX 280.
> > Though I'm not saying you didn't realize that, just for those that
> > might not have.
>
> True, but they seem to be positioning Larrabee in the same tech segment
> as video cards. Which makes sense since a SIMD system is the easiest to
> program. If they want N general purpose cores doing general purpose
> computing the whole thing will bog down somewhere between 16 and 32. A
> lot of the R&D theory was done 30+ years ago.
>
> Maybe they will try something radical, like an ancient data flow
> architecture, but I doubt it.
>
"General purpose" GPU's are not really general purpose, but they
aren't doing graphics, either.

Robert.