From: John Larkin on
On Fri, 08 Aug 2008 10:03:28 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

>John Larkin wrote:
>> On Thu, 07 Aug 2008 14:51:57 GMT, Jan Panteltje
>> <pNaonStpealmtje(a)yahoo.com> wrote:
>>
>>> On a sunny day (Thu, 07 Aug 2008 07:08:52 -0700) it happened John Larkin
>>> <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>>> <d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com>:
>>>
>>>>> Been there - done that :-)
>>>>>
>>>>> That is precisely how the early SMP systems worked, and it works
>>>>> for dinky little SMP systems of 4-8 cores. But the kernel becomes
>>>>> the bottleneck for many workloads even on those, and it doesn't
>>>>> scale to large numbers of cores. So you HAVE to multi-thread the
>>>>> kernel.
>
>>>> Why? All it has to do is grant run permissions and look at the big
>>>> picture. It certainly wouldn't do I/O or networking or file
>>>> management. If memory allocation becomes a burden, it can set up four
>>>> (or fourteen) memory-allocation cores and let them do the crunching.
>>>> Why multi-thread *anything* when hundreds or thousands of CPUs are
>>>> available?
>>>>
>>>> Using multicore properly will require undoing about 60 years of
>>>> thinking, 60 years of believing that CPUs are expensive.
>
>Thinking multicore properly might yield some advantages on certain types
>of problem. But these are not the sort of problems most domestic users
>of PCs actually have. It could be useful for 3D gaming, but even there
>it still makes sense to split the load across specialised dedicated
>video CPUs using fancy memory and generics doing the grunt work.
>
>>> Ah, and this all reminds me about when 'object oriented programming' was going to
>>> change everything.
>>> It did lead to such language disasters as C++ (and of course MS went for it),
>>> where the compiler writers at one time did not even know how to implement things.
>>> Now the next big thing is 'think an object for every core' LOL.
>>> Days of future wasted.
>>> All the little things have to communicate and deliver data at the right time to the right place.
>>> Sounds a bit like Intel made a bigger version of Cell.
>>> And Cell is a beast to program (for optimum speed).
>>
>> Then stop thinking about optimum speed. Start thinking about a
>> computer system that doesn't crash, can't get viruses or trojans, is
>> easy to understand and use, that not even a rogue device driver can
>> bring down.
>
>How exactly does your wonder architecture prevent the muppet at the
>keyboard clicking on the canonical Trojan that starts two new threads
>and grabs IO and memory resources at random?

The browser runs on one or more CPUs that are totally sandboxed: they
are limited in what files they can access, how much memory they can
use, everything. Those CPUs can even crash and not take down anything
but the browser. The TCP/IP stack, the file system, the user graphics,
the disk I/O... all are separate, supervised, trusted processes, each
with its own CPU.
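
A rough sketch of that supervise-and-restart idea, in plain POSIX
rather than any hypothetical kernel API (a real design would also pin
the worker to its own CPU, cap its memory, and restrict its file
access):

    /* Supervisor restarts an untrusted service when it dies, so a
       crash in the service kills only the service's own process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static void untrusted_service(void)  /* e.g. the browser's CPU */
    {
        pause();                         /* stand-in for real work */
        _exit(0);
    }

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) { perror("fork"); return 1; }
            if (pid == 0)
                untrusted_service();     /* child: the sandboxed side */
            int status;
            waitpid(pid, &status, 0);    /* parent: supervise */
            fprintf(stderr, "service died (status %d); restarting\n",
                    status);
            sleep(1);                    /* don't spin on crash loops */
        }
    }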

>
>Oh it dies when the number of processes running N > 1024 (about 10x
>start process latency after he hits return). Fabulous!

Oh, one *would* have to assume that the system was not coded by
morons. I forgot to specify that.



>
>The only way your idea will satisfy your idealised goals is if it
>remains in fantasy land. Once you apply power it is vulnerable to all
>the usual human factors Trojan attacks no matter how robust the hardware.


Our FPGAs have dozens of times the number-crunching power of a Core
Duo, and never get trojans, memory leaks, or any of that.

>>
>> Think about how to manage a chip with 1024 CPUs. Hurry, because it
>> will be reality soon. We have two choices: make existing OS's
>> unspeakably more tangled, or start over and do something simple.
>
>There are tight, robust OSs but Windows is not one of them. On the
>desktop Linux is a lot closer, and Apple's OS X is pretty good too.
>And IBM's ill-fated OS/2 was good in its day (sadly 'Doze won). The PC
>world could easily have been different had technological superiority won
>the day (instead of glitzy GUI froth and BSODs).
>
>When you have 1024 CPUs you have 1024x1023 (virtual) communications
>paths to other CPUs. Unless you are very careful it is easy to end up
>with interprocess communications that are like wading through treacle.

Any OS has a zillion inter-process paths. The difference would be that
they are designed by engineers, who can manage things like this,
instead of programmers, who can't.


>
>You might find the following research article interesting.
>http://www2.lifl.fr/west/publi/MDL+07frontiers.pdf
>
>The Connection Machine at MIT goes back to the early '80s and was
>influenced by Lisp in its early incarnations.
>
>http://en.wikipedia.org/wiki/Connection_Machine
>>
>> Speed will be a side effect, almost by accident.
>
>If you really think large numbers of CPUs will give some advantage in
>home computers why not write a program to emulate them using the latest
>Pentium virtualisation instructions. There may already be an academic
>implementation of this model - I haven't looked. It would be useful for
>teaching purposes if nothing else. And it might help disabuse you of
>some of your wilder notions on this subject.


Sorry, I have a day job.

So what do you think OS's will look like 10 years from now, when even
home computers run on chips with hundreds of cores? Still one gigantic
VMS/Mach/NT descendant running on one CPU, thrashing and piping all
over the place, doing everything, still vulnerable to viruses and
application bugs, still mixing scheduling and virtual memory
management and file systems and running PowerPoint with serial port
interrupts? And those other cores stay idle unless you play a game?

Things will never change? We'll always use 1980's OS architectures?

John


From: John Larkin on
On Thu, 7 Aug 2008 07:44:19 -0700, "Chris M. Thomasson"
<no(a)spam.invalid> wrote:

>
>"Chris M. Thomasson" <no(a)spam.invalid> wrote in message
>news:PNDmk.8961$Bt6.3201(a)newsfe04.iad...
>> "John Larkin" <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>> message news:d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com...
>[...]
>>> Using multicore properly will require undoing about 60 years of
>>> thinking, 60 years of believing that CPUs are expensive.
>>
>> The bottleneck is the cache-coherency system.
>
>I meant to say:
>
>/One/ bottleneck is the cache-coherency system.
>
>

I think the trend is to have the cores surround a common shared cache;
a little local memory (and cache, if the local memory is slower for
some reason) per CPU wouldn't hurt.

Cache coherency is simple if you don't insist on flat-out maximum
performance. What we should insist on is flat-out unbreakable systems,
and buy better silicon to get the performance back if we need it.
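
One way to see the coherency cost directly, as a sketch in C with
pthreads (assumes gcc on a multicore machine; build with
gcc -O2 demo.c -lpthread, plus -lrt on older glibc):

    /* Two threads bump counters sharing one cache line, then counters
       padded onto separate lines. The padded run is usually several
       times faster because the line stops ping-ponging between the
       cores' caches. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define N 100000000L

    struct { volatile long a, b; } same;               /* one line  */
    struct { volatile long a; char pad[64];
             volatile long b; } apart;                 /* two lines */

    static void *bump(void *arg)
    {
        volatile long *c = arg;
        for (long i = 0; i < N; i++)
            (*c)++;
        return NULL;
    }

    static void race(volatile long *x, volatile long *y,
                     const char *tag)
    {
        struct timespec t0, t1;
        pthread_t a, b;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&a, NULL, bump, (void *)x);
        pthread_create(&b, NULL, bump, (void *)y);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%s %.2f s\n", tag, (t1.tv_sec - t0.tv_sec)
                                 + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    }

    int main(void)
    {
        race(&same.a,  &same.b,  "same line:");
        race(&apart.a, &apart.b, "padded:   ");
        return 0;
    }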

I'm reading Showstopper!, the story of the development of NT. It's a
great example of why we need a different way of thinking about OS's.

Silicon is going to make that happen, finally free us of the tyranny
of CPU-as-precious-resource. A lot of programmers aren't going to like
this.

John


From: Jan Panteltje on
On a sunny day (Fri, 08 Aug 2008 08:54:36 -0700) it happened John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
<8v4m945fbcvrln66tqars905r3p18sgdfk(a)4ax.com>:

>>/One/ bottleneck is the cache-coherency system.
>>
>>
>
>I think the trend is to have the cores surround a common shared cache;
>a little local memory (and cache, if the local memory is slower for
>some reason) per CPU wouldn't hurt.
>
>Cache coherency is simple if you don't insist on flat-out maximum
>performance. What we should insist on is flat-out unbreakable systems,
>and buy better silicon to get the performance back if we need it.
>
>I'm reading Showstopper!, the story of the development of NT. It's a
>great example of why we need a different way of thinking about OS's.
>
>Silicon is going to make that happen, finally free us of the tyranny
>of CPU-as-precious-resource. A lot of programmers aren't going to like
>this.
>
>John

John Lennon:

'You know I am a dreamer'
.....
' And I hope you join us someday'

(well what I remember of it).
You should REALLY try to program a Cell processor some day.

Dunno what you have against programmers; there are programmers who
are amazingly clever with hardware resources.
I dunno about NT and MS, but IIRC MS plucked programmers from the
unis and sort of brainwashed them... the result we all know.


From: Martin Brown on
John Larkin wrote:
> On Thu, 7 Aug 2008 07:44:19 -0700, "Chris M. Thomasson"
> <no(a)spam.invalid> wrote:
>
>> "Chris M. Thomasson" <no(a)spam.invalid> wrote in message
>> news:PNDmk.8961$Bt6.3201(a)newsfe04.iad...
>>> "John Larkin" <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>>> message news:d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com...
>> [...]
>>>> Using multicore properly will require undoing about 60 years of
>>>> thinking, 60 years of believing that CPUs are expensive.

>>> The bottleneck is the cache-coherency system.
>> I meant to say:
>>
>> /One/ bottleneck is the cache-coherency system.
>
> I think the trend is to have the cores surround a common shared cache;
> a little local memory (and cache, if the local memory is slower for
> some reason) per CPU wouldn't hurt.

For small N this can be made to work very nicely.
>
> Cache coherency is simple if you don't insist on flat-out maximum
> performance. What we should insist on is flat-out unbreakable systems,
> and buy better silicon to get the performance back if we need it.

Existing cache hardware on Pentiums still isn't quite good enough. Try
probing its memory with large power-of-two strides and you fall over a
performance limitation caused by the cheap and cheerful way it uses the
lower address bits for cache associativity. See Steven Johnson's post
in the FFT Timing thread.
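
A crude C probe of that effect (illustrative only; the numbers depend
on the particular cache):

    /* ROWS addresses a power-of-two stride apart alias into the same
       cache set on designs that index with the low address bits, so
       the 4096-byte walk thrashes a few ways of L1, while the odd
       stride spreads across sets and runs noticeably faster. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ROWS  64
    #define TRIPS 200000

    static double walk(volatile char *buf, size_t stride)
    {
        clock_t t0 = clock();
        for (int t = 0; t < TRIPS; t++)
            for (int i = 0; i < ROWS; i++)
                buf[(size_t)i * stride]++;   /* one line per "row" */
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        volatile char *buf = calloc(ROWS, 4096 + 64);
        if (!buf) return 1;
        printf("stride 4096:    %.2f s\n", walk(buf, 4096));
        printf("stride 4096+64: %.2f s\n", walk(buf, 4096 + 64));
        free((void *)buf);
        return 0;
    }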
>
> I'm reading Showstopper!, the story of the development of NT. It's a
> great example of why we need a different way of thinking about OS's.

If it is anything like the development of OS/2 you get to see very
bright guys reinvent things from scratch that were already known in the
mini and mainframe world (sometimes with the same bugs and quirks as the
first iteration of big iron code suffered from).

NT 3.51 was a particularly good vintage. After that bloatware set in.
>
> Silicon is going to make that happen, finally free us of the tyranny
> of CPU-as-precious-resource. A lot of programmers aren't going to like
> this.

CPU cycles are cheap and getting cheaper; human cycles are expensive
and getting more expensive. But that also says we should be using
better tools and languages to manage the hardware.

Unfortunately, chasing time-to-market advantage tends to produce
less-than-robust applications with pretty interfaces and fragile
internals. You can, after all, send out code patches over the Internet
all too easily ;-)

Since people buy the stuff even with all its faults (I would not wish
Vista on my worst enemy, by the way), the market rules, and market
forces are never wrong...

Most of what you are claiming as advantages of separate CPUs can be
achieved just as easily with hardware support for protected user memory
and security privilege rings. It is more likely that virtualisation of
single, dual or quad cores will become common in domestic PCs.
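
That protection hardware is already reachable from user code; a
minimal POSIX illustration (Linux/BSD):

    /* The MMU, not kernel policy, stops the final write; the process
       dies with SIGSEGV at the marked line. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t pg = (size_t)sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;
        strcpy(p, "secret");
        mprotect(p, pg, PROT_READ);   /* page is now read-only */
        printf("read still works: %s\n", p);
        p[0] = 'X';                   /* hardware fault: SIGSEGV here */
        return 0;
    }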

There was a Pentium exploit documented against some brands of Unix,
e.g.
http://www.ssi.gouv.fr/fr/sciences/fichiers/lti/cansecwest2006-duflot.pdf

Loads of physical CPUs just create a different set of complexity
problems. And they are a pig to program efficiently.

Regards,
Martin Brown
From: Martin Brown on
John Larkin wrote:
>
> So what do you think OS's will look like 10 years from now, when even
> home computers run on chips with hundreds of cores? Still one gigantic
> VMS/Mach/NT descendant running on one CPU, thrashing and piping all
> over the place, doing everything, still vulnerable to viruses and
> application bugs, still mixing scheduling and virtual memory
> management and file systems and running PowerPoint with serial port
> interrupts?

A lot of I/O is concentrated by the bridge hardware these days. And
serial ports have had moderate-to-large FIFOs for about a decade.

XP runs quite happily on my dual core. Vista runs less happily on my new
Toshiba portable and I will never recommend using it to anyone.

> And those other cores stay idle unless you play a game?

I can see a case for cores allocated to processes with highest demand
for resources, but I do not believe it makes any sense to have one
thread per core with a properly designed secure operating system.
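
Handing a demanding process its own core is expressible today; a small
Linux-specific sketch using glibc's affinity interface:

    /* Pin the calling process to one core, the way a scheduler might
       hand a demanding process a CPU of its own. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(3, &mask);               /* core 3, chosen arbitrarily */
        if (sched_setaffinity(0, sizeof mask, &mask) != 0)
            perror("sched_setaffinity");
        /* ... the demanding work now runs on core 3 only ... */
        return 0;
    }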

In exactly the same sense as you claim for your magical hardware
architecture, a properly designed secure OS would be just as secure.

I could be persuaded that Mickeysoft leave 'Doze vulnerable to avoid
putting the AV people out of business (that would be anti-competitive).
>
> Things will never change? We'll always use 1980's OS architectures?

Sadly I suspect that might well be the case until some compelling
reason to change comes along. Do you not remember how long the delay
was before there were 32-bit consumer-grade OS's for the early 386 PCs?

Regards,
Martin Brown