From: chrisv on
Peter Köhlmann wrote:

> Hadron quacked:
>>
>> Those "super computers" use, err, a re-tooled Linux.

Oh, were those "re-tooled" versions of Linux approved by your "OSS
Culling Committee", Hadron Quack?

We know that you're a Linux-hating Micro$oft fanboi, "Hadron", but
would you care to list the versions of Linux that could *not* be
described as "re-tooled"?

Gosh, it seems that, "re-tooled" or not, it's still Linux, "Hadron".

>Pray tell, Hadron Snot Larry Quark.
>
>Tell us more. We are all ears. "Re-tooled". Interesting.
>
>> Surely you're not
>> so dim as to not realise that. Hint: if Koehlmann mentions something
>> even remotely technical treat it like a turd on your doorstep. Do not
>> embrace it and wave it around.
>
>Another fine "true linux advocacy post" from the
>"true linux advocate", "kernel hacker", "./configure hero", "emacs user",
>"swapfile expert", "X specialist", "CUPS guru", "USB-disk server admin",
>"defragger professional", "newsreader magician", "hardware maven", "time
>coordinator", "email sage", "tripwire wizard", "Pulseaudio rockstar",
>"XORG sorcerer", "filesystem pro", "Nathans second chance evangelist" and
>"OSS culling committee chairman" Hadron Quark, aka Hans Schneider, aka
>Richard, aka Damian O'Leary, aka Steve Townsend, aka Ubuntu King

Leave it to "true Linux advocate" Hadron Quark to downplay the great
power of Free and Open Source Software - the ability to build upon the
work of others and create a tool that is "perfect" for the task.

From: David Schwartz on
On Mar 22, 2:11 am, Penang <kalamb...(a)gmail.com> wrote:

> In this approach, the operating system would no longer resemble the
> kernel mode of today's OSes, but rather act more like a hypervisor. A
> concept from virtualization, a hypervisor acts as a layer between the
> virtual machine and the actual hardware.
>
> The programs themselves would take on many of the duties of resource
> management. The OS could assign an application a CPU and some memory,
> and the program itself, using metadata generated by the compiler,
> would best know how to use these resources."

That's what people already do. That's why we have the words
"hypervisor" and "virtualization".

The problem is that it doesn't work very well. The individual programs
don't have enough information to make effective use of globally-shared
resources.

Plus, if the goal is to use all the cores effectively, what possible
sense could it make to dedicate a core to a process that may or may
not be able to use 100% of it?! To make effective use of large numbers
of cores, you need a highly-developed ability to quickly reassign CPU
resources to where they are needed. Dedicating cores is a huge step in
the wrong direction.
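
For concreteness, "dedicating" a core in practice means hard-pinning a
process to it, e.g. with Linux's sched_setaffinity(2). A minimal
illustrative sketch of the rigidity being argued against (the core
number is arbitrary):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(2, &mask);  /* allow this process to run on core 2 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* From here on the kernel may schedule us only on core 2, even
       when every other core sits idle; an unpinned process would be
       migrated to wherever cycles are free. */
    return EXIT_SUCCESS;
}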

DS
From: TomB on
["Followup-To:" header set to comp.os.linux.advocacy.]
On 2010-03-22, the following emerged from the brain of David Schwartz:
> On Mar 22, 2:11 am, Penang <kalamb...(a)gmail.com> wrote:
>
>> In this approach, the operating system would no longer resemble the
>> kernel mode of today's OSes, but rather act more like a hypervisor.
>> A concept from virtualization, a hypervisor acts as a layer between
>> the virtual machine and the actual hardware.
>>
>> The programs themselves would take on many of the duties of
>> resource management. The OS could assign an application a CPU and
>> some memory, and the program itself, using metadata generated by
>> the compiler, would best know how to use these resources."
>
> That's what people already do. That's why we have the words
> "hypervisor" and "virtualization".
>
> The problem is that it doesn't work very well. The individual
> programs don't have enough information to make effective use of
> globally-shared resources.
>
> Plus, if the goal is to use all the cores effectively, what possible
> sense could it make to dedicate a core to a process that may or may
> not be able to use 100% of it?! To make effective use of large
> numbers of cores, you need a highly-developed ability to quickly
> reassign CPU resources to where they are needed. Dedicating cores is
> a huge step in the wrong direction.

Absolutely agreed. It would kind of take us back to the W2k days of
SMP, where the second processor/core would only be used for user
interface processing.

--
Three may keep a secret, if two of them are dead.
~ Benjamin Franklin
From: Tim Roberts on
Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:
>
>SMP was a new and interesting problem about twenty years ago.

Twice that -- Seymour Cray was doing SMP with the CDC 6600 in 1964.
--
Tim Roberts, timr(a)probo.com
Providenza & Boekelheide, Inc.
From: Joe Pfeiffer on
Tim Roberts <timr(a)probo.com> writes:

> Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:
>>
>>SMP was a new and interesting problem about twenty years ago.
>
> Twice that -- Seymour Cray was doing SMP with the CDC 6600 in 1964.

6500 (two 6400 CPUs) or 6700 (not really symmetric; a 6400 and a 6600).
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)