From: Bill Todd on
Jan Panteltje wrote:
> On a sunny day (Sat, 09 Aug 2008 10:20:40 -0700) it happened John Larkin
> <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
> <12kr94p9sm7accdled800edbaisojuda5i(a)4ax.com>:
>
>> "First, some words about the meaning of "kernel". Operating Systems
>> can be written so that most services are moved outside the OS core and
>> implemented as processes. This OS core then becomes a lot smaller, and
>> we call it a kernel. When this kernel only provides the basic
>> services, such as basic memory management and multithreading, it is
>> called a microkernel or even nanokernel for the super-small ones. To
>> stress the difference between the
>>
>> Unix-type of OS, the Unix-like core is called a monolithic kernel. A
>> monolithic kernel provides full process management, device
>> drivers, file systems, network access etc. I will here use the word
>> kernel in the broad sense, meaning the part of the OS supervising the
>> machine."
>
>
> Just to rain a bit on your parade: in the *Linux* kernel,
> many years ago, the concept of 'modules' was introduced.
> Now device drivers are 'modules', and although closely connected, and in the same
> source package, they are _not_ a real part of the kernel.
> (I am no Linux kernel expert, but it is absolutely possible to write a device
> driver as a module, and then, while the system is running, load that module,
> and unload it again.)
> I sort of have the feeling that your knowledge of Linux, and the Linux kernel, is very academic, John,
> and you should really compile a kernel and play with Linux a bit to get
> the feel of it.

Er, the discussion that John quoted above referred not to what is
compiled with the kernel but to what executes in the same protection
domain as the kernel does (as it is my impression Linux modules do).
Perhaps John is not the one who needs to develop a deeper understanding
here.

- bill
From: Jan Panteltje on
On a sunny day (Sun, 10 Aug 2008 05:58:13 -0400) it happened Bill Todd
<billtodd(a)metrocast.net> wrote in
<1aqdnfjG5tCEJgPVnZ2dnUVZ_sTinZ2d(a)metrocastcablevision.com>:

>> Just to rain a bit on your parade: in the *Linux* kernel,
>> many years ago, the concept of 'modules' was introduced.
>> Now device drivers are 'modules', and although closely connected, and in the same
>> source package, they are _not_ a real part of the kernel.
>> (I am no Linux kernel expert, but it is absolutely possible to write a device
>> driver as a module, and then, while the system is running, load that module,
>> and unload it again.)
>> I sort of have the feeling that your knowledge of Linux, and the Linux kernel, is very academic, John,
>> and you should really compile a kernel and play with Linux a bit to get
>> the feel of it.
>
>Er, the discussion that John quoted above referred not to what is
>compiled with the kernel but to what executes in the same protection
>domain that the kernel does (as it is my impression Linux modules do).
>Perhaps John is not the one who needs to develop a deeper understanding
>here.

He mentioned 'monolithic', and with modules, the Linux kernel is _not_ monolithic.
You can load a device driver as a module (after configuring it as a module
before compilation; the kernel config often gives you the choice), and
then that module will even be loaded dynamically, including any other modules it depends on,
and unloaded again when the device is no longer in use.
This keeps memory usage low, and prevents you from having to reboot when you add a new driver.

As to 'protection domain': be aware that even if you were to run device drivers on a different core (one for each device???),
you will still have to move the data from one core to the other for processing, and
how protected do you think that data is? It is all illusion: 'More cores will solve everything.'
I wonder how many here actually use Linux, have compiled a kernel, written modules and applications,
and can even write in C.
I'd rather have a discussion with them than the generalised bloating about systems they have never even
had hands-on experience with.
In that case sci.electronics.design becomes like sci.physics: a bunch of idiots with even
more idiotic theories causing so much noise that the real stuff is obscured, and your chance of learning something
is zero.
This is my personal rant. I am a Linux user, have written many applications for it, and did some work on
drivers too.
Academic bullshit I know about too: in my first year of Information Technology I found an error in the
textbook and reported it; professors do not always like to be corrected, I learned that.
There was a project you could join, an in-depth study of operating systems, and, since I had actually
written one, I applied for the project, and was promptly rejected.
Where did those guys go? Microsoft??????
I will listen to John Larkin's theory about how safe multicore systems are after he writes a demo, or even
shows someone else's, that cannot be corrupted.
Utopia does not exist.

<EOR (= End Of Rant)>


From: Chris M. Thomasson on
"Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message
news:g7f3mq$shf$1(a)gemini.csx.cam.ac.uk...
>
> In article <PNDmk.8961$Bt6.3201(a)newsfe04.iad>,
> "Chris M. Thomasson" <no(a)spam.invalid> writes:
> |>
> |> FWIW, I have a memory allocation algorithm which can scale because its
> based
> |> on per-thread/core/node heaps:
> |>
> |> AFAICT, there is absolutely no need for memory-allocation cores. Each
> thread
> |> can have a private heap such that local allocations do not need any
> |> synchronization.
>
> Provided that you can live with the constraints of that approach.
> Most applications can, but not all.

That's a great point! It just seems that the approach could possibly be
beneficial to all sorts of applications. Could you help me out here and give
some examples of a couple of applications that simply could not tolerate the
approach at any level? By "any level" I mean allocations starting at the
lowest common denominator from their origin... that is, trying the thread-local
heap first, then the core-local heap, and so on and so forth...

I see problems, though. With mega-core systems, the per-core memory is going
to be limited indeed! It's analogous to programming a Cell with its dedicated
per-SPE memory: something like 256 KB. When an SPE's local store is
exhausted, well, DMA to the global memory is going to need to be utilized. I
know this works because I have played around with algorithms using the IBM
Cell Simulator.

http://groups.google.com/group/comp.arch/browse_frm/thread/4c97441d6704d8a1

http://groups.google.com/group/comp.arch/msg/4133f6eb8a6b5a74

programming the Cell is VERY FUN!!!!

From: Phil Hobbs on
Chris M. Thomasson wrote:
> "Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message
> news:g7f3mq$shf$1(a)gemini.csx.cam.ac.uk...
>>
>> In article <PNDmk.8961$Bt6.3201(a)newsfe04.iad>,
>> "Chris M. Thomasson" <no(a)spam.invalid> writes:
>> |>
>> |> FWIW, I have a memory allocation algorithm which can scale because
>> its based
>> |> on per-thread/core/node heaps:
>> |>
>> |> AFAICT, there is absolutely no need for memory-allocation cores.
>> Each thread
>> |> can have a private heap such that local allocations do not need any
>> |> synchronization.
>>
>> Provided that you can live with the constraints of that approach.
>> Most applications can, but not all.
>
> That's a great point! It just seems that the approach could possibly be
> beneficial to all sorts of applications. Could you help me out here and
> give some examples of a couple of applications that simply could not
> tolerate the approach at any level? When I say any level I mean
> allocations starting at the lowest common denominator from its origin... This
> being trying the thread-local heap, then the core-local heap, and so on and so
> forth...
>
> I see problems. Well, with mega-core systems, the per-core memory is
> going to be limited indeed! It's analogous to programming a Cell with its
> dedicated per-SPE memory; something like 256 KB. When the local
> allocation to an SPE is exhausted, well, DMA to the global memory is
> going to need to be utilized. I know this works because I have played
> around with algorithms using the IBM Cell Simulator.
>
> http://groups.google.com/group/comp.arch/browse_frm/thread/4c97441d6704d8a1
>
> http://groups.google.com/group/comp.arch/msg/4133f6eb8a6b5a74
>
> programming the Cell is VERY FUN!!!!

In order to maintain cache coherence, interconnect bandwidth wants to go as
the square or the cube of Moore's Law, depending on your assumptions (Rent's
rule might make it the 1.5th or 2.5th power, but not less than that). In
many-processor SMPs, that bandwidth dominates. Hence there's a move afoot
towards specialization, as in the Cell, which is a SIMD machine like an old Cray.

The cache coherence problem is a thorny one, because if full coherence is
relaxed very much,
(a) programming gets much much harder, and
(b) the range of problems that the machine can tackle efficiently drops like
a rock.

Thus I'm not sure what local storage allocation really gets you, because ISTM
it's a smallish piece of a much bigger and thornier problem.

There are intermediate design points, such as an MxN-way system, with M N-way
SMPs. If N stops scaling, that makes the cache coherence problem easier and
saves interconnect power.

As the old saying goes, computer design is 'bottleneckology'.

Cheers,

Phil Hobbs
From: Jan Panteltje on
On a sunny day (Sun, 10 Aug 2008 15:02:40 GMT) it happened Jan Panteltje
<pNaonStpealmtje(a)yahoo.com> wrote in <g7mvuk$2mc$1(a)aioe.org>:

And for the others: Sony was to have two HDMI ports on the PS3, which
should have made for interesting experiments.

But the real PS3 only had one, so I decided to skip the Sony product
(most Sony products I have bought in the past were really bad, actually).
And Linux you can run on anything (and it runs on anything); for less than
the cost of a PS3 you can assemble a good PC, so if you must run Linux,
why torture yourself on a PS3? Use a real computer.

But perhaps if you are one of those gamers... well,
the video modes also suck on that thing. And the power consumption is high,
not green at all, and it does not have that nice Nintendo remote.
:-)))))))))))))))))))))))))))))))))))))))))
