From: JosephKK on 9 Aug 2008 12:52

On Fri, 08 Aug 2008 18:03:09 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

>John Larkin wrote:
>> On Thu, 7 Aug 2008 07:44:19 -0700, "Chris M. Thomasson"
>> <no(a)spam.invalid> wrote:
>>
>>> "Chris M. Thomasson" <no(a)spam.invalid> wrote in message
>>> news:PNDmk.8961$Bt6.3201(a)newsfe04.iad...
>>>> "John Larkin" <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>>>> message news:d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com...
>>> [...]
>>>>> Using multicore properly will require undoing about 60 years of
>>>>> thinking, 60 years of believing that CPUs are expensive.
>
>>>> The bottleneck is the cache-coherency system.
>>> I meant to say:
>>>
>>> /One/ bottleneck is the cache-coherency system.
>>
>> I think the trend is to have the cores surround a common shared cache;
>> a little local memory (and cache, if the local memory is slower for
>> some reason) per CPU wouldn't hurt.
>
>For small N this can be made to work very nicely.
>>
>> Cache coherency is simple if you don't insist on flat-out maximum
>> performance. What we should insist on is flat-out unbreakable systems,
>> and buy better silicon to get the performance back if we need it.
>
>Existing cache hardware on Pentiums still isn't quite good enough. Try
>probing its memory with large power-of-two strides and you fall over a
>performance limitation caused by the cheap and cheerful way it uses
>lower address bits for cache associativity. See Steven Johnson's post
>in the FFT Timing thread.
>>
>> I'm reading Showstopper!, the story of the development of NT. It's a
>> great example of why we need a different way of thinking about OS's.
>
>If it is anything like the development of OS/2, you get to see very
>bright guys reinvent things from scratch that were already known in the
>mini and mainframe world (sometimes with the same bugs and quirks as
>the first iteration of big-iron code suffered from).
>
>NT 3.51 was a particularly good vintage. After that, bloatware set in.
>>
>> Silicon is going to make that happen, finally free us of the tyranny
>> of CPU-as-precious-resource. A lot of programmers aren't going to like
>> this.
>
>CPU cycles are cheap and getting cheaper; human cycles are expensive
>and getting more expensive. But that also says that we should be using
>better tools and languages to manage the hardware.
>
>Unfortunately, time-to-market advantage tends to produce less than
>robust applications with pretty interfaces and fragile internals. You
>can after all send out code patches over the Internet all too easily ;-)

Yeah, to people with broadband. Back when XP SP2 came out I was still
on dial-up; MS sent me a CD for free. Consider costs like that before
spouting.

>
>Since people buy the stuff (I would not wish Vista on my worst enemy,
>by the way) even with all its faults, the market rules, and market
>forces are never wrong...
>
>Most of what you are claiming as advantages of separate CPUs can be
>achieved just as easily with hardware support for protected user memory
>and security privilege rings. It is more likely that virtualisation of
>single, dual or quad cores will become common in domestic PCs.

Why virtualize them? I can have them physically. Of course, M$ PC-style
software still cannot use them efficiently. Nor can it use 64 bits
effectively, and it usually makes poor use of SSE, SSE2, etc.

>
>There was a Pentium exploit documented against some brands of Unix. e.g.
>http://www.ssi.gouv.fr/fr/sciences/fichiers/lti/cansecwest2006-duflot.pdf
>
>Loads of physical CPUs just creates a different set of complexity
>problems. And they are a pig to program efficiently.

Mostly due to MS-DOS and follow-on-style groupthink. We have a
generation of programmers that never learned partitioning properly.

>
>Regards,
>Martin Brown
>** Posted from http://www.teranews.com **
From: JosephKK on 9 Aug 2008 13:24

On Thu, 07 Aug 2008 14:51:57 GMT, Jan Panteltje
<pNaonStpealmtje(a)yahoo.com> wrote:

>On a sunny day (Thu, 07 Aug 2008 07:08:52 -0700) it happened John Larkin
><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
><d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com>:
>
>>>Been there - done that :-)
>>>
>>>That is precisely how the early SMP systems worked, and it works
>>>for dinky little SMP systems of 4-8 cores. But the kernel becomes
>>>the bottleneck for many workloads even on those, and it doesn't
>>>scale to large numbers of cores. So you HAVE to multi-thread the
>>>kernel.
>>
>>Why? All it has to do is grant run permissions and look at the big
>>picture. It certainly wouldn't do I/O or networking or file
>>management. If memory allocation becomes a burden, it can set up four
>>(or fourteen) memory-allocation cores and let them do the crunching.
>>Why multi-thread *anything* when hundreds or thousands of CPUs are
>>available?
>>
>>Using multicore properly will require undoing about 60 years of
>>thinking, 60 years of believing that CPUs are expensive.
>>
>>John
>
>Ah, and this all reminds me of when 'object oriented programming' was
>going to change everything.
>It did lead to such language disasters as C++ (and of course MS went
>for it), where the compiler writers at one time did not even know how
>to implement things.
>Now the next big thing is 'think an object for every core' LOL.
>Days of future wasted.
>All the little things have to communicate and deliver data at the
>right time to the right place.
>Sounds a bit like Intel made a bigger version of Cell.
>And Cell is a beast to program (for optimum speed).

Part of what many others are saying is that you no longer need optimum
performance, just good performance. Good enough is the mortal enemy
of the best. This seems to be true in all areas of endeavor.

>Maybe it will work for graphics, as things are sort of fixed; would
>like to see real numbers though.
>Couple of PS3s together make great rendering; there is a demo on
>YouTube.
>

There have been many "silver bullet" fixes since the 1960s: Structured
Programming, Literate Programming, several programming languages, Rapid
Prototyping, CASE, OOA/OOD, Provable Software (in the mathematical
sense), and numerous others.

Has any of them worked? No (except in a few restricted cases).
From: Dirk Bruere at NeoPax on 9 Aug 2008 13:35

JosephKK wrote:
> On Thu, 07 Aug 2008 14:51:57 GMT, Jan Panteltje
> <pNaonStpealmtje(a)yahoo.com> wrote:
>
>> On a sunny day (Thu, 07 Aug 2008 07:08:52 -0700) it happened John Larkin
>> <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>> <d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com>:
>>
>>>> Been there - done that :-)
>>>>
>>>> That is precisely how the early SMP systems worked, and it works
>>>> for dinky little SMP systems of 4-8 cores. But the kernel becomes
>>>> the bottleneck for many workloads even on those, and it doesn't
>>>> scale to large numbers of cores. So you HAVE to multi-thread the
>>>> kernel.
>>> Why? All it has to do is grant run permissions and look at the big
>>> picture. It certainly wouldn't do I/O or networking or file
>>> management. If memory allocation becomes a burden, it can set up four
>>> (or fourteen) memory-allocation cores and let them do the crunching.
>>> Why multi-thread *anything* when hundreds or thousands of CPUs are
>>> available?
>>>
>>> Using multicore properly will require undoing about 60 years of
>>> thinking, 60 years of believing that CPUs are expensive.
>>>
>>> John
>> Ah, and this all reminds me of when 'object oriented programming' was
>> going to change everything.
>> It did lead to such language disasters as C++ (and of course MS went
>> for it), where the compiler writers at one time did not even know how
>> to implement things.
>> Now the next big thing is 'think an object for every core' LOL.
>> Days of future wasted.
>> All the little things have to communicate and deliver data at the
>> right time to the right place.
>> Sounds a bit like Intel made a bigger version of Cell.
>> And Cell is a beast to program (for optimum speed).
>
> Part of what many others are saying is that you no longer need optimum
> performance, just good performance. Good enough is the mortal enemy
> of the best. This seems to be true in all areas of endeavor.
>
>> Maybe it will work for graphics, as things are sort of fixed; would
>> like to see real numbers though.
>> Couple of PS3s together make great rendering; there is a demo on
>> YouTube.
>>
>
> There have been many "silver bullet" fixes since the 1960s: Structured
> Programming, Literate Programming, several programming languages, Rapid
> Prototyping, CASE, OOA/OOD, Provable Software (in the mathematical
> sense), and numerous others.
>
> Has any of them worked? No (except in a few restricted cases).
>

Does anyone here actually use a s/w methodology? For the most part I do
top-down and bottom-up. Basic outline first, then write the peripheral
drivers and low-level routines that I know I'm going to need. It
usually all meets up in the middle without a problem.

--
Dirk

http://www.transcendence.me.uk/ - Transcendence UK
http://www.theconsensus.org/ - A UK political party
http://www.onetribe.me.uk/wordpress/?cat=5 - Our podcasts on weird stuff
From: Jan Panteltje on 9 Aug 2008 13:48

On a sunny day (Sat, 09 Aug 2008 10:20:40 -0700) it happened John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
<12kr94p9sm7accdled800edbaisojuda5i(a)4ax.com>:

>"First, some words about the meaning of "kernel". Operating Systems
>can be written so that most services are moved outside the OS core and
>implemented as processes. This OS core then becomes a lot smaller, and
>we call it a kernel. When this kernel only provides the basic
>services, such as basic memory management and multithreading, it is
>called a microkernel, or even nanokernel for the super-small ones. To
>stress the difference from the Unix type of OS, the Unix-like core is
>called a monolithic kernel. A monolithic kernel provides full process
>management, device drivers, file systems, network access etc. I will
>here use the word kernel in the broad sense, meaning the part of the
>OS supervising the machine."

Just to rain a bit on your parade: in the *Linux* kernel, many years
ago, the concept of 'modules' was introduced. Now device drivers are
'modules', and are, although closely connected and in the same source
package, _not_ a real part of the kernel. (I am no Linux kernel expert,
but it is absolutely possible to write a device driver as a module, and
then, while the system is running, load that module, and unload it
again.)
I sort of have the feeling that your knowledge of Linux, and the Linux
kernel, is very academic, John, and you should really compile a kernel
and play with Linux a bit to get the feel of it.

>Most popular OS's (Win, Linux, Unix) are big-kernel designs, to reduce
>inter-process overhead. That makes them complex, buggy, and
>paradoxically slow.

Unix has been around for decades and got more and more perfected; Linux
and BSD are incarnations of it.
There was an old saying that went like this (correct me, hopefully
somebody knows it more precisely):
"Those who criticise Unix are bound to re-invent it."
From: John Larkin on 9 Aug 2008 17:22
On Sat, 09 Aug 2008 10:24:24 -0700, JosephKK <quiettechblue(a)yahoo.com>
wrote:

>On Thu, 07 Aug 2008 14:51:57 GMT, Jan Panteltje
><pNaonStpealmtje(a)yahoo.com> wrote:
>
>>On a sunny day (Thu, 07 Aug 2008 07:08:52 -0700) it happened John Larkin
>><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>><d10m94d7etb6sfcem3hmdl3hk8qnels3kg(a)4ax.com>:
>>
>>>>Been there - done that :-)
>>>>
>>>>That is precisely how the early SMP systems worked, and it works
>>>>for dinky little SMP systems of 4-8 cores. But the kernel becomes
>>>>the bottleneck for many workloads even on those, and it doesn't
>>>>scale to large numbers of cores. So you HAVE to multi-thread the
>>>>kernel.
>>>
>>>Why? All it has to do is grant run permissions and look at the big
>>>picture. It certainly wouldn't do I/O or networking or file
>>>management. If memory allocation becomes a burden, it can set up four
>>>(or fourteen) memory-allocation cores and let them do the crunching.
>>>Why multi-thread *anything* when hundreds or thousands of CPUs are
>>>available?
>>>
>>>Using multicore properly will require undoing about 60 years of
>>>thinking, 60 years of believing that CPUs are expensive.
>>>
>>>John
>>
>>Ah, and this all reminds me of when 'object oriented programming' was
>>going to change everything.
>>It did lead to such language disasters as C++ (and of course MS went
>>for it), where the compiler writers at one time did not even know how
>>to implement things.
>>Now the next big thing is 'think an object for every core' LOL.
>>Days of future wasted.
>>All the little things have to communicate and deliver data at the
>>right time to the right place.
>>Sounds a bit like Intel made a bigger version of Cell.
>>And Cell is a beast to program (for optimum speed).
>
>Part of what many others are saying is that you no longer need optimum
>performance, just good performance. Good enough is the mortal enemy
>of the best. This seems to be true in all areas of endeavor.

Let's change the definition of optimal code: never crashes, can't
support trojans/viruses/spyware, is easy to install, understand, and
remove.

>
>>Maybe it will work for graphics, as things are sort of fixed; would
>>like to see real numbers though.
>>Couple of PS3s together make great rendering; there is a demo on
>>YouTube.
>>
>
>There have been many "silver bullet" fixes since the 1960s: Structured
>Programming, Literate Programming, several programming languages, Rapid
>Prototyping, CASE, OOA/OOD, Provable Software (in the mathematical
>sense), and numerous others.
>
>Has any of them worked? No (except in a few restricted cases).
>

Read this

http://www.dreamingincode.com/

as a wonderful example of how broken things are.

John