From: ChrisQ on 10 Aug 2008 13:05

Jan Panteltje wrote:
> John Lennon:
>
> 'You know I am a dreamer' .... 'And I hope you join us someday'
>
> (well what I remember of it). You should REALLY try to program a Cell
> processor some day.
>
> Dunno what you have against programmers, there are programmers who
> are amazingly clever with hardware resources. I dunno about NT and
> MS, but IIRC MS plucked programmers from unis, and sort of
> brainwashed them then.. the result we all know.

That's just the problem - programmers have been so good at hiding the
limitations of poorly designed hardware that the whole world thinks
hardware must be perfect and needs no attention other than making it
go faster.

If you look at some modern i/o device architectures, it's obvious the
hardware engineers never gave a second thought to how the thing would
be programmed efficiently...

Chris (with embedded programmer hat on :-(
From: Jan Panteltje on 10 Aug 2008 13:24

On a sunny day (Sun, 10 Aug 2008 17:05:31 +0000) it happened ChrisQ
<blackhole(a)devnull.com> wrote in <g7n75m$vi$1(a)aioe.org>:

>Jan Panteltje wrote:
>
>> John Lennon:
>>
>> 'You know I am a dreamer' .... 'And I hope you join us someday'
>>
>> (well what I remember of it). You should REALLY try to program a Cell
>> processor some day.
>>
>> Dunno what you have against programmers, there are programmers who
>> are amazingly clever with hardware resources. I dunno about NT and
>> MS, but IIRC MS plucked programmers from unis, and sort of
>> brainwashed them then.. the result we all know.
>
>That's just the problem - programmers have been so good at hiding the
>limitations of poorly designed hardware that the whole world thinks
>that hardware must be perfect and needs no attention other than making
>it go faster.
>
>If you look at some modern i/o device architectures, it's obvious the
>hardware engineers never gave a second thought about how the thing would
>be programmed efficiently...
>
>Chris (with embedded programmer hat on :-(

Interesting. For me, I have a hardware background, but also software;
the two came together with FPGA, when I wanted to implement DES as fast
as possible. I wound up with just a bunch of gates and 1 clock cycle,
so no program :-) No loops (all unrolled in hardware).

So, you need to define some boundary between hardware resources (that
one used a lot of gates) and software resources, I think.
From: Tim Williams on 10 Aug 2008 13:29

"ChrisQ" <blackhole(a)devnull.com> wrote in message
news:g7n75m$vi$1(a)aioe.org...

> That's just the problem - programmers have been so good at hiding the
> limitations of poorly designed hardware

Is that like the crummy WinModems?

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
From: John Larkin on 10 Aug 2008 13:32

On Sun, 10 Aug 2008 10:38:01 GMT, Jan Panteltje
<pNaonStpealmtje(a)yahoo.com> wrote:

>On a sunny day (Sun, 10 Aug 2008 05:58:13 -0400) it happened Bill Todd
><billtodd(a)metrocast.net> wrote in
><1aqdnfjG5tCEJgPVnZ2dnUVZ_sTinZ2d(a)metrocastcablevision.com>:
>
>>> Just to rain a bit on your parade, in the *Linux* kernel, many years
>>> ago, the concept of 'modules' was introduced. Now device drivers are
>>> 'modules', and are, although closely connected and in the same source
>>> package, _not_ a real part of the kernel.
>>> (I am no Linux kernel expert, but it is absolutely possible to write
>>> a device driver as a module, and then, while the system is running,
>>> load that module, and unload it again.)
>>> I sort of have the feeling that your knowledge of Linux, and the
>>> Linux kernel, is very academic John, and you should really compile a
>>> kernel and play with Linux a bit to get the feel of it.
>>
>>Er, the discussion that John quoted above referred not to what is
>>compiled with the kernel but to what executes in the same protection
>>domain that the kernel does (as it is my impression Linux modules do).
>>Perhaps John is not the one who needs to develop a deeper understanding
>>here.
>
>He mentioned 'monolithic', and with modules, the Linux kernel is _not_
>monolithic. You can load a device driver as a module (after you
>configure it to be a module before compilation; the kernel config often
>gives you a choice), and then that module will even be dynamically
>loaded, including other modules it depends on, and unloaded again when
>that device is no longer used. This keeps memory usage low, and means
>you need not reboot to add a new driver.
>
>As to 'protection domain', be aware that even if you were to run device
>drivers on a different core (one for each device???) you would still
>have to move the data from one core to the other for processing, and
>how protected do you think that data is? It is all illusion: 'More
>cores will solve everything.'
>
>I wonder how many here actually use Linux, have compiled a kernel,
>written modules and applications, and can even write in C.

What does C have to do with it, other than being a contributor to the
chaos that modern computing is? More big programming projects fail than
ever make it to market. OSes are commonly shipped with hundreds or
sometimes thousands of bugs. Serious damage has been done to consumers,
business, and US national security through the criminally stupid design
of Windows. Lots of people are refusing to upgrade their apps because
the newer releases are bigger, slower, and more fragile than the older
ones. In products with hardware, HDL-based logic, and firmware, it's
nearly always the firmware that's full of bugs. If engineers can write
bug-free VHDL, which they usually do, why can't programmers write
bug-free C, which they practically never do?

Things are broken, and we need a change. Since hardware works, and
software doesn't, we need more of the former with more control over
less of the latter. Fortunately, that *will* happen, and multicore is
one of the drivers.

>I'd rather have a discussion with them than the generalised bloating
>about systems they never even had hands-on experience with.
>In that case sci.electronics.design becomes like sci.physics: a bunch
>of idiots with even more idiotic theories causing so much noise that
>the real stuff is obscured, and your chance to learn something is zero.
>This is my personal rant. I am a Linux user, have written many
>applications for it, and did some work on drivers too.
>Academic bullshit I know about too: in my first year of Information
>Technology I found an error in the text book and reported it;
>professors do not always like to be corrected, I learned that.
>There was a project you could join, about in-depth study of operating
>systems, and, since I actually wrote one, I applied for the project and
>was promptly rejected.
>Where did those guys go? Microsoft??????
>I will listen to John Larkin's theory about how safe multicore systems
>are after he writes a demo, or even shows someone else's, that cannot
>be corrupted.
>Utopia does not exist.

I have stated no theories. I have observed that the number of cores per
CPU chip is increasing radically, and that Moore's law has repartitioned
itself away from raw CPU complexity and speed into multiple, relatively
modest processors. This is happening across the range of processors:
scientific, desktop, and embedded. Are you denying that this is
happening?

If not, do you have any opinions on whether having hundreds of fairly
fast CPUs, instead of one blindingly fast one, will change OS design?
Will it change embedded app design?

If you have no opinions, and can conjecture no change, why do you get
mad at people who do, and can? Why do you post in a group that has
"design" in its name? Maybe you should start and moderate
sci.electronics.tradition.

John
From: Jan Panteltje on 10 Aug 2008 13:48
On a sunny day (Sun, 10 Aug 2008 10:32:17 -0700) it happened John Larkin
<jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
<k38u949ebukbdr3hi4ql3vm9720rlv7bt3(a)4ax.com>:

>What does C have to do with it, other than being a contributor to the
>chaos that modern computing is? [...]
>
>Things are broken, and we need a change. Since hardware works, and
>software doesn't, we need more of the former with more control over
>less of the latter. Fortunately, that *will* happen, and multicore is
>one of the drivers.
>
>I have stated no theories. I have observed that the number of cores per
>CPU chip is increasing radically, that Moore's law has repartitioned
>itself away from raw CPU complexity and speed into multiple, relatively
>modest processors. That this is happening across the range of
>processors, scientific and desktop and embedded. Are you denying that
>this is happening?
>
>If not, do you have any opinions on whether having hundreds of fairly
>fast CPUs, instead of one blindingly-fast one, will change OS design?
>Will it change embedded app design?
>
>If you have no opinions, and can conjecture no change, why do you get
>mad at people who do, and can? Why do you post in a group that has
>"design" in its name? Maybe you should start and moderate
>sci.electronics.tradition.
>
>John

Hi John,

Electronics design is not (!= in C ;-) ) software design.

Just stating there will be more cores on a chip is obvious; we have
known that for years. Stating that more cores will improve
_reliability_ (in the widest sense of the word), as you seem to (at
least that is what I understand from your postings), puts the burden of
proof on you.

You call software bad, yet you claim your own small asm programs are
perfect; this makes one suspicious.

There is a lot of good software. I would say that software that does
what it is intended to do, and does that without crashing, is good
software.
If that software runs on good hardware you can do a lot with it.

All the problems with MS operating systems are alien to me; the last MS
OS I bought was Win98SE. I still have it on a PC, and it does
occasionally misbehave; I use it for my Canon scanner, and DVD layout
sometimes. I will not go online with it.....

All other things run various versions / distributions of Linux. I think
I have tried most of these, and all but RatHead worked OK.

So I do not really see your problem: things do not crash, the software
I wrote myself does not crash, things do not get infected with trojans,
viruses, worms, or other things... I have a very good firewall
(iptables) and the latest DNS fixes; this server has now been running
since 2004, still with the same Seagate hard disk... What is your
problem?

As to computer languages, the portability of C will help you out big
time once you want to run that same stable application on, say, a MIPS
platform, or any other processor. Re-writing your code in ASM for each
new platform is asking for bugs, so C is a universal solution.
Especially for more complex programs. AND operating systems.