From: Bernd Paysan on 13 Aug 2008 08:05

John Larkin wrote:
> A lot of hardware sorts of stuff, like tcp/ip stack accelerators,
> could be done in a dedicated cpu. Sort of like using a PIC to blink an
> LED. Part of the channel-controller thing was driven by not wanting to
> burden an expensive CPU with scut work and interrupts and context
> switching overhead. All that stops mattering when cpu's are free. Of
> course, disk controllers and graphics processors would still be
> needed, but simpler ones and fewer of them.

Come on, when CPUs are almost free, dedicated IO CPUs are still a lot
cheaper. You can have more of them in the same die area. They might still
have the same basic instruction set, just with different performance
tradeoffs. You might put a few fast cores on the die, which give you
maximum performance for single-threaded applications. Then, you put a
number of slower cores on it, for maximum multi-threaded performance. And
then, another even slower and simpler type of core for IO. When cores are
cheap, it makes sense to build them for their purpose.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Wilco Dijkstra on 13 Aug 2008 09:40

"Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message
news:g7u4e6$8k0$1(a)gemini.csx.cam.ac.uk...
>
> In article <aqook.200536$IP7.138587(a)newsfe16.ams2>,
> "Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:
> |>
> |> It's certainly true the C standard is one of the worst specified. However most
> |> compiler writers agree about the major omissions and platforms have ABIs that
> |> specify everything else needed for binary compatibility (that includes features
> |> like volatile, bitfield details etc). So things are not as bad in reality.
>
> Er, no. I have a LOT of experience with serious code porting, and
> am used as an expert of last resort. Most niches have their own
> interpretations of C, but none of them use the same ones, and only
> programmers with a very wide experience can write portable code.

Can you give examples of such different interpretations? There are a
few areas that people disagree about, but it often doesn't matter much.

Interestingly, most code is widely portable despite most programmers
having little understanding of portability and violating the C standard
in almost every respect.

> Note that any application that relies on ghastly kludges like
> autoconfigure is not portable, not even remotely. And the ways
> in which that horror is used (and often comments in its input)
> shows just how bad the C 'standard' is. Very often, 75% of that
> is to bypass deficiencies in the C standard.

Actually you don't need any "autoconfiguring" in C. Much of that was
needed due to badly broken non-conformant Unix compilers. I do see
such a terrible mess every now and again, with people declaring builtin
functions incorrectly as otherwise "it wouldn't compile on compiler X"...

Properly sized types like int32_t have finally been standardized, so the
only configuration you need is the selection between the various
extensions that have not yet been standardized (although things like
__declspec are widely accepted nowadays).

> A simple question: have you ever ported a significant amount of
> code (say, > 250,000 lines in > 10 independent programs written
> by people you have no contact with) to a system with a conforming
> C system, based on different concepts to anything the authors
> were familiar with? I have.

I've done a lot of porting and know most of the problems. It's not nearly
as bad as you claim. Many "porting" issues are actually caused by bugs
and limitations in the underlying OS. I suggest that your experience is
partly colored by the fact that people ask you as a last resort.

Wilco
From: John Larkin on 13 Aug 2008 10:07

On Tue, 12 Aug 2008 20:50:06 -0700, JosephKK <quiettechblue(a)yahoo.com>
wrote:

>On Mon, 11 Aug 2008 10:05:01 -0700, John Larkin
><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote:
>
>>On Mon, 11 Aug 2008 15:00:39 GMT, Jan Panteltje
>><pNaonStpealmtje(a)yahoo.com> wrote:
>>
>>>On a sunny day (Mon, 11 Aug 2008 07:45:08 -0700) it happened John Larkin
>>><jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>>><iij0a4ho0867ufr82loiq6epj3b23u8svr(a)4ax.com>:
>>>
>>>>I've got a small software project going now to write a new material
>>>>control/BOM/parts database system. The way I've found to keep the bugs
>>>>under reasonable control is to use one programmer, one programmer
>>>>supervisor, and four testers.
>>>>
>>>>John
>>>
>>>postgreSQL with phpPgAdmin as frontend here, web based.
>>>What bugs? Yes the bugs in my SQL :-)
>>
>>A single database file, of fixed-length records, programmed in
>>PowerBasic, direct linear search to look things up. Fast as hell and
>>bulletproof.
>
>Only for fairly small numbers of records, not more than a few
>thousand. Try it with a modest database with say several million
>records. Big databases have several relational files with over a
>billion records. Data warehouses hit trillions of records in high
>tens of thousands of files and more. PowerBasic simply will not scale
>that high, nor will linear searches.

Right now, we have about 4800 different parts in stock, and about 600
parts lists (BOMs). Why use SQL on that? Why make a monster out of a
simple problem? Searches are so fast you can't see them happen, and
there's no database maintenance, no linked lists to get tangled, no
index files.

John
From: Nick Maclaren on 13 Aug 2008 10:32

In article <6vBok.60819$Gh7.57365(a)newsfe15.ams2>,
"Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:
|>
|> > |> It's certainly true the C standard is one of the worst specified. However most
|> > |> compiler writers agree about the major omissions and platforms have ABIs that
|> > |> specify everything else needed for binary compatibility (that includes features
|> > |> like volatile, bitfield details etc). So things are not as bad in reality.
|> >
|> > Er, no. I have a LOT of experience with serious code porting, and
|> > am used as an expert of last resort. Most niches have their own
|> > interpretations of C, but none of them use the same ones, and only
|> > programmers with a very wide experience can write portable code.
|>
|> Can you give examples of such different interpretations? There are a
|> few areas that people disagree about, but it often doesn't matter much.

It does as soon as you switch on serious optimisation, or use a CPU with
unusual characteristics; both are common in HPC and rare outside it.
Note that compilers like gcc do not have any options that count as
serious optimisation.

I could send you my Objects diatribe, unless you already have it, which
describes one aspect. You can also add anything involving sequence
points (including functions in the library that may be implemented as
macros), anything involving alignment, when a library function must
return an error (if ever) and when it is allowed to flag no error and go
bananas. And more.

|> Interestingly most code is widely portable despite most programmers
|> having little understanding about portability and violating the C standard in
|> almost every respect.

That is completely wrong, as you will discover if you ever need to port
to a system that isn't just a variant of one you are familiar with.
Perhaps 1% of even the better 'public domain' sources will compile and
run on such systems - I got a lot of messages from people flabbergasted
that my C did.

|> Actually you don't need any "autoconfiguring" in C. Much of that was
|> needed due to badly broken non-conformant Unix compilers. I do see
|> such terrible mess every now and again, with people declaring builtin
|> functions incorrectly as otherwise "it wouldn't compile on compiler X"...

Many of those are actually defects in the standard, if you look more
closely.

|> Properly sized types like int32_t have finally been standardized, so the
|> only configuration you need is the selection between the various extensions
|> that have not yet been standardized (although things like __declspec are
|> widely accepted nowadays).

"Properly sized types like int32_t", forsooth! Those abominations are
precisely the wrong way to achieve portability over a wide range of
systems or over the long term. I shall be dead and buried when the
64->128 change hits, but people will discover their error then, oh, yes,
they will!

int32_t should be used ONLY for external interfaces, and it doesn't help
with them because it doesn't specify the endianness or overflow
handling. And not all interfaces are the same. All internal types
should be selected as to their function - e.g. array indices, file
pointers, hash code values or whatever - so that they will match the
system's properties. As in Fortran, K&R C etc.

|> > A simple question: have you ever ported a significant amount of
|> > code (say, > 250,000 lines in > 10 independent programs written
|> > by people you have no contact with) to a system with a conforming
|> > C system, based on different concepts to anything the authors
|> > were familiar with? I have.
|>
|> I've done a lot of porting and know most of the problems. It's not nearly
|> as bad as you claim. Many "porting" issues are actually caused by bugs
|> and limitations in the underlying OS. I suggest that your experience is
|> partly colored by the fact that people ask you as a last resort.

Partly, yes. But I am pretty certain that my experience is a lot wider
than yours. I really do mean different CONCEPTS - start with IBM MVS and
move on to a Hitachi SR2201, just during the C era. Note that I was
involved in both the C89 and C99 standardisation process; and the BSI
didn't vote "no" for no good reason.

Regards,
Nick Maclaren.
From: Joerg on 13 Aug 2008 12:18
John Larkin wrote:
> On Tue, 12 Aug 2008 20:50:06 -0700, JosephKK <quiettechblue(a)yahoo.com>
> wrote:
>
>> On Mon, 11 Aug 2008 10:05:01 -0700, John Larkin
>> <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote:
>>
>>> On Mon, 11 Aug 2008 15:00:39 GMT, Jan Panteltje
>>> <pNaonStpealmtje(a)yahoo.com> wrote:
>>>
>>>> On a sunny day (Mon, 11 Aug 2008 07:45:08 -0700) it happened John Larkin
>>>> <jjlarkin(a)highNOTlandTHIStechnologyPART.com> wrote in
>>>> <iij0a4ho0867ufr82loiq6epj3b23u8svr(a)4ax.com>:
>>>>
>>>>> I've got a small software project going now to write a new material
>>>>> control/BOM/parts database system. The way I've found to keep the bugs
>>>>> under reasonable control is to use one programmer, one programmer
>>>>> supervisor, and four testers.
>>>>>
>>>>> John
>>>> postgreSQL with phpPgAdmin as frontend here, web based.
>>>> What bugs? Yes the bugs in my SQL :-)
>>> A single database file, of fixed-length records, programmed in
>>> PowerBasic, direct linear search to look things up. Fast as hell and
>>> bulletproof.
>> Only for fairly small numbers of records, not more than a few
>> thousand. Try it with a modest database with say several million
>> records. Big databases have several relational files with over a
>> billion records. Data warehouses hit trillions of records in high
>> tens of thousands of files and more. PowerBasic simply will not scale
>> that high, nor will linear searches.
>
> Right now, we have about 4800 different parts in stock, and about 600
> parts lists (BOMs). Why use SQL on that? Why make a monster out of a
> simple problem? Searches are so fast you can't see them happen, and
> there's no database maintenance, no linked lists to get tangled, no
> index files.
>

Way to go. Although Access could also do that with next to nothing in
programming effort. Just set up the fields, some practical queries, the
reports you regularly need, done.
--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.