From: Nick Maclaren on 14 Aug 2008 04:57

In article <QIednc5cFOIOUj7VnZ2dnUVZ8qDinZ2d(a)giganews.com>,
Terje Mathisen <terje.mathisen(a)hda.hydro.com> writes:
|> Jan Panteltje wrote:
|>
|> > No, int32_t and friends became NECESSARY when the 32 to 64 wave hit,
|> > a simple example, and audio wave header spec:
|> > #ifndef _WAVE_HEADER_H_
|> > #define _WAVE_HEADER_H_
|> >
|> > typedef struct
|> > {                          /* header for WAV-Files */
|> >     uint8_t  main_chunk[4];    /* 'RIFF' */
|> >     uint32_t length;           /* length of file */
|>
|> This is precisely the wrong specification for a portable specification!
|>
|> Which byte order should be used for the length field?

Yes.  Plus the fact that many interfaces use fields that have subtly
different value ranges or semantics than the ones specified by C.
But that wasn't my primary point.

The reason that the fixed-length fanatics are so wrong is that they
take a decision that is appropriate for external interfaces and extend
it to internal ones, and even the majority of workspace variables.
Let's take that mistake as an example.

The length is passed around as uint32_t, but so are rather a lot of
other fields.  In a year or so, the program is upgraded to support
another interface, which allows 48- or 64-bit file lengths.  Not merely
does the program now have to be hacked, sometimes extensively, there is
a high chance of missing some changes or changing something that
shouldn't have been.  Then the program starts to corrupt data, but
typically only when handling very large files!

That is PRECISELY what has happened, not just in the IBM MVT/MVS days,
but more than once in the Unix era, yet nobody seems to learn.  10
years ago, most Unix utilities were solid with such bugs, for exactly
that reason - even when running in 64-bit mode, they often started
corrupting data or crashing after 2/4 GB.

Yet writing word size independent code is no harder than writing that
sort of mess - though you do need to regard most of C99 as anathema.
Such code typically doesn't even check whether 'words' are 32-bit or
64-bit, and would usually work with 36-, 48-, 60- or 128-bit ones.

|> The alternative is to hide the memory ordering behind access functions
|> that take care of any byte swapping that might be needed.

That is the only approach for genuine portability, of course, such as
when some new interface demands that you do bit-swapping, add padding,
or do other such unexpected munging.  Plus, of course, it is the way
that any competent software engineer does it, because it provides a
place to put tracing hooks and hacks for broken interface usages, as
well as making it almost trivial to add support for new interfaces.

Regards,
Nick Maclaren.
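A minimal sketch of the kind of accessor Terje is suggesting, assuming
the RIFF/WAV convention that the length field is stored little-endian;
the function name and the sample bytes are illustrative, not from the
thread:

    #include <stdint.h>
    #include <stdio.h>

    /* Read a 32-bit little-endian field from a byte buffer, regardless
     * of the host's byte order or struct layout. */
    static uint32_t read_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }

    int main(void)
    {
        /* Bytes 4..7 of a WAV file hold the chunk length, little-endian. */
        unsigned char hdr[8] = { 'R', 'I', 'F', 'F', 0x24, 0x08, 0x00, 0x00 };
        printf("length = %lu\n", (unsigned long)read_le32(hdr + 4)); /* 2084 */
        return 0;
    }

Because all callers go through one function, a new interface with a
different width or byte order only requires a new accessor, not a hunt
through the whole program.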
From: Bernd Paysan on 14 Aug 2008 04:53

Kim Enkovaara wrote:
> And the synthesis result for the integer and bitvector are the same. The
> difference is that the other one traps in the simulation and the
> designer has to think about the error. In HW there is no bounds
> checking.

That's a big no-no. Synthesis is just another implementation of simulation,
so the semantics must be the same. If there is no proof that the trap can't
happen, I would say you aren't allowed to synthesize the construct.

> We also have to differentiate what is meant with an error. Is it
> something that traps the simulation and it might be a bug, or is it
> something that exists in the chip. I like code that traps as early as
> possible and near the real problem; for that reason assertions and
> bounds checking are a real timesaver in verification.

Yes, but assertions are an obvious verification tool, and not mixed with the
actual operation semantics. If I write in Verilog

  if(addr > 10) begin
    $display("Address out of bound: %d", addr);
    $stop;
  end

then this is perfectly synthesizable code, and I know that a failure in
simulation is actually a bug in some producer of the address.

>> My opinion towards good tools:
>>
>> * Straightforward operations
>> * Simple semantics
>
> At least Verilog blocking vs. nonblocking and general scheduling
> semantics are not very simple. VHDL scheduling is much harder to
> misuse.

Fortunately, the synthesis tools are usually very strict on the rules of
blocking vs. non-blocking, so if you misuse them, you get error messages.

>> * Don't offer several choices where one is sufficient
>
> Sometimes people like to code differently, choices help that. In
> SystemVerilog there are at least so many ways to do things that
> most people should be happy :)

SystemVerilog is a lot of VHDL with Verilog syntax (as you described above).

>> * Restrict people to a certain common style where the tool allows choices
>
> Coding style definition is a good way to start endless religious wars :)

If you design as a team, you don't have time for many different coding
styles. But certainly, you are right that people are religious about their
coding style - in the chip we are currently examining, we found a bug which
came from a particularly risky way of coding something, and we had already
had a discussion between two people when it was coded - the implementer
ignored the common practice, and the code he wrote was actually wrong.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Martin Brown on 14 Aug 2008 05:29

Wilco Dijkstra wrote:
> "Martin Brown" <|||newspam|||@nezumi.demon.co.uk> wrote in message news:eff30$48a2a587$14802(a)news.teranews.com...
>> Wilco Dijkstra wrote:
>>> "Martin Brown" <|||newspam|||@nezumi.demon.co.uk> wrote in message news:55e8e$48a1bebf$4916(a)news.teranews.com...
>>>> John Larkin wrote:
>>>>> On Tue, 12 Aug 2008 10:41:23 +0100, "Ken Hagan"
>>>>> <K.Hagan(a)thermoteknix.com> wrote:
>
>>> Exactly. A poor programmer is going to be a poor programmer no matter
>>> what language they use. It's always fun to see how people believe that
>>> so called "safe" languages are really a lot "safer". The bugs just move
>>> elsewhere.
>> Not quite. I doubt if a poor programmer could ever get a program to
>> compile with an Ada compiler. Pascal or Modula2 would protect the world
>> from a lot of the pointer casting disasters that C encourages.
>
> I agree C is a bit easier to learn syntactically and so attracts a larger share
> of bad programmers. But in terms of types there isn't a huge difference -
> you can't assign incompatible pointers in C without a cast. One issue is that
> compilers don't give a warning when casts are likely incorrect (such as
> casting to a type with higher alignment).

I have seen far too many horrors in C code inspections. I am frankly
amazed that some coders get away with so many mistakes.

>>> For example garbage collection solves various pointer problems
>>> that inexperienced programmers make. However it creates a whole new
>>> set of problems. Or the runtime system or libraries are much bigger and so
>>> contain their own set of bugs on each platform etc.
>> Even experienced programmers can make bad errors with pointers. You could
>> make a fairly strong case for only having arrays.
>
> I doubt it. I've worked for many years on huge applications which use complex
> data structures with lots of pointers. I've seen very few pointer related failures
> despite using specialized memory allocators and all kinds of complex pointer
> casting, unions etc. Most memory failures are null pointer accesses due to
> simple mistakes.
>
> There is certainly a good case for making pointers and arrays more distinct
> in C to clear up the confusion between them and allow for bounds checking.

Lack of proper sequential N-D arrays is one major C weakness. FORTRAN had
that exactly right, apart from the old 1-based indexing. Being able to
declare procedure parameters call by reference for speed, but as const,
would also make things a lot less likely to get trashed. That way they
could only be read but not modified.

>> Pointers are the programming equivalent of having a rats nest of bare
>> wires randomly soldered to points on your circuit board with the other
>> end hanging in the air waiting to touch something vital.
>
> I guess you don't like pointers then :-)

I don't dislike them quite as much as that suggests. It was more for
effect. However, it describes what happens quite often :(

The worst pointer-related fault I have ever had to find was as an outsider
diagnosing faults in a customer's large software base. The crucial mode of
failure was a local copy of a pointer to an object that was subsequently
deallocated, but stayed around unmolested for long enough for the program
to mostly still work - except when it didn't.

Regards,
Martin Brown

** Posted from http://www.teranews.com **
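A minimal C sketch of the failure mode Martin describes - a cached copy
of a pointer that outlives the object it points to; the struct and the
names are hypothetical, chosen only for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct config {
        char name[16];
        int  timeout;
    };

    static struct config *cached;   /* stale copy kept by another module */

    int main(void)
    {
        struct config *cfg = malloc(sizeof *cfg);
        if (cfg == NULL)
            return 1;
        strcpy(cfg->name, "serial0");
        cfg->timeout = 30;

        cached = cfg;      /* another module squirrels the pointer away */

        free(cfg);         /* the owner tears the object down ... */

        /* ... but the stale copy is still used later.  The freed block is
         * often still intact, so this "mostly works" - until the allocator
         * reuses the memory and the program corrupts data or crashes far
         * from the real cause. */
        printf("timeout = %d\n", cached->timeout);   /* undefined behaviour */
        return 0;
    }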
From: Kim Enkovaara on 14 Aug 2008 05:57

Bernd Paysan wrote:
> Kim Enkovaara wrote:
>> And the synthesis result for the integer and bitvector are the same. The
>> difference is that the other one traps in the simulation and the
>> designer has to think about the error. In HW there is no bounds
>> checking.
>
> That's a big no-no. Synthesis is just another implementation of simulation,
> so the semantics must be the same. If there is no proof that the trap can't
> happen, I would say you aren't allowed to synthesize the construct.

The synthesis works the same way as VHDL does if range checking is disabled
in the simulator via the command line. Range checking is just an extra
precaution, just like assertions, which are not synthesizable in any
general tools, only in some very specialized tools.

Synthesis vs. simulation semantics are different; many structures that
happily simulate might not be really synthesizable, for example many wait
statements. If the user is clueless, no language can save them from
disaster.

> Yes, but assertions are an obvious verification tool, and not mixed with the
> actual operation semantics. If I write in Verilog
>
>   if(addr > 10) begin
>     $display("Address out of bound: %d", addr);
>     $stop;
>   end
>
> then this is perfectly synthesizable code, and I know that a failure in
> simulation is actually a bug in some producer of the address.

But while synthesizing for a real target, the tool first says that $display
and $stop are not supported constructs, and after that it removes the empty
if, so nothing was actually generated. What is the difference to VHDL, where
the address is defined as integer range 0..10?

With assertions I was pointing more in the direction of PSL/SV assertions,
not the traditional ASSERT in VHDL or code-based assertions.

>> At least Verilog blocking vs. nonblocking and general scheduling
>> semantics are not very simple. VHDL scheduling is much harder to
>> misuse.
>
> Fortunately, the synthesis tools are usually very strict on the rules of
> blocking vs. non-blocking, so if you misuse them, you get error messages.

And simulators can simulate the same Verilog code in many different ways.
Many commercial IP models behave differently depending on optimization
flags, simulators, or even simulator versions. It seems that Verilog is
usually very badly misused, especially in behavioral models.

I hate debugging problems where you first have to figure out, with the
standard at hand, whether the simulator or the code is wrong, and after that
try to convince the original coder that he is wrong in his assumptions
about the language semantics.

--Kim
From: Jan Panteltje on 14 Aug 2008 06:07
On a sunny day (Thu, 14 Aug 2008 08:24:19 +0200) it happened Terje Mathisen
<terje.mathisen(a)hda.hydro.com> wrote in
<QIednc5cFOIOUj7VnZ2dnUVZ8qDinZ2d(a)giganews.com>:

>> No, int32_t and friends became NECESSARY when the 32 to 64 wave hit,
>> a simple example, and audio wave header spec:
>> #ifndef _WAVE_HEADER_H_
>> #define _WAVE_HEADER_H_
>>
>> typedef struct
>> {                          /* header for WAV-Files */
>>     uint8_t  main_chunk[4];    /* 'RIFF' */
>>     uint32_t length;           /* length of file */
>
>This is precisely the wrong specification for a portable specification!
>
>Which byte order should be used for the length field?
>
>If we had something like uint32l_t/uint32b_t with explicit
>little-/big-endian byte ordering, then it would be portable.
>
>The only way to make the struct above portable would be to make all
>16/32-bit variables arrays of 8-bit bytes instead, and then explicitly
>specify how they are to be merged.
>
>Using a shortcut specification as above would only be allowable as a
>platform-specific optimization, guarded by #ifdef's, for
>machine/compiler combinations which match the actual specification.
>
>The alternative is to hide the memory ordering behind access functions
>that take care of any byte swapping that might be needed.
>
>Terje

That is all true, but one thing 'uint32_t length' makes clear here is that
the wave file can be at most 4 GB long. When it is moved to a larger 'int'
this is not obvious; when going from 32 to 64 bit one could easily assume
otherwise.

There is also an alignment problem with structures like this: one could be
tempted to just read a file header into the structure's address, but if the
compiler aligns fields at 4-byte intervals, then your uint8_t may not be
where you expect it. So there is always more to it, if you dig deeper.
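A minimal sketch of the alignment hazard Jan mentions, with hypothetical
field names; a 1-byte tag is used here because a 4-byte tag like 'RIFF'
happens not to trigger padding before a uint32_t:

    #include <stdint.h>
    #include <stdio.h>

    /* A header laid out in the file as 1 + 4 bytes = 5 bytes on disk. */
    struct on_disk {
        uint8_t  tag;      /* 1 byte in the file  */
        uint32_t length;   /* 4 bytes in the file */
    };

    int main(void)
    {
        /* Most compilers insert 3 padding bytes after 'tag' so that
         * 'length' is 4-byte aligned, making the in-memory struct 8 bytes.
         * fread()ing the 5 file bytes straight into it would put 'length'
         * in the wrong place. */
        printf("file layout: 5 bytes, sizeof(struct on_disk) = %zu\n",
               sizeof(struct on_disk));
        return 0;
    }

The safe route is the one discussed earlier in the thread: read the header
into a byte buffer and extract each field through an accessor.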