From: Jan Panteltje on
On a sunny day (14 Aug 2008 08:57:49 GMT) it happened nmm1(a)cus.cam.ac.uk (Nick
Maclaren) wrote in <g80s2d$qe8$1(a)gemini.csx.cam.ac.uk>:

>
>The length is passed around as uint32_t, but so are rather a lot of
>other fields. In a year or so, the program is upgraded to support
>another interface, which allows 48- or 64-bit file lengths.

That is babble.
A new file specification will have another header, or will be in
an extension field of the current header.
The whole program would be a different one, as all length calculations
and checks would change.
I have been there, done that.

Using clearly specified variable width is GOOD, unless you want to go
back to BASIC.
:-)
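As a sketch of that point about clearly specified widths, here is a hypothetical on-disk header with exact field sizes (the struct and field names are made up for illustration, not from any post in this thread). A format revision would define a new header or an extension field rather than silently widening an existing one:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk header with exact field widths.
   A format revision defines a new header (or an extension
   field); it does not silently widen this one. */
struct file_header {
    uint32_t magic;    /* format identifier */
    uint32_t version;  /* bumped when the layout changes */
    uint32_t length;   /* file length, 32-bit by specification */
};

/* Parse from a byte buffer without relying on struct padding. */
static struct file_header parse_header(const unsigned char *buf)
{
    struct file_header h;
    memcpy(&h.magic,   buf + 0, sizeof h.magic);
    memcpy(&h.version, buf + 4, sizeof h.version);
    memcpy(&h.length,  buf + 8, sizeof h.length);
    return h;
}
```

(For a real format one would also pin the byte order; the memcpy calls here only avoid alignment and padding assumptions.)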

From: Martin Brown on
Kim Enkovaara wrote:
> Martin Brown wrote:
>> It might be an interesting academic study to see how the error rates
>> of hardware engineers using VHDL compare with those using Verilog
>> tools for the same sorts of design. The latter I believe is less
>> strongly typed.
>
> In real industry design flows things get more complicated, because
> almost all of the Verilog flows recommend the use of lint-type
> tools (which actually do much more than rudimentary language
> checks). Those tools perform some of the checks that the VHDL type
> system does during compilation, as part of the language.

OK. Then the comparison should be the proportion of static errors detected
by the lint-style tools in the Verilog environment versus the defects
that are found later on, and the same for VHDL.

It would be unreasonable to compare Verilog without the normal workflow
tools being used. Although the number of defects they pick up per KLOC
would be an interesting number for both environments.

Regards,
Martin Brown
From: Wilco Dijkstra on

"Terje Mathisen" <terje.mathisen(a)hda.hydro.com> wrote in message news:V92dnbsbmsAAST7VRVnyvwA(a)giganews.com...
> Wilco Dijkstra wrote:
>> time consuming ISO process used. One of the funny moments was when I
>> spotted 5 mistakes in a 20-line function that calculated the log2 of an integer
>> during a code review. I hope that wasn't their average bug rate!
>
> Indeed.
>
> How many ways can you define such a function?
>
> The only serious alternatives would be in the handling of negative-or-zero inputs or when rounding the actual fp
> result to integer:
>
> Do you want the Floor(), i.e. truncate, Ceil() or Round_to_nearest_or_even()?
>
> Using the last alternative could make it harder to come up with a perfect
> implementation, but otherwise it should be trivial.

It was a trivial routine, just floor(log2(x)), so just finding the top bit that is set.
The mistakes were things like not handling zero, using signed rather than
unsigned variables, looping forever for some inputs, returning the floor result + 1.

Rather than just shifting the value right until it becomes zero, it created a mask
and shifted it left until it was *larger* than the input (which does not work
if the mask is a signed variable, or if the input has bit 31 set, etc.).
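A reconstruction of that failure mode (a sketch of the pattern described, not the actual reviewed code): with a signed mask, shifting into the sign bit is undefined behaviour, and the "shift left until larger" loop never terminates once the input has bit 31 set.

```c
/* Buggy sketch: shift a mask left until it exceeds the input. */
int log2_floor_buggy(unsigned x)
{
    int mask = 1;   /* bug: signed -- shifting into the sign bit is UB */
    int n = 0;
    while ((unsigned)mask <= x) {   /* bug: never exits if x >= 2^31 */
        mask <<= 1;
        n++;
    }
    return n - 1;   /* bug: also returns -1 for x == 0, by accident */
}
```

For small inputs it happens to produce the right answer, which is exactly why this kind of bug survives casual testing.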

My version was something like:

int log2_floor(unsigned x)
{
    int n = -1;

    /* Count how many right shifts it takes to exhaust x. */
    for ( ; x != 0; x >>= 1)
        n++;

    return n;   /* -1 for x == 0 */
}

Wilco


From: Nick Maclaren on

In article <g810br$ou9$1(a)aioe.org>,
Jan Panteltje <pNaonStpealmtje(a)yahoo.com> writes:
|> >
|> >The length is passed around as uint32_t, but so are rather a lot of
|> >other fields. In a year or so, the program is upgraded to support
|> >another interface, which allows 48- or 64-bit file lengths.
|>
|> That is babble.
|> A new file specification will have another header, or will be in
|> an extension field of the current header.
|> The whole program would be a different one, as all length calculations
|> and checks would change.
|> I have been there, done that.

Precisely. And, if you would learn from the experience of the past,
all you would have to change is the interface code - the rest of the
program would not even need inspecting.

Been there - done that. Many times, in many contexts.
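The "only the interface code changes" approach can be sketched with a project-wide typedef (the names here are hypothetical): widen one definition at the interface boundary, and the rest of the program, which computes only in that type, needs no inspection.

```c
#include <stdint.h>

/* One definition to change when the interface grows:
   everything else computes in file_len_t. */
typedef uint64_t file_len_t;   /* was uint32_t before the upgrade */

/* Representative "rest of the program" code: written against
   file_len_t, it is untouched by the widening. */
file_len_t add_lengths(file_len_t a, file_len_t b)
{
    return a + b;
}
```

With the 64-bit definition, sums that would have wrapped a uint32_t now come out correct without any change outside the typedef.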


Regards,
Nick Maclaren.
From: Wilco Dijkstra on
"Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message news:g7vfa8$3km$1(a)gemini.csx.cam.ac.uk...
>
> In article <cVDok.93318$tc1.22014(a)newsfe24.ams2>,
> "Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:

> As I said, I will send you my document if you like, which includes
> examples and explanations. Otherwise I suggest that you look at the
> archives of comp.std.c, which has dozens of examples. I don't have
> time to search my records for other examples for you.

I'd certainly be interested in the document. My email is above, just make
the obvious edit.

> |> I bet that most code will compile and run without too much trouble.
> |> C doesn't allow that much variation in targets. And the variation it
> |> does allow (e.g. ones' complement) is not something sane CPU
> |> designers would consider nowadays.
>
> The mind boggles. Have you READ the C standard?

More than that. I've implemented it. Have you?

It's only when you implement the standard that you realise many of the issues
are irrelevant in practice. Take sequence points, for example. They are not even
modelled by most compilers, so whatever ambiguities there are, they simply
cannot become an issue. Similarly, various standards pedants moan
about shifts not being portable, but they can never name a compiler that fails
to implement them as expected...

Btw, do you happen to know the reasoning behind signed left shifts being
undefined while right shifts are implementation-defined?
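For reference, that is the distinction in C99 6.5.7: left-shifting a negative value is undefined behaviour, while right-shifting one is implementation-defined (most compilers choose an arithmetic shift). A sketch of an arithmetic right shift that does not rely on that implementation-defined choice:

```c
#include <stdint.h>

/* Arithmetic right shift of a possibly negative 32-bit value,
   written without using `x >> n` on a negative x.
   (The final cast back to int32_t assumes the usual
   two's-complement conversion.) */
int32_t asr32(int32_t x, unsigned n)
{
    if (x >= 0)
        return (int32_t)((uint32_t)x >> n);
    /* Shift the complement, then complement back: this fills
       the vacated high bits with ones, as an arithmetic
       shift would. */
    return (int32_t)~(~(uint32_t)x >> n);
}
```

On negative inputs this matches what an arithmetic-shift implementation of `>>` would give, but it is portable to any conforming compiler.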

> |> Not specifying the exact size of types is one of C's worst mistakes.
> |> Using sized types is the right way to achieve portability over a wide
> |> range of existing and future systems (including ones that have different
> |> register sizes). The change to 128-bit is not going to affect this software
> |> precisely because it already uses correctly sized types.
>
> On the contrary. Look, how many word size changes have you been
> through? Some of my code has been through about a dozen, in
> succession, often with NO changes. Code that screws 32 bits in
> will not be able to handle data that exceeds that.

It will work as long as the compiler supports a 32-bit type - which it will, of
course. But in the unlikely event that it doesn't, why couldn't one
emulate a 32-bit type, just like 32-bit systems emulate 64-bit types?
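The fallback is in fact already in the standard headers: C99 guarantees `uint_least32_t` even where an exact `uint32_t` does not exist, and 32-bit wraparound can be forced by masking. A sketch:

```c
#include <stdint.h>

/* 32-bit modular addition on a platform that might lack an
   exact 32-bit type: compute in uint_least32_t (guaranteed to
   exist and hold at least 32 bits) and mask to 32 bits. */
uint_least32_t add_u32(uint_least32_t a, uint_least32_t b)
{
    return (a + b) & UINT32_C(0xFFFFFFFF);
}
```

Whether `uint_least32_t` is exactly 32 bits or wider, the mask gives the same modulo-2^32 result.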

> You are making PRECISELY the mistake that was made by the people
> who coded the exact sizes of the IBM System/360 into their programs.
> They learnt better, but have been replaced by a new set of kiddies,
> determined to make the same old mistake :-(

Actually various other languages support sized types and most software
used them long before C99. In many cases it is essential for correctness
(imagine writing 32 bits to a peripheral when it expects 16 bits etc). So
you really have to come up with some extraordinary evidence to explain
why you think sized types are fundamentally wrong.
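The peripheral case is the canonical one: the access width is part of the hardware contract, so the code must pin it. A sketch (the register is modelled here as plain memory so the effect of the store width can be shown; on real hardware the pointer would be a fixed memory-mapped address):

```c
#include <stdint.h>

/* Two adjacent hypothetical 16-bit registers, modelled as
   memory for demonstration purposes. */
uint16_t regs[2] = { 0x0000, 0xBEEF };

/* Writing through a uint16_t pointer pins the store to exactly
   16 bits; a 32-bit store at the same address would clobber the
   neighbouring register regs[1]. */
void ctrl_write(uint16_t value)
{
    volatile uint16_t *reg = (volatile uint16_t *)&regs[0];
    *reg = value;   /* exactly one 16-bit store */
}
```

After a write, the neighbouring register is untouched, which is the property an unsized type cannot guarantee.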

Wilco

