From: Nick Maclaren on

In article <1Jwqk.33791$ah4.20206(a)newsfe15.ams2>,
"Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:
|>
|> > [#4] The result of E1 << E2 is E1 left-shifted E2 bit
|> > positions; vacated bits are filled with zeros. If E1 has an
|> > unsigned type, the value of the result is E1 × 2^E2, reduced
|> > modulo one more than the maximum value representable in the
|> > result type. If E1 has a signed type and nonnegative value,
|> > and E1 × 2^E2 is representable in the result type, then that is
|> > the resulting value; otherwise, the behavior is undefined.
|>
|> Exactly my point. It clearly states that ALL left shifts of negative values are
|> undefined, EVEN if they would be representable. The "and nonnegative value"
|> excludes negative values! The correct wording should be something like:

Yes, you are correct there, and I was wrong. I apologise.
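To spell out what that wording implies, here is a sketch (mine, and it
assumes a 32-bit two's complement int):

#include <stdio.h>

int main(void)
{
    unsigned int u = 0x80000000u;
    printf("%u\n", u << 1);    /* defined: reduced modulo 2^32, prints 0 */

    int p = 0x3FFFFFFF;
    printf("%d\n", p << 1);    /* defined: nonnegative, and 0x7FFFFFFE is
                                  representable in the result type */

    int n = -1;
    (void)n;
    /* n << 1 is undefined: E1 is negative, even though the value -2
       would be representable in the result type. */
    return 0;
}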


Regards,
Nick Maclaren.
From: Wilco Dijkstra on

"Nick Maclaren" <nmm1(a)cus.cam.ac.uk> wrote in message news:g8e20j$3j4$1(a)gemini.csx.cam.ac.uk...
>
> In article <222180a4-a9d1-48c5-94ae-e8ae643b1a6a(a)v57g2000hse.googlegroups.com>,
> already5chosen(a)yahoo.com writes:
> |>
> |> Byte addressability is still uncommon in the DSP world. And no, C
> |> compilers for DSPs do not emulate char in the manner you suggested
> |> below. They simply treat char and short as the same thing; on 32-bit
> |> systems char, short and long are all the same. I am pretty sure that
> |> what they do is in full compliance with the C standard.
>
> Well, it is and it isn't :-( There was a heated debate on SC22WG14,
> both in C89 and C99, where the UK wanted to get the standard made
> self-consistent. We failed. The current situation is that it is in
> full compliance for a free-standing compiler, but not really for a
> hosted one (think EOF). This was claimed not to matter, as all DSP
> compilers are free-standing!

Even though the standard is vague as usual about the relative sizes of integer
types beyond the minimum sizes, it is widely accepted that int must be larger
than char and long long larger than int. That means a 32-bit DSP must support
at least 3 different sizes. Even so, making short=int=long is bound to cause
trouble: a lot of software can deal with short=int or with int=long, but not
with both at once.
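Here is one concrete instance of the trouble (a sketch of mine; it relies
only on the usual arithmetic promotions):

#include <stdio.h>

int main(void)
{
    unsigned short a = 0;
    /* Where int is wider than short, a promotes to a signed int,
       a - 1 is -1, and the test below is true.  Where short == int,
       a promotes to unsigned int, a - 1 wraps to UINT_MAX, and the
       test is false: same source, different answer. */
    if (a - 1 < 0)
        printf("int is wider than short\n");
    else
        printf("short and int are the same width\n");
    return 0;
}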

> |> > Putting in extra effort to allow for a theoretical system with a
> |> > sign-magnitude 5-bit char or a 31-bit ones'-complement int is
> |> > completely insane.
> |>
> |> Agreed
>
> However, allowing for ones with 16- or 32-bit chars, or with
> sign-magnitude integers, is not. The former is already happening, and
> there are active, well-supported attempts to introduce the latter (think
> IEEE 754R). Will they ever succeed? Dunno.

32-bit wchar_t is OK, but 32-bit char is a bad idea (see above).
C99 already allows sign-magnitude integers. Or do you mean BCD integers?
That would be a disaster of unimaginable proportions...

> |> It seems you overlooked the main point of Nick's concern - sized types
> |> prevent automagical forward compatibility of the source code with
> |> larger problems on bigger machines.

That's not true. Most problems do not get "larger" over time. Since DSPs
are mentioned, imagine implementing a codec like AMR. You need a certain
minimum size to process the fixed-point samples. Larger types do not help
at all (one often needs to saturate to a certain width; in other cases you
can precalculate the maximum width needed for the required precision).
For this kind of problem, sized types are the most natural.
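For example, the saturating 16-bit add that a fixed-point codec performs
constantly might look like this (a sketch using the C99 <stdint.h> names):

#include <stdint.h>

/* Saturating 16-bit add: the intermediate needs one extra bit of
   headroom, but the result is always clamped back to 16 bits, so a
   wider machine word buys nothing here. */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;
    if (sum > INT16_MAX) return INT16_MAX;
    if (sum < INT16_MIN) return INT16_MIN;
    return (int16_t)sum;
}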

Now there are of course cases where the problem does get larger. That's
why we've got ptrdiff_t - there is no reason to fix its size. I never said
that we should completely abolish variable-sized types, but that the
standard should *mandate* that all implementations support the sized types
int8, int16, etc.
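C99's <stdint.h> already draws the line I mean - the problem is that the
exact-width names are optional. A sketch of the division of labour (the
variable names are just illustrative):

#include <stddef.h>
#include <stdint.h>

/* Fixed-width where the problem fixes the width... */
int16_t  pcm_sample;       /* a 16-bit audio sample is 16 bits, full stop */
uint32_t crc_register;     /* a CRC-32 register is 32 bits by definition  */

/* ...machine-scaled where the problem scales with the machine. */
size_t    buffer_length;   /* object sizes grow with the address space */
ptrdiff_t element_offset;  /* likewise pointer differences */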

One of the key advantages of sized types is that software needs less
porting effort. Even though Nick will claim his software runs on any system
ever made, in reality it is nontrivial to ensure software works on systems
with different integer sizes. I bet a lot of C code fails on this
32-bit-only DSP. However, if the sized types were supported, any code would
work unchanged. Java uses sized types for the same reason.

Wilco


From: Nick Maclaren on

In article <jQxqk.42827$2X3.25799(a)newsfe13.ams2>,
"Wilco Dijkstra" <Wilco.removethisDijkstra(a)ntlworld.com> writes:
|>
|> Even though Nick will claim his software runs on any system
|> ever made,

Please don't be ridiculous. I have never made such a claim, and
have used some systems so tricky that I had trouble writing even
simple Fortran that worked on them.


Regards,
Nick Maclaren.
From: already5chosen on
On Aug 19, 6:14 am, Andrew Reilly <andrew-newsp...(a)areilly.bpc-
users.org> wrote:
>
> For "modern", besides the C6000 series, I'd include the ADI/Intel
> Blackfin and the VLSI ZSP series.

Why don't you count the C55? It is relatively new and, according to my
understanding of the market, by far the most popular general-purpose
DSP in the world.

> I haven't used either of those in
> anger, but I believe that they're both more-or-less "C" compliant.

Of those you mentioned I have only used the Blackfin. Its support for "C"
is, indeed, idiomatic, as you call it.

>The
> main other "newness" in DSP-land are all of the DSP-augmented RISC
> processors, and they're all essentially pure "C" machines, too (alignment
> issues can be worse though, and you often have to use asm or intrinsics
> to get at the DSP features.)
>

IMHO, the main newness in the DSP world is that on the "simple algorithms,
high throughput" front, classic programmable von Neumann or Harvard
machines are less and less competitive with FPGAs. The appearance of HW
multipliers in the cost-oriented Spartan and Cyclone series changed the
game once and for all. So traditional DSP vendors, esp. TI and ADI,
should look for new niches. IMHO, it also means that the C6000 and, to a
lesser extent, TigerSharc lines don't have a bright future. On the other
hand, C55, Blackfin and the flash-based C28 and similar Freescale products
are not in danger.
Oh, quite off topic...

From: Michel Hack on
On Aug 18, 6:36 pm, "Wilco Dijkstra"
<Wilco.removethisDijks...(a)ntlworld.com> wrote:
> "Michel Hack" <h...(a)watson.ibm.com> wrote in message
> > On some machines the high-order bit is shifted out, on others (e.g.
> > S/370) it remains unchanged:
>
> It would be correct as long as there is no overflow, i.e. 0xffffffff << 1
> becomes 0xfffffffe as expected.

Ah yes -- but is that high-order one-bit the ORIGINAL one-bit, or is
it a one-bit that was shifted into it? ;-)
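To spell it out: both conventions turn -1 into 0xfffffffe, so that result
alone cannot tell you; only an overflowing case separates them. A sketch of
mine, modelling an S/370-style arithmetic left shift in portable C:

#include <stdio.h>

/* Model of an S/370-style arithmetic left shift: the sign bit
   stays put and only the low 31 bits move. */
static unsigned int sla1(unsigned int x)
{
    return (x & 0x80000000u) | ((x << 1) & 0x7FFFFFFFu);
}

int main(void)
{
    unsigned int v = 0xFFFFFFFFu;             /* -1: the two agree    */
    printf("%08x %08x\n", v << 1, sla1(v));   /* fffffffe fffffffe    */

    unsigned int w = 0x40000000u;             /* overflow: they differ */
    printf("%08x %08x\n", w << 1, sla1(w));   /* 80000000 00000000    */
    return 0;
}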

Michel.