From: Andrew Reilly on 22 Jun 2010 07:47

On Tue, 22 Jun 2010 11:33:13 +0100, nmm1 wrote:

> In article <88be3vFcceU2(a)mid.individual.net>, Andrew Reilly
> <areilly---(a)bigpond.net.au> wrote:
>>I don't need fixed point arithmetic or new types, I just want a "C" that
>>defines sane behavior for all of the operations on the fixed-width, 2's
>>complement integers that exist on essentially all processors made today.
>
> Ah, so you want the compiler to trap overflow, diagnose where it
> happened and terminate your program. LIA-1 specifies that as a (perhaps
> the) preferred mode.

Do you know *any* hardware that operates that way? (I wasn't previously
aware of LIA-1, btw. Thanks for pointing it out. Good luck with that, guys.)

No: I want the 2's complement, fixed-point integers to wrap, just like the
hardware does. I don't mind an interrupt on overflow, so long as I can
install an "ignore" handler (which isn't really in the scope of a language
even vaguely like C). It wouldn't be my preference. Program termination
would be useless. I want to be able to detect overflow when it occurs,
either with the cumbersome if-tree that I showed in a previous message or
some sort of "magic intrinsic function" that could be used like:
if (overflows(x+y)) {...}.

The more domains that C leaves to assembly language, the less useful it
becomes, and the more likely it is to be replaced. There is no virtue in C
pretending to be some sort of abstract, high-level language: there are
plenty of those.

I realize that I'm arguing this in the wrong arena, but I'm not up to
c.l.c...

Here's a comp.arch tangent: I believe that processor architects only
design and optimise for the requirements of the bulk of the "important"
code base. If C compilers actively *prevent* the detection of signed
integer overflow, then application code will find ways to avoid depending
on being able to. How long before new processors just don't bother
including the functionality? (That would probably seriously annoy the lisp
implementers, though: graceful conversion to bignums would be harder if
you couldn't detect fixnum overflow.)

I'm sure unsigned right shift is cheaper/lower-power than signed: maybe we
don't need that, either?

Cheers,

--
Andrew
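[Editorial aside: the overflows() intrinsic above is hypothetical. For
concreteness, here is a minimal C sketch of the "cumbersome if-tree" style
of check being discussed, which tests the operands before the addition so
that no undefined signed overflow ever happens; the helper name is made up
for this example.]

#include <limits.h>
#include <stdio.h>

/* Hypothetical helper: nonzero if x + y would overflow a signed int.
 * The operands are tested *before* the addition, so the overflowing
 * add is never actually performed. */
static int add_would_overflow(int x, int y)
{
    if (y > 0 && x > INT_MAX - y) return 1;   /* would exceed INT_MAX */
    if (y < 0 && x < INT_MIN - y) return 1;   /* would go below INT_MIN */
    return 0;
}

int main(void)
{
    int x = INT_MAX, y = 1;
    if (add_would_overflow(x, y))
        printf("x + y would overflow\n");
    else
        printf("x + y = %d\n", x + y);
    return 0;
}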
From: Niels Jørgen Kruse on 22 Jun 2010 08:11

Andrew Reilly <areilly---(a)bigpond.net.au> wrote:

> This is the same sort of epic compiler fail as eliding (x<<16)>>16 (once
> a common idiom to sign-extend 16-bit integers) on the grounds that the
> standard doesn't require anything in particular to happen when signed
> integers are shifted.

You could mask x first, so that overflow is impossible. A compiler this
smart should evaporate the mask.

--
Mvh./Regards, Niels Jørgen Kruse, Vanløse, Denmark
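[Editorial aside: one well-defined way to get the effect of the
(x<<16)>>16 idiom is to mask first and then undo the sign bias by hand.
This is only an illustrative sketch (the function name is invented, not
from the thread); a compiler of the intelligence being assumed here can
typically reduce it to a single sign-extending load or movsx.]

#include <stdint.h>

/* Sign-extend the low 16 bits of x without shifting a signed value.
 * The mask keeps only the 16-bit field; the xor/subtract pair maps
 * 0x0000..0x7FFF to 0..32767 and 0x8000..0xFFFF to -32768..-1,
 * entirely within defined behaviour. */
static int32_t sign_extend_16(uint32_t x)
{
    x &= 0xFFFFu;                          /* mask so only 16 bits remain */
    return (int32_t)(x ^ 0x8000u) - 0x8000;
}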
From: nmm1 on 22 Jun 2010 08:34

In article <88bm5vFfraU1(a)mid.individual.net>, Andrew Reilly
<areilly---(a)bigpond.net.au> wrote:
>
>>>I don't need fixed point arithmetic or new types, I just want a "C" that
>>>defines sane behavior for all of the operations on the fixed-width, 2's
>>>complement integers that exist on essentially all processors made today.
>>
>> Ah, so you want the compiler to trap overflow, diagnose where it
>> happened and terminate your program. LIA-1 specifies that as a (perhaps
>> the) preferred mode.
>
>Do you know *any* hardware that operates that way? (I wasn't previously
>aware of LIA-1, btw. Thanks for pointing it out. Good luck with that,
>guys.)

Yes. Lots do, if you select the right options. Or, rather, they generate
an interrupt, which the language run-time system can trap.

>No: I want the 2's complement, fixed-point integers to wrap, just like
>the hardware does.

From the viewpoint of a high-level language, that is insane behaviour.
And, for better or worse, ISO C attempts to be a high-level language.

>I don't mind an interrupt on overflow, so long as I
>can install an "ignore" handler (which isn't really in the scope of a
>language even vaguely like C). It wouldn't be my preference. Program
>termination would be useless. I want to be able to detect overflow when
>it occurs, either with the cumbersome if-tree that I showed in a
>previous message or some sort of "magic intrinsic function" that could be
>used like: if (overflows(x+y)) {...}.

You haven't had that in any 'third generation language' that I know of
since Fortran II, except possibly for Cobol. Your hack works for integer
addition, but not for multiplication, floating-point, conversion (whether
to smaller integers or between types) and so on. A language specification
isn't much use if it specifies only the most trivial aspects and leaves
the rest undefined.

>The more domains that C leaves to assembly language, the less useful it
>becomes, and the more likely it is to be replaced. There is no virtue in
>C pretending to be some sort of abstract, high-level language: there are
>plenty of those.

While C started out as a semi-portable assembler, I should be absolutely
flabbergasted if you were prepared to accept the consequences. Have you
ever written portable code for such languages? The point is that its
specification was primarily the syntax and intent, and the semantics were
entirely system-dependent. You would then get different behaviour
according to what instructions the compiler generated. I have coded for
such specifications, but you need skills that are VERY rare nowadays.

Regards,
Nick Maclaren.
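[Editorial aside: to illustrate how poorly the addition hack generalizes,
here is what a purely portable pre-check looks like for signed
multiplication; every sign combination needs its own division-based test
so that the multiply itself is never executed when it would overflow. The
helper name is made up for this sketch.]

#include <limits.h>

/* Hypothetical helper: nonzero if x * y would overflow a signed int.
 * Unlike the x+y < x trick, there is no single cheap comparison. */
static int mul_would_overflow(int x, int y)
{
    if (x == 0 || y == 0)
        return 0;
    if (x > 0 && y > 0)  return x > INT_MAX / y;   /* result positive */
    if (x > 0 && y < 0)  return y < INT_MIN / x;   /* result negative */
    if (x < 0 && y > 0)  return x < INT_MIN / y;   /* result negative */
    /* x < 0 && y < 0: result positive */
    return y < INT_MAX / x;
}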
From: George Neuner on 22 Jun 2010 10:14

On Tue, 22 Jun 2010 12:06:02 +0200, Terje Mathisen
<"terje.mathisen at tmsw.no"> wrote:

>Watcom allowed you to define pretty much any operation yourself, in the
>form of inline macro operations where you specified to the compiler
>where the inputs needed to be, where the result would end up and exactly
>which registers/parts of memory would be modified.

Does Open-Watcom still have that ability?

George
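[Editorial aside: the facility Terje is describing is Watcom's #pragma aux
in-line machine-code mechanism. The sketch below is from memory and is
only meant to convey the flavour; the exact clause syntax and register
conventions should be checked against the Open Watcom documentation
rather than taken from here.]

/* Rough sketch only: an "overflows(x + y)"-style primitive expressed as
 * a Watcom in-line machine-code pragma.  The parm/value/modify clauses
 * tell the compiler where the inputs arrive, where the result is
 * returned, and which registers are clobbered. */
int add_overflows(int x, int y);
#pragma aux add_overflows =     \
    "add   eax, edx"            \
    "mov   eax, 0"              \
    "seto  al"                  \
    parm   [eax] [edx]          \
    value  [eax]                \
    modify [eax];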
From: Andy 'Krazy' Glew on 22 Jun 2010 10:35
On 6/22/2010 4:47 AM, Andrew Reilly wrote:
> Here's a comp.arch tangent: I believe that processor architects only
> design and optimise for the requirements of the bulk of the "important"
> code base. If C compilers actively *prevent* the detection of signed
> integer overflow, then application code will find ways to avoid depending
> on being able to. How long before new processors just don't bother
> including the functionality? (That would probably seriously annoy the
> lisp implementers, though: graceful conversion to bignums would be harder
> if you couldn't detect fixnum overflow.)

Actually, that is exactly what has happened over the years.

Instructions like INTO (detect overflow flag settings, typically signed,
and trap) have been deprecated.

Chicken and egg: INTO was slow; INTO did not do all it needed to do; so
nobody used INTO. So INTO was made even worse.

Plus, there were workarounds like checking if x+y < x which could
substitute for INTO, with more useful semantics. But C has now made these
officially not work, although de facto they usually still do.

I have vague hopes that the pendulum is swinging the other way, but I
admit that I got tired of waiting and left.

> I'm sure unsigned right shift is cheaper/lower-power than signed: maybe
> we don't need that, either?

Yes, this has been proposed.

More commonly: load with sign extension is usually slower than loading
without sign extension [*], since in normal representation it involves
waiting for the MSB and smearing it over many upper bits. So many new
instruction proposals have proposed doing away with signed loads.

One of my contributions was pointing out that, if you have a datapath
using redundant arithmetic, sign extension can be done cheaply by simply
doubling the high order bit. See not yet written page
https://semipublic.comp-arch.net/wiki/sign_extension_using_redundant_representation
(I'm finding it convenient to make links to not yet existing pages, then
click and edit. I.e. to apply the wiki principle.)
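[Editorial aside: for completeness, the well-defined cousin of the
x+y < x workaround mentioned above: do the wrapping addition in unsigned
arithmetic, where wraparound is defined by the standard, then inspect the
sign bits, since signed addition overflows exactly when the operands agree
in sign and the wrapped sum does not. The function name is invented for
this sketch.]

#include <stdint.h>

/* Nonzero if x + y overflows a 32-bit signed int, computed without ever
 * performing a signed addition that could overflow. */
static int signed_add_overflows(int32_t x, int32_t y)
{
    uint32_t ux  = (uint32_t)x;
    uint32_t uy  = (uint32_t)y;
    uint32_t sum = ux + uy;            /* wraps modulo 2^32: defined */

    /* Top bit of (ux ^ sum) says x and the sum differ in sign; likewise
     * for y.  Overflow iff both differ, i.e. x and y agree in sign but
     * the sum does not. */
    return ((ux ^ sum) & (uy ^ sum)) >> 31;
}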