From: Grant Edwards on
On 2010-05-26, David Brown <david(a)westcontrol.removethisbit.com> wrote:
> On 26/05/2010 08:04, 42Bastian Schick wrote:
>> On Tue, 25 May 2010 14:24:00 +0000 (UTC), Grant Edwards
>> <invalid(a)invalid.invalid> wrote:
>>
>>>>
>>>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>>>
>>>> It's an interesting paper in several ways
>>>
>>> Is the paper available somewhere?
>>
>> I entered the URL in firefox and got it. What is your problem?
>
> I suspect he was hoping to find a full text paper, with the
> transcript of the talk, rather than just the slides.

I don't really care about a transcript of the talk (nor the slides
that accompanied the talk), I was just hoping to read the actual
paper.

--
Grant Edwards               grant.b.edwards at gmail.com
                            Yow! And then we could sit on the hoods
                            of cars at stop lights!
From: George Neuner on
On Wed, 26 May 2010 06:09:00 GMT, bastian42(a)yahoo.com (42Bastian
Schick) wrote:

>On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
><walter(a)bytecraft.com> wrote:
>
>
>>Code motion and other simple optimizations leave GCC's
>>source-level debug information significantly broken, forcing
>>many developers to debug applications with much of the
>>optimization off, then recompile later with optimization on but
>>with the code largely untested.
>
>I don't see why "broken debug information" is an excuse for not testing
>the final version. In an ideal world, there should be no need to debug
>the final version ;-)
>
>And if optimization breaks your code, it is likely your code was
>broken before (e.g. missing 'volatile').

That isn't true ... optimizations frequently don't play well together
and many combinations are impossible to reconcile on a given chip.

GCC isn't a terribly good compiler and its high optimization modes are
notoriously unstable. A lot of perfectly good code is known to break
under -O3, and even -O2 is dangerous in certain situations.

George
From: George Neuner on
On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
<walter(a)bytecraft.com> wrote:

>Code motion and other simple optimizations leave GCC's
>source-level debug information significantly broken ...

GCC isn't a terribly good compiler. Nonetheless I think it is
misleading to lump code motion with "simple" optimizations. The
dependency analyses required to safely move any but the simplest
straight-line code are quite involved.
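
To sketch what I mean with a made-up example (hypothetical code, not from
any real project): even hoisting a single load out of a loop requires the
compiler to prove the load cannot depend on anything the loop writes, which
means alias analysis rather than simple pattern matching.

#include <stddef.h>

/* The load of *scale can only be moved in front of the loop if the
 * compiler can prove that dst[i] never aliases *scale.  With plain
 * int pointers it usually cannot, so the load stays inside the loop. */
void scale_buffer(int *dst, const int *src, const int *scale, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * *scale;       /* *scale reloaded each time */
}

/* 'restrict' asserts there is no aliasing, so the same code motion
 * becomes trivially legal and the load can be hoisted. */
void scale_buffer_noalias(int * restrict dst, const int * restrict src,
                          const int * restrict scale, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * *scale;       /* load may be hoisted */
}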

George
From: Dombo on
George Neuner schreef:
> On Wed, 26 May 2010 06:09:00 GMT, bastian42(a)yahoo.com (42Bastian
> Schick) wrote:
>
>> On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
>> <walter(a)bytecraft.com> wrote:
>>
>>
>>> Code motion and other simple optimizations leave GCC's
>>> source-level debug information significantly broken, forcing
>>> many developers to debug applications with much of the
>>> optimization off, then recompile later with optimization on but
>>> with the code largely untested.
>> I don't see why "broken debug information" is an excuse for not testing
>> the final version. In an ideal world, there should be no need to debug
>> the final version ;-)
>>
>> And if optimization breaks your code, it is likely your code was
>> broken before (e.g. missing 'volatile').
>
> That isn't true ... optimizations frequently don't play well together
> and many combinations are impossible to reconcile on a given chip.

It isn't entirely false either; sometimes buggy code (e.g. uninitialized
variables) appears to work fine with optimizations turned off but fails
when the optimizer is used. When code breaks after changing compiler
settings, I'm more inclined to suspect the code than the compiler, even
though I have been bitten by compiler bugs in the past.
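
A contrived example of the sort of thing I mean (hypothetical code):

#include <stdio.h>

int sum_positive(const int *v, int n)
{
    int sum;                    /* BUG: never initialized */
    for (int i = 0; i < n; i++) {
        if (v[i] > 0)
            sum += v[i];        /* first add reads an indeterminate value */
    }
    return sum;
}

int main(void)
{
    int v[3] = { 1, 2, 3 };
    /* At -O0 the stack slot often happens to hold zero, so the result
     * looks correct; with the optimizer on, 'sum' may live in a register
     * with an arbitrary starting value and the answer changes.  The code
     * was broken all along; the optimizer merely exposed it. */
    printf("%d\n", sum_positive(v, 3));
    return 0;
}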

> GCC isn't a terribly good compiler and its high optimization modes are
> notoriously unstable. A lot of perfectly good code is known to break
> under -O3, and even -O2 is dangerous in certain situations.

This problem is not unique to GCC; I have seen the same thing with
some commercial compilers, which produced clearly incorrect code when
using the more aggressive optimization levels.
From: David Brown on
On 26/05/2010 21:32, George Neuner wrote:
> On Wed, 26 May 2010 06:09:00 GMT, bastian42(a)yahoo.com (42Bastian
> Schick) wrote:
>
>> On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
>> <walter(a)bytecraft.com> wrote:
>>
>>
>>> Code motion and other simple optimizations leave GCC's
>>> source-level debug information significantly broken, forcing
>>> many developers to debug applications with much of the
>>> optimization off, then recompile later with optimization on but
>>> with the code largely untested.
>>
>> I don't see why "broken debug information" is an excuse for not testing
>> the final version. In an ideal world, there should be no need to debug
>> the final version ;-)
>>
>> And if optimization breaks your code, it is likely your code was
>> broken before (e.g. missing 'volatile').
>
> That isn't true ... optimizations frequently don't play well together
> and many combinations are impossible to reconcile on a given chip.
>

I have never seen a situation where correct C code failed because of
higher optimisation except when exact timing is needed, or in the case
of compiler errors. I have seen lots of code where the author has said
it works without optimisations, but fails when they are enabled - in
every case, it was the code that was at fault.

The most common issue is a misunderstanding of how "volatile" works.
Some people have all sorts of beliefs about "volatile", such as that it gives
atomic access to data or that it "disables optimisations". Another favourite
is the belief that a volatile access (or inline assembly code or
intrinsic function, such as an interrupt disable) acts as a memory barrier.
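
To illustrate both misunderstandings with a made-up fragment (the names are
invented; the comments state what the language actually guarantees):

#include <stdint.h>

volatile uint32_t tick_count;   /* incremented in a timer interrupt */
volatile int data_ready;
int shared_data;

void producer(int value)
{
    shared_data = value;        /* plain store */
    data_ready = 1;             /* volatile store */
    /* 'volatile' only orders this store against other *volatile*
     * accesses; the compiler is still free to move the plain store to
     * shared_data past it, because nothing here is a memory barrier. */
}

uint32_t read_ticks(void)
{
    /* On an 8- or 16-bit CPU this 32-bit read takes several bus
     * accesses; 'volatile' does not make it atomic, so an interrupt
     * in the middle can produce a torn value. */
    return tick_count;
}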

I've also seen code where people try to limit the compiler in an attempt
to control optimisations, using "tricks" like calling functions through
function pointers to force the compiler to avoid inlining or other
optimisations. Then they get caught out when a newer, smarter version
of the compiler can see through that trick.
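
A hypothetical version of that trick, for illustration only:

extern void do_io(void);

/* The hope is that an indirect call cannot be inlined or reordered... */
static void (*const do_io_ptr)(void) = do_io;

void poll_device(void)
{
    /* ...but a smarter compiler constant-folds the pointer, calls
     * do_io directly and may well inline it, so any behaviour that
     * depended on the indirect call (timing, code placement) silently
     * disappears. */
    do_io_ptr();
}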

There /are/ times when you want more precise control over the code
generation - perhaps there are issues with code size or stack size,
precise timing requirements (too fast can be as bad as too slow), or
integration with assembly or external code that depends on exact
generated code. Proper use of volatile and memory barriers is usually
enough to write correct code, and is the best solution - it is written
in the source code, and is independent of optimisation settings.
Occasionally assembly code or specific optimisation settings for
specific modules are required, but these cases are very rare.
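
As a minimal sketch of what I mean, assuming a gcc-style inline-assembly
compiler barrier and a single-core target (a multi-core or heavily
reordering CPU needs a real hardware barrier as well):

#include <stdint.h>

#define compiler_barrier()  __asm__ __volatile__("" ::: "memory")

volatile int buffer_ready;
uint8_t buffer[64];

void publish_buffer(const uint8_t *src)
{
    for (int i = 0; i < 64; i++)
        buffer[i] = src[i];

    compiler_barrier();     /* the buffer writes must be emitted before
                               the flag is set */
    buffer_ready = 1;       /* volatile: the store is always performed
                               and never reordered against other
                               volatile accesses */
}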

Note that this is all independent of the compiler - there is nothing
gcc-specific about these issues. However, I've seen many "optimisation
broke my code" questions on gcc mailing lists, because people have
ported code from compilers with weaker code generators (including
earlier gcc versions) and their code now breaks, because gcc is smarter.
The archetypical example is "my delay loop worked with compiler X, but
gcc skips it".
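
For illustration (hypothetical code):

void delay_broken(void)
{
    for (unsigned i = 0; i < 100000; i++)
        ;                   /* no side effects: gcc is allowed to
                               delete the whole loop */
}

void delay_better(void)
{
    for (volatile unsigned i = 0; i < 100000; i++)
        ;                   /* every access to 'i' is a side effect,
                               so the loop must actually run */
}

Even the second version only guarantees that the loop is executed, not how
long it takes; a hardware timer is still the right tool for real delays.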


> GCC isn't a terribly good compiler and its high optimization modes are
> notoriously unstable. A lot of perfectly good code is known to break
> under -O3, and even -O2 is dangerous in certain situations.
>

A lot of code that is widely used is /not/ perfectly good. For most
embedded development with gcc, the standard optimisation level is "-Os",
which is "-O2" with a little more emphasis on code size, avoiding
those optimisations that increase the code size for relatively little
speed gain (such as a lot of loop unrolling, or aggressive inlining).
The standard for desktop and "big" systems is "-O2".

Level "-O3" is seldom used in practice with gcc, but it is not because
the compiler is unstable or generates bad code. It is simply that the
additional optimisations used here often make the code bigger for very
little, if any, speed gain. Even on large systems, bigger code means
more cache misses and lower speed. So these optimisations only make
sense for code that actually benefits from them.

There /are/ optimisations in gcc that are considered unstable or
experimental - there are vast numbers of flags that you can use if you
want. But they don't get added to the -Ox sets unless they are known to
be stable and reliable and to help for a wide range of code. Flags that
are known to be experimental or problematic are marked as such in the
documentation.

Of course, there is no doubt that gcc has its bugs. And many of these
occur with the rarer optimisation flags - that's code that has had less
time to mature and had less testing than code that is used more often.
The gcc bug trackers are open to the public, feel free to look through
them or register any bugs you find.


Now, how is this different from any other compiler? The big difference
is that with gcc, everything is open. When people are having trouble,
they will often discuss it in a public mailing list or forum. When a
bug is identified, it is reported in a public bug tracker. With a
commercial compiler, when a user thinks they have hit a bug they will
often talk to the vendor's support staff, who will help identify the
bug, fix it, and perhaps ship out a fixed version to the user and
include the fix in later releases. The end result is similar - the bug
gets found and fixed. But with gcc, everyone (at least, everyone who is
interested!) knows about the bug - with the commercial compiler, other
users are blissfully unaware. Thus people can see that gcc has bugs -
very few of which they would ever meet in practice, while with the
commercial compiler they know of no bugs - and also meet very few in
practice. There are a few commercial compiler vendors that publish
detailed lists of bugs fixed in their change logs or release notes -
these are similar to the lists published with new versions of gcc.


I expect at this point somebody is going to say that commercial vendors
have better testing routines than gcc, and therefore fewer bugs. There
is no objective way to know about the different testing methodologies,
or their effectiveness at finding bugs, so any arguments one way or the
other are futile. There is also such a wide range of commercial tools
that any generalisation is also meaningless. There are commercial tools
that I know have good procedures, though they still have bugs. There
are other commercial tools that I know have far lower testing and
qualification standards than gcc.

Another difference with gcc is to consider the code that is compiled
with it. My guess is that much of the code that people consider "risky
to compile with optimisation" is from open source software, typically
running on *nix machines - after all, this covers a very large
percentage of the use of gcc. It should be remembered that although a
lot of open source software is very good quality, a lot is not. A great
deal of software for desktop use (open source or closed source, it
matters not) is not written with the level of quality that we expect in
embedded systems. There are also lower standards on what is good enough
for use - failing to compile and run cleanly when optimised shows that
the code is broken, but it may still be good enough for the job it does.


