From: Oliver Betz on
Chris H wrote:

[...]

>There is no need except for amusement. GCC is a LONG way behind the main
>commercial compilers.

Which compiler(s) would you recommend for the Coldfire and CM3?

Oliver
--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
From: David Brown on
On 21/05/2010 13:39, Oliver Betz wrote:
> David Brown wrote:
>
> [...]
>
>> In my experience, gcc produces very good code for general C (and C++,
>> Ada, Fortran, etc.) for the main 32-bit targets, such as m68k/Coldfire,
>> ARM, MIPS, x86, and PPC as well as the 64-bit targets PPC, MIPS, and amd64.
>
> what I have seen in my tests till now looked good, besides a strange
> re-ordering of instructions making the generated code not faster but
> unreadable (e.g. in the debugger).
>
> And it could be that the re-ordering affects performance when
> accessing slow Coldfire V2 peripherals (consecutive accesses to
> peripherals cost more wait states), but I didn't investigate this yet.
>

Re-ordering is done for many reasons - confusing the debugger is not an
aim, but it is a side-effect! When you want accurate stepping during
debugging, it can be useful to reduce optimisation to -O1 to avoid a
fair amount of the re-ordering.

How much re-ordering affects the running code depends on the target.
For some CPUs, pipelining of instructions is important for speed, so the
compiler does a lot of re-arranging. Typically there is a latency
between starting an instruction and the resulting value being available
in a register. If you can fit an unrelated instruction between the
first instruction and the code using that result, you can make use of
that "dead" time.

> [...]
>
>> One area in which gcc has been poor at compared to commercial compilers
>> is whole-program optimisation (a.k.a. link time optimisation, inter
>> module optimisation, omniscient code generation, etc.). For several
>
> since this affects mainly code size, this is no problem for me. My
> applications are small and time critical, so I need only speed.
>

It affects speed too, depending on how your code is structured. In
particular, with LTO the compiler is able to inline functions across
modules, which is a speed gain. gcc 4.5 is able to do even more fun
things - if you have a function that is often called as "foo(1, x)" and
"foo(2, x)", but never with anything else for the first parameter, it can
effectively re-factor your code into "foo1(x)" and "foo2(x)" - two
specialised functions in which the first parameter is a known constant.
That constant can then be used to optimise the implementation of each
copy of foo(). Typically (though not always) this costs extra code
space, but it can speed up some types of code.
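
Roughly, the transformation amounts to this (a hand-written sketch of
what the compiler can do internally; the function bodies are just for
illustration):

    /* Original: at every call site the first argument is the constant
       1 or the constant 2, never anything else. */
    static int foo(int mode, int x)
    {
        if (mode == 1)
            return x * 3;
        else
            return x - 7;
    }

    /* What the compiler can effectively generate: two specialised
       clones with the constant folded in, so the test on 'mode'
       disappears and each copy can be optimised further. */
    static int foo1(int x) { return x * 3; }
    static int foo2(int x) { return x - 7; }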

> [...]
>
>>> General code generation problems or library quality?
>>
>> Libraries also vary a lot in quality. There are also balances to be
>> made - libraries aimed for desktop use will put more effort into
>> flexibility and standards compliance (such as full IEEE floating point
>> support), while those aimed at embedded system emphasis size and speed.
>
> newlib, uClibc? IMO still bloated for small applications.
>

Yes, these are aimed at larger systems (for example, with code space
of 0.5 MB to 16 MB), or at embedded Linux systems.

>> This is an area where the various commercial gcc vendors differentiate
>> their products.
>
> At least Codesourcery doesn't tell much about specific advantages of
> their libraries.
>

There is a fair amount of information in the documentation - perhaps
there's not much in the marketing material. But you can download the
documentation if you want - you can also download the free version of
the tools, or get an evaluation license.

I don't actually make much use of the standard C library in small
systems, so I can't tell you much about CodeSourcery's implementation.

> And since the libraries have to cover a broad range of applications,
> it might be necessary to compile them with specific settings - who
> provides sources?
>

CodeSourcery gives you the sources, depending on the version of the
license that you buy.

> Oliver

From: Oliver Betz on
David Brown wrote:

[...]

>>> In my experience, gcc produces very good code for general C (and C++,
>>> Ada, Fortran, etc.) for the main 32-bit targets, such as m68k/Coldfire,
>>> ARM, MIPS, x86, and PPC as well as the 64-bit targets PPC, MIPS, and amd64.
>>
>> what I have seen in my tests till now looked good, besides a strange
>> re-ordering of instructions making the generated code not faster but
>> unreadable (e.g. in the debugger).
>>
>> And it could be that the re-ordering affects performance when
>> accessing slow Coldfire V2 peripherals (consecutive accesses to
>> peripherals cost more wait states), but I didn't investigate this yet.
>
>Re-ordering is done for many reasons - confusing the debugger is not an
>aim, but it is a side-effect! When you want accurate stepping during
>debugging, it can be useful to reduce optimisation to -O1 to avoid a
>fair amount of the re-ordering.
>
>How much re-ordering affects the running code depends on the target.
>For some cpus, pipelining of instructions is important for speed, so the

AFAIK not very important for the Coldfire V2, because...

>compiler does a lot of re-arranging. Typically you have a latency
>between starting a instruction, and the resulting value being available
>in a register.

...this doesn't happen.

> If you can fit an unrelated instruction in between the
>first instruction and the code using that result, you can make use of
>that "dead" time.

This applies to accesses to chip peripherals in Coldfire
microcontrollers. After a write access, subsequent write accesses are
delayed for a certain time. If the compiler "collects" these writes
into a back-to-back group (which might happen because they are usually
volatile), the result is much slower than immediate (and therefore
distributed) writes.

But as I wrote, I haven't yet tried to construct such cases.
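
For illustration, something like this is the kind of case I have in
mind (the register names and addresses are invented, not taken from a
real Coldfire peripheral map):

    #define TIMER_CTRL  (*(volatile unsigned short *)0x40000000)
    #define TIMER_LOAD  (*(volatile unsigned short *)0x40000002)

    void start_timer(unsigned short mode, unsigned int ticks)
    {
        unsigned short reload;

        TIMER_CTRL = mode;      /* first peripheral write                 */
        reload = (unsigned short)(ticks / 8);
                                /* unrelated arithmetic; if it stays here,
                                   the wait states after the first write
                                   are hidden by useful work              */
        TIMER_LOAD = reload;    /* if the compiler hoists the arithmetic
                                   above the first write, the two volatile
                                   writes end up back to back and the
                                   second one stalls                      */
    }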

>> [...]
>>
>>> One area in which gcc has been poor at compared to commercial compilers
>>> is whole-program optimisation (a.k.a. link time optimisation, inter
>>> module optimisation, omniscient code generation, etc.). For several
>>
>> since this affects mainly code size, this is no problem for me. My
>> applications are small and time critical, so I need only speed.
>>
>
>It affects speed too, depending on how your code is structured. In
>particular, with LTO the compiler is able to inline functions across
>modules, which is a speed gain. gcc 4.5 is able to do even more fun
>things - if you have a function that is called often as "foo(1, x)" and
>"foo(2, x)", but never anything else for the first parameter, it can
>effectively re-factor your code into "foo1(x)" and "foo2(x)" as two
>functions with constant values. These constant values can then be used

I see. Until now, I have done this manually for relevant functions.
Of course, it would be cleaner to have the compiler do the
optimization.

[...]

>> At least Codesourcery doesn't tell much about specific advantages of
>> their libraries.
>
>There is a fair amount of information in the documentation - perhaps
>there's not much in the marketing details. But you can download the
>documentation if you want - you can also download the free version of
>the tools as well as getting an evaluation license.

I did so, but the evaluation version contains the same documentation
as newlib (!). The "Getting Started" document tells me: "CSLIBC is
derived from Newlib but has been optimized for smaller code size on
embedded targets. Additional performance improvements will be added in
future releases".

Well, I had a brief look at newlib. IMO the "one for all" approach and
the attempt to be compatible with every non-standard environment as
well lead to rather convoluted code.

>I don't actually make much use of the standard C library in small
>systems, so I can't tell you much about CodeSourcery's implementation.

This seems to be the more efficient approach. Likely I can implement
the needed (trivial) functions in less time than it takes to fiddle
with newlib (and its descendants), uClibc, etc.
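
E.g. something at this level is usually all I need - a minimal
byte-wise sketch, with no attempt at the word-wise copying a tuned
library implementation would do:

    #include <stddef.h>

    /* Minimal memcpy replacement - small and good enough when the
       copied blocks are short. */
    void *memcpy(void *dest, const void *src, size_t n)
    {
        unsigned char *d = dest;
        const unsigned char *s = src;

        while (n--)
            *d++ = *s++;
        return dest;
    }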

>> And since the libraries have to cover a broad range of applications,
>> it might be necessary to compile them with specific settings - who
>> provides sources?
>
>CodeSourcery gives you the sources, depending on the version of the
>license that you buy.

I'm not sure about this (see earlier in this thread), but I didn't yet
ask them.

Oliver
--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
From: Albert van der Horst on
In article <9O2dndwkDrE9h2vWnZ2dnUVZ8vydnZ2d(a)lyse.net>,
David Brown <david.brown(a)hesbynett.removethisbit.no> wrote:

<SNIP>

>
>If this particular story is referring to software development, then it's
>a different matter. Trying to make use of existing open source software
>in the development of your own products can be a legal minefield,
>especially if you want to mix and match code with different licenses.
>And in this context, people often consider the GPL to be very
>restrictive, especially compared to BSD licenses.

Is this a deliberate misrepresentation?
Incorporating open source software in your own software, especially
if you want to circumvent the spirit of the license, can be tricky.

"Making use of existing open source software" - such as using Linux
and a gcc compiler to develop software for an embedded system - has
in general fewer restrictions, is easier, involves fewer risks of
breaking license counts (this happens sometimes despite due
vigilance), etc. etc.

I once ported an embedded 68K system to gcc. 1]
I ended up scrapping the (supposedly high quality) SUN C compiler
we had bought a license for in order to build the 68K cross-compiler.
(In order not to hamper other developments we wanted an extra
license.)
We already had a legal SUN compiler, but it was less pain to install
gcc on the SUN than to get even the license manager working properly
on a cluster with those two licenses.
Now who is laying down a "legal minefield"?
Building gcc took a fraction of the time and effort needed to get even
a service engineer on the phone.
The resulting gcc 68K cross-compiler generated the exact same code,
though the compiler itself was not as fast. Even if a total build took
10 minutes instead of 5, who cares? (That would be a dramatic influence
of a compiler on a total build process.)

Regarding quality: the 68K gcc was a dramatic improvement, and plenty
good enough to shrink the code by 30%, which allowed adding new
features to EPROM-restricted hardware.
A 10% difference in gcc "performance" (read: speed of the generated
code) is a big deal in advertisements, but much less so in practice.
(In this project we didn't care and didn't measure performance.
The mechanics determined the speed.)

Groetjes Albert

1] There were no other options than gcc.
I needed to change the C compiler to adapt it to existing
assembler libraries, so only a source license would do.
Oh, and I investigated how to get a source license for the
original compiler. I gave up because I didn't even manage to
establish who owned the rights to it.
Talk about "legal minefields", sheesh!

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert(a)spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

From: David Brown on
Albert van der Horst wrote:
> In article <9O2dndwkDrE9h2vWnZ2dnUVZ8vydnZ2d(a)lyse.net>,
> David Brown <david.brown(a)hesbynett.removethisbit.no> wrote:
>
> <SNIP>
>
>> If this particular story is referring to software development, then it's
>> a different matter. Trying to make use of existing open source software
>> in the development of your own products can be a legal minefield,
>> especially if you want to mix and match code with different licenses.
>> And in this context, people often consider the GPL to be very
>> restrictive, especially compared to BSD licenses.
>
> Is this a deliberate misrepresentation?

I didn't think it was a misrepresentation, and if I was unclear then it
certainly wasn't deliberate. Re-reading what I wrote, I can see how it
could be misinterpreted, and I thank you for clarifying it. I was
trying to say the same thing as you do below.

> Incorporating open source software in your own software, especially
> if you want to circumvent the spirit of the license, can be tricky.
>

Yes, that's correct. It is clearly /possible/ to incorporate open
source software in your own software. You just need to follow the
license requirements. And many pieces of open source software aimed at
embedded targets come with very developer-friendly licences to make it
easier. However, some are more awkward - you have to check carefully.

Of course, the same thing applies when you are incorporating closed
source software in your own software. While commercially licensed
libraries and code seldom have restrictions on the license for code
that links to them (unlike the GPL, for example), and seldom require
prominent copyright notices (unlike some BSD licenses), they all come
with a license carrying legal requirements and restrictions. This might,
for example, restrict you from selling the code on to third parties as
part of another library, or restrict the countries you can export your
product to. There might be complicated requirements for royalties,
auditing, developer PC node locking, etc. The issues are different from
those of open source software, but there are issues nonetheless.

> "Making use of existing open source software" like for instance
> using Linux and a gcc compiler to develop software for an
> embedded system has in general fewer restrictions, is easier,
> involves less risks of breaking license counts (this happens
> sometimes despite due vigilance) etc. etc.
>

Use of open source developer programs like gcc is very seldom a problem
(unless you have company management with bizarre company rules, of
course). It is very difficult to violate typical open source licenses
by simply /using/ the programs. As in your example below, this is in
contrast to commercial programs, some of which can be particularly
awkward to use legally and correctly within their licenses.

> I once ported an embedded 68K system to gcc. 1]
> I ended up scrapping the (supposedly high quality) SUN C-compiler
> we bought a license for in order to build the 68K cross-compiler.
> (In order to not hamper other developments we wanted an extra
> license.)
> This was because we already had a legal SUN compiler, but it
> was less pain to install a gcc SUN compiler than get even the
> license manager working on a cluster properly with those two licenses.
> Now who is laying down a "legal minefield"?
> Building gcc took a fraction of the time and effort to get even a
> service engineer on the phone.
> The resulting gcc 68K cross compiler generated the exact same code,
> but not so fast. Even if a total build would be 10 minutes instead
> of 5, who cares? (That would be a dramatic influence of a compiler
> on a total build process.)
>

I've seen similar cases where using gcc was simply much faster and
easier than using a commercial compiler. I've also seen cases where
getting gcc working took more time and effort than getting a commercial
tool into action. All one can say for sure is that there is no easy way
to judge which would be the best tool for a given job - a high price is
no indication of quality or time-saving, just as a zero purchase price
is no indication of low real-world costs.

> Regards quality. The 68K gcc was a dramatic improvement and
> plenty good enough to shrink the code by 30% which meant
> allowing adding new features to EPROM restricted hardware.
> A difference of 10% of gcc in "performance" (read speed)
> is a big deal in advertisements, but much less so in practice.
> (In this project we didn't care and didn't measure performance.
> Mechanics was determining the speed.)
>

My own experience with gcc for the 68k is similar - it's of similar code
generation quality to the modern big-name commercial compiler I've
compared it to (and much better than the older big-name commercial
compiler I used previously). The balance between code size and code
speed varies a little, and the techniques for squeezing the very best
out of the code are compiler dependent, but certainly gcc is a fully
appropriate compiler for good code on the 68k.

Slower run-time performance for the compiler itself doesn't come as a
big surprise. gcc is built up of separate parts rather than as a single
speed-optimised tool. Part of this comes from its *nix heritage - if you
use gcc on a Windows machine it can be noticeably slower than on a *nix
machine, simply because process creation and communication are slower
on Windows.

> Groetjes Albert
>
> 1] There were no other options than gcc.
> I needed to change the c-compiler to adapt to existing
> assembler libraries. So only a source license would do.
> Oh, and I investigated how to get a source license on the
> original compiler. I gave up because I didn't even manage to
> establish who owned the rights to this compiler.
> Talking of "legal minefields", shees!
>
> --