From: David Brown on
Richard Tobin wrote:
> In article <87ska5ezlg.fsf(a)fever.mssgmbh.com>,
> Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:
>
>> UNIX(*) has a single type of 'interactive command processor/ simple
>> scripting language' and its features are described by an IEEE
>> standard.
>
> This is pedantry of the most pointless kind. You're welcome to
> your "UNIX(*)", but don't pretend that your comments have anything
> to do with the real world.
>

Actually, his comment /does/ have a lot to do with the real world - it
was just very badly expressed. There is a POSIX standard for shells,
which gives a common base for almost all shells in the *nix (Linux,
BSD, "real" Unix, etc.) world. Most shells have features beyond the
POSIX base, and those are often incompatible, but if you stick to the
POSIX subset your scripts should work under any shell.

Of course, this is getting /way/ off topic for this thread...
From: David Brown on
Jon Kirwan wrote:
> On Sun, 17 Jan 2010 17:26:12 -0500, Walter Banks
> <walter(a)bytecraft.com> wrote:
>
>> -jg wrote:
>>
>>> Not sure how you'd 'compiler automate' this ?
>>> perhaps insert a start tag, and a series of stop tags,
>>> all in the source, and create/maintain/calibrate a whole series of
>>> cycle-tables, for the cores your compiler supports. There are over a
>>> dozen timing choices on 80C51's alone now.
>>> (NOT going to be easy for the compiler to correctly add value-
>>> dependant multiple branches, so a pencil is _still_ needed)
>> We have one advantage in our compilers for this because we
>> normally compile directly to machine code. For processors with
>> deterministic timing constant timing is possible for the limited
>> set of problems whose timing is deterministic.
>
> I'd imagine that by deferring some of the work involved into
> the link process, much can also be done here. I think I read
> recently here that GNU GCC, v4.5, starts to do more of the
> significant optimizations in the link phase. But I might
> have misunderstood what I read.
>

gcc 4.5 has merged the experimental LTO (link-time optimisation) branch
into the mainline. Such optimisations are not about getting exact,
predictable or consistent timing - they are about getting the fastest
and/or smallest code. As such, using LTO would probably make it harder
to get deterministic timing.

The basic idea of LTO is that when the compiler compiles a C (or C++,
Ada, whatever) file, it saves a partly digested internal tree to the
object file as well as the generated object code. When you later link a
set of object files (or libraries) that have this LTO code, the linker
passes the LTO code back to the compiler for final code generation.
The compiler can then apply cross-module optimisations (such as
inlining, constant propagation, code merging, etc.) across these
separately partially-compiled modules.

In other words, it is a very flexible form of whole program
optimisation, since it works with libraries, separately compiled modules
(no need to have the whole source code on hand), different languages,
and it can work step-wise for very large programs as well as for small
programs.

Another feature of gcc 4.5 that is more directly relevant here is that
you can now specify optimisation options for particular functions
directly in the source code. Thus you can have your timing-critical
bit-bang function compiled with little or no optimisation to be sure you
get the same target code each time, while the rest of the module can be
highly optimised as the compiler sees fit.
From: Pascal J. Bourguignon on
David Brown <david(a)westcontrol.removethisbit.com> writes:

> Richard Tobin wrote:
>> In article <87ska5ezlg.fsf(a)fever.mssgmbh.com>,
>> Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:
>>
>>> UNIX(*) has a single type of 'interactive command processor/ simple
>>> scripting language' and its features are described by an IEEE
>>> standard.
>>
>> This is pedantry of the most pointless kind. You're welcome to
>> your "UNIX(*)", but don't pretend that your comments have anything
>> to do with the real world.
>>
>
> Actually, his comment /does/ have a lot to do with the real world - it
> was just very badly expressed. There is a posix standard for shells,
> which gives a standard base for almost all shells in the *nix (Linux,
> BSD, "real" unix, etc.) world. Most shells have features beyond the
> posix base, and those are often incompatible, but if you stick to the
> posix subset your scripts should work under any shell.

You changed the context. It wasn't scripts, it was interactive use.

You're forgetting chsh, and the fact that not all shells are designed
to be somewhat compatible with POSIX shell.


> Of course, this is getting /way/ off topic for this thread...

Let's put it back on-topic:

chsh /usr/bin/emacs

Et voilà! Instant "word" processor shell...

--
__Pascal Bourguignon__ http://www.informatimago.com/
From: Walter Banks on


David Brown wrote:

> Jon Kirwan wrote:
> > I'd imagine that by deferring some of the work involved into
> > the link process, much can also be done here. I think I read
> > recently here that GNU GCC, v4.5, starts to do more of the
> > significant optimizations in the link phase. But I might
> > have misunderstood what I read.
> >
>
> gcc 4.5 has merged the experimental LTO (link-time optimisation) branch
> of gcc into the mainline. Such optimisations are not about getting
> exact, predictable or consistent timing - it's about getting the fastest
> and/or smallest code. As such, using LTO would probably make it harder
> to get deterministic timing.
>
> The basic idea of LTO is that when the compiler compiles a C (or CPP,
> Ada, whatever) file, it saves a partly digested internal tree to the
> object file as well as the generated object code. When you later link a
> set of object files (or libraries) that have this LTO code, the linker
> passes the LTO code back to the compiler again for final code
> generation. The compiler can then apply cross-module optimisations
> (such as inlining, constant propagation, code merging, etc.) across
> these separately partially-compiled modules.

All of our compilers can do absolute code generation, where there is
no link step; the compiler can bring in pre-compiled objects or
libraries if needed. We can also compile to object files and link,
in which case the linker will call the compiler's code generator to
perform application-level optimization.

Neither path uses an intermediate assembler stage.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com


From: Vladimir Vassilevsky on

Just a couple of things that would be good to have:

1. A tool which combines all of the C/C++ source code into one temporary
file prior to compilation, resolving name conflicts automatically, so
that the compiler could optimize through the whole project.

2. A function attribute with the meaning opposite to "inline", so that a
function with this attribute will never be inlined by the compiler. Why:
automatic inlining is great, but different functions may need to be
placed in different memory sections. If the compiler inlines a function
automatically, the actual code could end up in the wrong section.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com