From: Walter Banks on


Jon Kirwan wrote:

> On Sat, 16 Jan 2010 13:01:24 -0800 (PST), -jg
> <jim.granville(a)gmail.com> wrote:
>
> >On Jan 16, 11:18 pm, Walter Banks <wal...(a)bytecraft.com> wrote:
> >
> >> To illustrate this point. Your exact timing example is not a
> >> C compiler limitation but a language limitation. How do
> >> you describe exact timing all paths in a function in an
> >> unambiguous way in C? Exact timing is an application
> >> objective.
> >
> > Something like exact timing would be via local directives, and is
> >going to be very locally dependent.
>
> On some machines (on one extreme, the modern x86
> with branch detection logic, read-around-write data flows,
> reservation stations, all manner of out-of-order execution,
> and multi-level caching), it would be a nightmare to achieve.
> On others, quite achievable in practice. The "requirement"
> would instead be a "goal" for the compiler language.
> Something like what "register" is for a C variable -- a suggestion

.... snipped more supporting remarks.

In a programming language where exact time is a "requirement"
there are several options available to the compiler:

1) Cycle counting on all paths, with the faster paths padded
   with appropriate instructions.

2) Resyncing to a timer at the end of a function so that the
   path time for all paths is constant.

3) Precomputing the next result and posting it when needed.
   (This is routinely the approach in automotive controllers.)

btw, on most processors our compilers can emit per-instruction
cycle counts in the listing file, along with a running
integration of those counts from some marked point in the source.

Regards,

--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com





From: Richard Tobin on
In article <87ska5ezlg.fsf(a)fever.mssgmbh.com>,
Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:

>UNIX(*) has a single type of 'interactive command processor/ simple
>scripting language' and its features are described by an IEEE
>standard.

This is pedantry of the most pointless kind. You're welcome to
your "UNIX(*)", but don't pretend that your comments have anything
to do with the real world.

-- Richard
--
Please remember to mention me / in tapes you leave behind.
From: Jon Kirwan on
On Sun, 17 Jan 2010 12:15:16 -0800 (PST), -jg
<jim.granville(a)gmail.com> wrote:

>On Jan 18, 7:52 am, Jon Kirwan <j...(a)infinitefactors.org> wrote:
>> On Sun, 17 Jan 2010 07:11:53 -0500, Walter Banks
>> <wal...(a)bytecraft.com> wrote:
>> >3) Pre computing the next result and posting the result when
>> >    needed. (This is routinely the approach in automotive
>> >    controllers)
>
>Precomputing can also mean taking a multi-branch-derived answer and
>applying it earliest in an interrupt.
>So each interrupt samples/decides/calculates/stores, but before it
>starts the SW branches, it pops out the answer from the last
>interrupt.
>So you trade off latency for less jitter.
><snip>

Probably would have been slightly better if you'd cited your
post directly to Walter's, rather than back-quoting him from
my post. Just a thought.

Jon
From: Jon Kirwan on
On Sun, 17 Jan 2010 12:15:16 -0800 (PST), -jg
<jim.granville(a)gmail.com> wrote:

><snip>
>'Timer-snap' that Walter mentioned, does not need to wholly consume a
>timer, just have it running.
>Useful when you have too many branches to control...
>
>You read the lower-byte cycle value as a starting value, run all your
>variant branches, and then pad the fastest ones with a timer-derived
>pause.
>Timer granularity is usually less of an issue than SW granularity.
>Getting single-cycle increments in SW usually means multiple paths...
>and your fix-it SW can consume more than the do-it SW ;)
><snip>

In my case, this was something similar to doing I2C in
software. (It was cobbled up for a specific hardware
circumstance, though, and I2C wasn't a possibility here.)

It needed to be _fast_, on the order of 1MHz bit timing, and
couple up multiple asynch units to a resource tied to some
common open-drain lines. We had exactly two pins to use for
asynch arbitration of access as well as for accessing all
communications with the shared resource once arbitration was
complete. The technique I used was similar to what is used
on the x86 cpus over the APIC bus, if you are familiar with
it. In my case _any_ variation at all immediately translated
into much longer 'bit times' since all asynch processors had
to accommodate execution time variations to make sure enough
time had passed. It was the driving 'time cost' factor in
the communications scheme. The code I used was cycle-exact
on both branches. It needed to be to meet the goals.

Jon
From: Jon Kirwan on
On Sun, 17 Jan 2010 17:26:12 -0500, Walter Banks
<walter(a)bytecraft.com> wrote:

>-jg wrote:
>
>> Not sure how you'd 'compiler automate' this ?
>> perhaps insert a start tag, and a series of stop tags,
>> all in the source, and create/maintain/calibrate a whole series of
>> cycle-tables, for the cores your compiler supports. There are over a
>> dozen timing choices on 80C51's alone now.
>> (NOT going to be easy for the compiler to correctly add value-
>> dependent multiple branches, so a pencil is _still_ needed)
>
>We have one advantage in our compilers for this because we
>normally compile directly to machine code. For processors with
>deterministic instruction timing, constant timing is possible
>for the limited set of problems that are themselves deterministic.

I'd imagine that by deferring some of the work involved into
the link process, much could also be done here. I think I read
recently here that GCC 4.5 starts to do more of the significant
optimizations in the link phase (link-time optimization). But
I might have misunderstood what I read.

Jon