From: Vladimir Vassilevsky on


John Larkin wrote:

> I don't like interrupts. The state of a system can become
> unpredictable if important events can happen at any time. A
> periodically run, uninterruptable state machine has no synchronization
> problems. Interrupts to, say, put serial input into a buffer, and
> *one* periodic interrupt that runs all your little state blocks, are
> usually safe. Something like interrupting when a switch closes can get
> nasty.
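
A minimal sketch of the scheme described above, assuming the timer
ISR only sets a tick flag and the task step functions are hypothetical
placeholders:

#include <stdint.h>

static volatile uint8_t tick;            /* set by the periodic timer ISR */

void timer_isr(void)                     /* the one periodic interrupt */
{
    tick = 1;
}

/* hypothetical per-task state machines, each advanced one step per tick */
static void uart_task_step(void)  { /* poll the Rx buffer, advance states */ }
static void motor_task_step(void) { /* ramp, settle, fault states, etc.   */ }
static void ui_task_step(void)    { /* debounce switches, update display  */ }

void main_loop(void)
{
    for (;;) {
        while (!tick)
            ;                            /* idle until the next tick */
        tick = 0;
        /* each step runs to completion, so the blocks never preempt
           each other and can share data without locking */
        uart_task_step();
        motor_task_step();
        ui_task_step();
    }
}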

Like everything else, this approach has its limits.

1. Once the number of states in the state machine grows beyond a
hundred or so, the code becomes very difficult to manage. The
dependencies keep growing, so changing anything can be a pain in the
butt, and it is almost impossible to verify every possible transition
between states. For that reason it is very easy to overlook something.

2. There are kinds of tasks which call for multithreading: caching,
hashing, lengthy calculations, vector graphics and such. Those tasks
can be organized as state machines, but it gets messy, as the sketch
below suggests.
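
For illustration, here is roughly what chopping one such job into a
state machine looks like: a hypothetical hash over a large buffer,
with every intermediate value pulled out into a context structure.

#include <stddef.h>
#include <stdint.h>

enum hash_state { HASH_IDLE, HASH_RUNNING, HASH_DONE };

struct hash_ctx {
    enum hash_state state;
    const uint8_t  *data;                /* buffer being hashed        */
    size_t          pos, len;            /* progress through the data  */
    uint32_t        acc;                 /* running hash value         */
};

#define CHUNK 256                        /* bytes processed per call   */

/* Called once per tick; does a bounded slice of work and returns. */
void hash_step(struct hash_ctx *c)
{
    switch (c->state) {
    case HASH_RUNNING: {
        size_t end = c->pos + CHUNK;
        if (end > c->len)
            end = c->len;
        for (; c->pos < end; c->pos++)
            c->acc = c->acc * 31u + c->data[c->pos];
        if (c->pos == c->len)
            c->state = HASH_DONE;
        break;
    }
    case HASH_IDLE:
    case HASH_DONE:
        break;                           /* nothing to do this tick    */
    }
}

With a thread, the same job would be an ordinary loop; as a state
machine, every local variable that must survive between ticks has to
move into the context, which is where the mess starts.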

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

http://www.abvolt.com
From: nospam on
"Didi" <dp(a)tgi-sci.com> wrote:

>Oh I am sure nobody can even dream of 1 ms latency with Windows.
>Some time ago, when they had only NT, a guy told me 22 ms was
>the best achievable (he was living in a Windows world, though, so
>I don't know if this was possible or wishful thinking).

There is no inherent reason for high interrupt latencies on PCs running
Windows.

I did quite detailed testing on a fast PC running Windows 2000 Server:
an edge on an input pin triggered an interrupt, which flipped an
output pin.

The delay between input and output edges was nominally 17 us (-0/+4 us),
occasionally stretching to +15 us during intense disc activity. The
system was quite happy taking interrupts at 10 kHz.

That of course was interrupt latency to a driver interrupt handler, not to
an associated DPC or back through the scheduling system to application
level event handlers.

You do rely on interrupt handlers in other drivers completing
promptly; some, particularly network card drivers, are poor in this
respect.
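
Schematically, a Windows driver splits the work like this. A rough
WDM-style fragment (not the actual test driver), where the ISR could
flip the output pin and everything slower is deferred to the DPC:

#include <wdm.h>

typedef struct _DEV_EXT {
    KDPC Dpc;               /* initialized with KeInitializeDpc at setup */
    /* ... device registers, pending request, etc. ... */
} DEV_EXT, *PDEV_EXT;

/* Interrupt service routine: runs at device IRQL and is kept short -
   acknowledge the device, drive the output pin, defer the rest. */
BOOLEAN EdgeIsr(PKINTERRUPT Interrupt, PVOID Context)
{
    PDEV_EXT ext = (PDEV_EXT)Context;

    (void)Interrupt;
    /* acknowledge the interrupt and flip the output pin here */
    KeInsertQueueDpc(&ext->Dpc, NULL, NULL);   /* queue the deferred work */
    return TRUE;                               /* the interrupt was ours  */
}

/* Deferred procedure call: runs later at DISPATCH_LEVEL, after other
   pending ISRs, which is where the extra latency comes from. */
VOID EdgeDpc(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    (void)Dpc; (void)Arg1; (void)Arg2; (void)Context;
    /* complete the I/O request, signal the waiting application, etc. */
}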

--
From: Didi on
> The delay between input and output edges was nominally 17 us (-0/+4 us),
> occasionally stretching to +15 us during intense disc activity. The
> system was quite happy taking interrupts at 10 kHz.

While 17 us is a bit long for a GHz-range CPU, it is a sane figure.
I have seen people complain about hundreds of milliseconds of response
time to the INT line of ATA drives, but that may have been on Windows
95/98, which were... well, nothing one could call "working".

> That of course was interrupt latency to a driver interrupt handler, not to
> an associated DPC or back through the scheduling system to application
> level event handlers.

Well, plain user experience is enough to see the latencies there;
they are in the seconds range, sometimes tens of seconds. They may
learn how to do this in another 10 years' time...

Dimiter

From: Paul Keinanen on
On 9 Feb 2007 06:13:39 -0800, "Didi" <dp(a)tgi-sci.com> wrote:

>> tell me about it. A couple of years back I developed some testers that
>> used a PC to talk to a range of little blue I/O boxes. The PC(s) were >=
>> 1 GHz pentiummyjigs, and our PC guru (who is good) couldn't even get a
>> guaranteed 1 ms interrupt out of the poxy OS.

While you cannot get a _guaranteed_ 1 ms response from standard
Windows or Linux (or in fact from any system with virtual memory,
without locking all referenced pages into memory), you can get it for
perhaps 95 % to 99 % of all events.

One way to test response times is to run a half-duplex slave protocol
in the system under test. This exercises the latencies from the
serial card into the kernel-mode device driver, up into the user-mode
protocol code and then back out to the line. Observe the pause between
the last character of the request and the first character of the
response with an oscilloscope or serial line analyzer. With 1 ms
analyzer time stamp resolution, the two-way latency was somewhere
between 0 and 2 ms (or 1-2 character times at 9600 bit/s).
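
A rough sketch of the PC-side slave using the Win32 serial API; the
port name and one-byte framing are made up, and a real protocol would
of course parse and check the request:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "cannot open COM1\n");
        return 1;
    }

    DCB dcb = {0};
    dcb.DCBlength = sizeof dcb;
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    COMMTIMEOUTS to = {0};               /* all zero: reads block until */
    SetCommTimeouts(h, &to);             /* the requested byte arrives  */

    for (;;) {
        unsigned char req, rsp = 0x55;   /* dummy one-byte response     */
        DWORD n;
        /* the external analyzer measures the gap between the last
           request character and this response character */
        if (ReadFile(h, &req, 1, &n, NULL) && n == 1)
            WriteFile(h, &rsp, 1, &n, NULL);
    }
}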

>Oh I am sure nobody can even dream of 1 ms latency with Windows.
>Some time ago, when they had only NT, a guy told me 22 ms was
>the best achievable (he was living in a Windows world, though, so
>I don't know if this was possible or wishful thinking).

A few years ago I did some tests with NT4 on a 166 MHz machine, and
the 20 ms periodic wakeup occurred within +/-2 ms more than 99 % of
the time, provided that no user interaction happened at the same time.
With user interaction, the worst-case wakeup observed was about 50 ms.

Of course, any application using SetTimer can be delayed by seconds,
if the user grabs a window and shakes it all over the screen :-).
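
A rough sketch of that kind of periodic-wakeup test, using a waitable
timer and the performance counter; this is an illustration, not the
original NT4 test code:

#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")        /* timeBeginPeriod/timeEndPeriod */

int main(void)
{
    LARGE_INTEGER freq, t0, t;
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -200000LL;            /* relative: 20 ms in 100 ns units */

    timeBeginPeriod(1);                  /* request 1 ms timer granularity */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    SetWaitableTimer(timer, &due, 20, NULL, NULL, FALSE);  /* 20 ms period */

    for (int i = 1; i <= 1000; i++) {
        WaitForSingleObject(timer, INFINITE);
        QueryPerformanceCounter(&t);
        /* drift of this wakeup from its ideal time of 20*i ms */
        double ms = (double)(t.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
        printf("wakeup %4d: drift %+7.2f ms\n", i, ms - 20.0 * i);
    }
    timeEndPeriod(1);
    CloseHandle(timer);
    return 0;
}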

Paul

From: Paul Keinanen on
On Fri, 9 Feb 2007 14:54:48 +0000 (UTC), kensmith(a)green.rahul.net (Ken
Smith) wrote:

>In article <1171028460.691147(a)ftpsrv1>, Terry Given <my_name(a)ieee.org> wrote:
>>Didi wrote:
>[....]
>>tell me about it. A couple of years back I developed some testers that
>>used a PC to talk to a range of little blue I/O boxes. The PC(s) were >=
>>1 GHz pentiummyjigs, and our PC guru (who is good) couldn't even get a
>>guaranteed 1 ms interrupt out of the poxy OS.
>
>
>There are special drivers for serial ports that get about that sort of
>timing.
>
>
>The trend these days is to offload the work from the PC to some external
>box. This way you can have the PC only set the parameters and run the
>user interface. The actual work is done by a much more capable processor
>such as an 8051.

While there are protocol-specific intelligent I/O processors that do
all the protocol handling, RocketPort 8-32 line multiplexer cards, for
instance, simply implement deep Rx and Tx FIFOs for each channel in an
ASIC. No interrupts are used; instead the driver scans all Rx FIFOs
once every 1-10 ms, and each FIFO is emptied at each scan. The Tx side
works in a similar way.

The latency with such cards does not depend so much on the number of
active channels or the number of bytes per channel as on the scan
period. So if the scan period is 10 ms, the Rx-processing-Tx two-way
latency is 10-20 ms regardless of the number of lines. At 115200
bit/s, about 120 characters can arrive during each 10 ms scan period.
However, if the received message ends just after the previous scan,
there can be a pause of more than 100 character times before the
response is sent.
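
Schematically, such a scan-driven driver looks something like the
sketch below; the register layout and callbacks are made up, not the
actual RocketPort driver. The worst-case two-way latency is set by the
scan period, not by the traffic on any single line:

#include <stdint.h>

#define NUM_PORTS 32

struct port_regs {                       /* hypothetical ASIC registers  */
    volatile uint16_t rx_count;          /* bytes waiting in the Rx FIFO */
    volatile uint8_t  rx_data;           /* reading pops one Rx byte     */
    volatile uint16_t tx_space;          /* free bytes in the Tx FIFO    */
    volatile uint8_t  tx_data;           /* writing pushes one Tx byte   */
};

extern struct port_regs *port[NUM_PORTS];  /* mapped by the real driver */

/* callbacks into the per-line buffers of the OS serial layer */
void rx_deliver(int line, uint8_t byte);
int  tx_next(int line, uint8_t *byte);   /* 0 when the Tx queue is empty */

/* Called from a periodic timer every 1-10 ms; no interrupts involved. */
void scan_all_ports(void)
{
    for (int line = 0; line < NUM_PORTS; line++) {
        struct port_regs *p = port[line];
        uint8_t b;

        while (p->rx_count)              /* drain the whole Rx FIFO */
            rx_deliver(line, p->rx_data);

        while (p->tx_space && tx_next(line, &b))
            p->tx_data = b;              /* top up the Tx FIFO      */
    }
}

A request that arrives just after one scan is not seen until the next
scan, and its reply has to wait for the scan after that to go out,
which is where the one-to-two scan period figure above comes from.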

Paul