From: 42Bastian Schick on
On Mon, 26 Oct 2009 13:30:42 -0500, Vladimir Vassilevsky
<nospam(a)nowhere.com> wrote:

>> A common approach to providing an atomic operation.
>> Some CPUs don't need this.
>
>Some OSes claim that they never disable interrupts, however from what I
>have seen it was all very impractical.

And some explain it with the longest un-interruptible instruction, such
as one that saves or restores a bunch of registers.
--
42Bastian
Do not email to bastian42(a)yahoo.com, it's a spam-only account :-)
Use <same-name>@monlynx.de instead !
From: Vladimir Vassilevsky on


42Bastian Schick wrote:
> On Mon, 26 Oct 2009 12:19:02 -0700, D Yuniskis
> <not.going.to.be(a)seen.com> wrote:
>
>
>>FreeRTOS info wrote:
>>
>>>D Yuniskis wrote:
>>>
>>>>and then "schedule" a deferred activation. So, the jiffy
>>>>terminates as expected. The interrupted routine (probably
>>>>an OS action) finishes up what it was working on, then,
>>>>examines a flag to see if it can "simply return" or if it has
>>>>to process some deferred "activity"
>>>
>>>....and how are you protecting access to the flag - or are you assuming
>>>the hardware supports atomic read-modify-writes on variables - or that
>>>the hardware supports atomic semaphore type operations?
>>
>>Assuming you don't have a second processor...
>>
>>ever hear of a "Test and Set" instruction?
>
>
> "Test and set" or how you name it is impractical in a interrupt
> context. You just can't loop and wait for the semaphore.

Yes. Also, test-and-set is not suitable when manipulating stack and frame
pointers or dealing with nested interrupts.
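For reference, here is what the usual test-and-set acquire looks like
(sketched with C11's atomic_flag; any lock-free test-and-set primitive
would do). The busy-wait loop is exactly the part you cannot afford in
an interrupt handler, because the thread that holds the flag cannot run
again until the ISR returns:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;               /* spin: fine for a thread, fatal in an ISR */
}

void release(void)
{
    atomic_flag_clear(&lock);
}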


Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
From: David Brown on
42Bastian Schick wrote:
> On Mon, 26 Oct 2009 21:35:11 +0100, David Brown
> <david.brown(a)hesbynett.removethisbit.no> wrote:
>
>> I suspect that Richard has looked very carefully through a very large
>> number of instruction sets looking for exactly that sort of instruction...
>>
>> There are certainly plenty of architectures that don't have test-and-set
>> instructions, or anything similar. As a general rule, test-and-set type
>> instructions don't fit well with the ethos of RISC, so RISC processors
>
> Hmm, I wonder which.
>
>> typically don't have such instructions. They either rely on disabling
>> interrupts (common for microcontrollers and smaller processors, where
>> this is a simple task), or have more sophisticated mechanisms such as
>> "reservations" used by the PPC processors.
>
> Yeah, but don't rely on it. I had to rewrite large parts of my RTOS
> because Freescale restricted the use of the reservation (e.g. in the
> MPC55xx) to the point that it is not usable anymore.

Reservations are hardware intensive to implement, unless you are willing
to pay extra memory latency. But they scale well for multi-processor
systems, making them a good solution for "standard" PPC processors. The
PPC core in the MPC55xx is more like a microcontroller core than a "big"
processor core, and Freescale probably saved a fair amount of silicon
and complexity by dropping reservations so that you must use traditional
interrupt disabling. I think it is the right choice for these devices,
but then I haven't been trying to write optimal RTOS code that is
portable across different PPC devices!
From: Stefan Reuther on
D Yuniskis wrote:
> Vladimir Vassilevsky wrote:
>> Some OSes claim that they never disable interrupts, however from what
>> I have seen it was all very impractical. Once you have more or less
>> sophisticated structure of threads and interrupts, you've got to have
>> critical parts with the interrupts disabled.
>
> You only need to disable interrupts if an interrupt context
> can access those "shared objects" *without* observing whatever
> other "mutex" mechanism you are using.
>
> It *can* be done. But, it is a lot trickier than just
> a tiny little critical region.

Yep.

> E.g., if the jiffy comes along (perhaps the most notable
> active element that *would* be interrupt spawned and asynchronously
> compete for access to those structures), it has to notice that a
> critical region has been entered (by whatever it has interrupted!)
> and then "schedule" a deferred activation. So, the jiffy
> terminates as expected. The interrupted routine (probably
> an OS action) finishes up what it was working on, then,
> examines a flag to see if it can "simply return" or if it has
> to process some deferred "activity" (i.e. those things that the
> jiffy *would* have done had it been fortunate enough to come
> along "outside" that critical region.)

The problem here is how you "schedule" the deferred activity. You can
make an array of sig_atomic_t variables, one for each possible activity:
the interrupt sets them, and the kernel exit path checks and clears them.
But this obviously does not scale.
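A minimal sketch of that flag-per-activity scheme (NUM_ACTIVITIES and
handle_activity are illustrative names, not from any particular kernel):

#include <signal.h>

#define NUM_ACTIVITIES 4                /* one slot per deferrable activity */

extern void handle_activity(int i);     /* hypothetical worker routine */

static volatile sig_atomic_t pending[NUM_ACTIVITIES];

/* Interrupt context: just note that the activity is wanted. */
void isr_request(int i)
{
    pending[i] = 1;
}

/* Kernel exit path: run whatever became pending while we were busy.
 * Clearing a flag *before* calling its handler means a request that
 * arrives during the handler is caught on the next exit, not lost. */
void kernel_exit_check(void)
{
    for (int i = 0; i < NUM_ACTIVITIES; i++) {
        if (pending[i]) {
            pending[i] = 0;
            handle_activity(i);
        }
    }
}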

When you make something generic, you'll start needing linked lists in
which to put the deferred actions. So you now need some kind
of atomic link-creation or swap routine. Okay, this can also be
implemented without disabling interrupts, by having the interrupt
routine detect whether it interrupted the atomic routine. It's possible,
but it's easy to get wrong.

It's much simpler to just wrap the three statements in a CLI/STI pair.
In particular when your processor has multi-cycle instructions which
implicitly block interrupts for a dozen cycles while they execute - so
why should those be permitted while a three-cycle CLI/STI is not?
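For illustration, the CLI/STI version of such an atomic link-creation
really is that short (disable_interrupts()/enable_interrupts() stand in
for whatever intrinsic or inline assembly the toolchain provides):

struct action {
    struct action *next;
    void (*fn)(void);
};

static struct action *deferred_head;

extern void disable_interrupts(void);   /* CLI or equivalent */
extern void enable_interrupts(void);    /* STI or equivalent */

void defer(struct action *a)
{
    disable_interrupts();               /* the short critical region */
    a->next = deferred_head;
    deferred_head = a;
    enable_interrupts();
}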


Stefan

From: D Yuniskis on
Stefan Reuther wrote:
> D Yuniskis wrote:
>
>> E.g., if the jiffy comes along (perhaps the most notable
>> active element that *would* be interrupt spawned and asynchronously
>> compete for access to those structures), it has to notice that a
>> critical region has been entered (by whatever it has interrupted!)
>> and then "schedule" a deferred activation. So, the jiffy
>> terminates as expected. The interrupted routine (probably
>> an OS action) finishes up what it was working on, then,
>> examines a flag to see if it can "simply return" or if it has
>> to process some deferred "activity" (i.e. those things that the
>> jiffy *would* have done had it been fortunate enough to come
>> along "outside" that critical region.)
>
> The problem here is how you "schedule" the deferred activity. You can
> make an array of sig_atomic_t variables, one for each possible activity:
> the interrupt sets them, and the kernel exit path checks and clears them.
> But this obviously does not scale.

Append a pointer to the ASR that needs to be deferred to
the end of a list of "deferred procedure calls". When the
ISR returns, the OS schedules this "list" of DPCs (in
sequential order).

Of course, you still have to guarantee that you have enough
"overall" time to handle the tasks at hand, lest this list
grow indefinitely. (i.e., your design has to "work"! :> )
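A minimal sketch of such a DPC list, assuming the append runs in
interrupt context (so same-priority interrupts are already masked) and
the drain briefly masks interrupts around each unlink; dpc_append and
dpc_run_all are illustrative names, not from any particular OS:

typedef void (*asr_fn)(void *arg);

typedef struct dpc {
    struct dpc *next;
    asr_fn      fn;
    void       *arg;
} dpc_t;

static dpc_t *dpc_head, *dpc_tail;

extern void disable_interrupts(void);
extern void enable_interrupts(void);

/* Interrupt context: the critical region was busy, so queue the ASR. */
void dpc_append(dpc_t *d)
{
    d->next = NULL;
    if (dpc_tail)
        dpc_tail->next = d;
    else
        dpc_head = d;
    dpc_tail = d;
}

/* Exit path of the interrupted OS routine: drain the list in order.
 * Each unlink is done with interrupts briefly off so a concurrent
 * append never sees a half-updated head/tail pair. */
void dpc_run_all(void)
{
    for (;;) {
        disable_interrupts();
        dpc_t *d = dpc_head;
        if (d) {
            dpc_head = d->next;
            if (dpc_head == NULL)
                dpc_tail = NULL;
        }
        enable_interrupts();

        if (d == NULL)
            break;
        d->fn(d->arg);
    }
}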

> When you make something generic, you'll start needing linked lists in
> which to put the deferred actions. So you now need some kind
> of atomic link-creation or swap routine. Okay, this can also be
> implemented without disabling interrupts, by having the interrupt
> routine detect whether it interrupted the atomic routine. It's possible,
> but it's easy to get wrong.

This last point is why folks seem to resort to the more heavy-handed
approach of unilaterally disabling interrupts for *everything*
that is "hard" or "easy to get wrong".

> It's much simpler to just wrap the three statements in a CLI/STI pair.
> In particular when your processor has multi-cycle instructions which
> implicitly block interrupts for a dozen cycles while they execute - so
> why should those be permitted while a three-cycle CLI/STI is not?

There is nothing wrong with disabling interrupts! But, like
everything else, you have to understand *why* you are doing
it and when it is *appropriate* to do so.

In my experience, it seems that people turn interrupts off
"too early" in a code fragment and turn them back on "too late".
Often, if you look at the code, you can see things that could
have been moved outside of the critical region (sometimes with
some added complexity) to reduce the size of that region.
This usually results in "needing more CPU" than you *really* do.
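A small illustration of keeping the region tight: do the expensive
preparation with interrupts enabled and mask them only around the
unavoidable pointer updates (the names here are made up for the
example):

#include <string.h>

struct msg {
    struct msg *next;
    char        payload[32];
};

static struct msg *msg_head;

extern void disable_interrupts(void);
extern void enable_interrupts(void);

void post_message(struct msg *m, const char *text)
{
    /* All of this can run with interrupts enabled... */
    strncpy(m->payload, text, sizeof m->payload - 1);
    m->payload[sizeof m->payload - 1] = '\0';

    /* ...only the list push needs the interrupts-off window. */
    disable_interrupts();
    m->next = msg_head;
    msg_head = m;
    enable_interrupts();
}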

*If* your system can handle high interrupt latencies, then
you can "afford" to disable them for longer periods of
time. But, you have to make sure you know just how long
those periods can *become* -- since a deferred IRQ is
more likely to encounter some *other* IRQ during its execution
(i.e., you may return from one ISR just to find yourself
in *another*, etc.)

You also have to be careful about how you apply the DI/EI
idiom. If, for example, the code you are executing can
take place *within* a region where interrupts are off, then
you want to be sure you don't blindly re-enable them at
the end of "this" critical region only to (belatedly)
discover that you have corrupted some other *encasing*
critical region. I.e., you may have to examine the state
of the interrupt mask/level before disabling so that you can
correctly restore it afterwards.
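A minimal sketch of that idiom: save the current interrupt state,
disable, and put back exactly what was there before (irq_save() and
irq_restore() are placeholder names for whatever the port provides,
e.g. reading and writing the processor status word):

typedef unsigned int irq_state_t;

extern irq_state_t irq_save(void);             /* mask IRQs, return prior state */
extern void        irq_restore(irq_state_t s); /* put the mask back as it was */

static volatile unsigned int shared_counter;

/* Safe whether or not the caller already has interrupts disabled:
 * we restore the state we found, not an unconditional "enable". */
void bump_counter(void)
{
    irq_state_t s = irq_save();
    shared_counter++;
    irq_restore(s);
}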

The same is true of *all* locking mechanisms. E.g., taking a lock
on a file before you *really* need it, etc. (especially if that
"file" is a *special* file!)