From: brOS on
Dear all,

I have just read David Kalinsky's article, which can be found at the
following link:
http://www.jmargolin.com/uavs/jm_rpv2_npl_16.pdf ...

What confuses me is the part of the article named "fixed time task
switching" and figure number 4. It says that a real-time OS has a constant
context-switching time (including the time for finding the task with the
highest priority). What I am confused about is how it can be constant as
the number of tasks in the system grows. Of course, the kernel can be
implemented in such a way that the lists which contain TCBs are sorted. If
that is the case, then the scheduler's job is just to take the first TCB
from the list and then do the dispatching. But in that case, the functions
that move TCBs from the waiting or suspended lists back to the ready list
must do that sorting. So the scheduler's execution time is constant, but
overhead is introduced when calling the sorting functions, and the
execution time of these functions is not deterministic...

I don't think that Mr Kalinsky is right about this. In my opinion,
something has to depend on the number of tasks in the system: either the
scheduler's execution time or the execution time of the kernel functions
which sort the TCB ready list.... Am I wrong about this?


---------------------------------------
This message was sent using the comp.arch.embedded web interface on
http://www.EmbeddedRelated.com
From: Jon Kirwan on
On Sat, 02 Jan 2010 11:48:58 -0600, "brOS" wrote:

>I have just read David Kalyinsky's article which can be found on following
>link
>http://www.jmargolin.com/uavs/jm_rpv2_npl_16.pdf ...
>
>What confuses me is part of the article named "fixed time task switching"
>and figure number 4. It says that real time OS have constant context
>switching time (including time for finding task with highest priority).
>What I am confused about is how can it be constant if number of tasks in
>systems grows. Of course kernel can be implemented in such a way that lists
>which contain TCB are sorted. If this is the case then scheduler's job is
>just to take first TCB from the list and then do dispatching. But in that
>case functions that moving TCB from waiting or suspendin lists back to
>ready list should do that sorting. So scheduler execution time is constant,
>but overhead is introduced when calling sorting functions, and execution
>time of these functions are not deterministic...
>
>I dont think that Mr Kalinsky is right about this. In my opinion something
>has to depend of number of tasks in system. It can be scheduler execution
>time or execution time of kernel functions which sorts TCB ready list....
>Am I wrong about this?

It depends on what you consider as 'right' or 'wrong,' I
suppose. Your question is broad enough that a full answer
would take a very long time. So I hope you'll accept some
pot-shots at it as examples to illustrate, but not as a
sweeping survey.

Before I start, you appear to conflate two ideas -- execution
time and task switching time -- when thinking about the
author's meaning. You write above, "...and execution time of
these functions are not deterministic..." That isn't what
the author was talking about. Execution time may be
variable. In fact, it often is. It's rare (but I've done
it, as well) that a task takes equal time every time. Few
languages (none other than assembly, so far as I know) allow
the programmer to specify that different code edges must take
the same execution time. So it's given that execution time
usually varies. Burying the insertion time within the
execution time of some task requesting the insertion is a
valid mechanism for helping ensure that tasks have a fixed
switching time. I use this method all the time, in fact.

Many real-time embedded systems know the number of active
processes, as a function of time, beforehand. There is a
clear understanding of what needs to be done, and when, and
the system is designed from the ground up to meet that
requirement.

So a real-time operating system might have a fixed number of
tasks with fixed schedules, baked into the final binary at
compile-time. In this case, everything is known a priori.
Adding tasks means re-coding a little and then re-compiling,
so it is all known a priori, yet again. That's one example
that achieves a constant context-switching time, with
repeatable timing performance known in advance.
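The compile-time approach above can be sketched in C. This is a minimal,
hypothetical illustration (the task names and `run_one_cycle` dispatcher
are my own invention, not Jon's code): the task set is a static array
fixed at build time, so dispatch cost per task is a small known constant,
and adding a task means editing the table and recompiling.

```c
#include <assert.h>

/* A task table fixed at compile time: adding a task means editing this
   array and recompiling, so everything is known a priori. */
typedef void (*task_fn)(void);

static int run_count[3];              /* instrumentation for this demo */
static void task_a(void) { run_count[0]++; }
static void task_b(void) { run_count[1]++; }
static void task_c(void) { run_count[2]++; }

static task_fn const task_table[] = { task_a, task_b, task_c };
#define NUM_TASKS (sizeof task_table / sizeof task_table[0])

/* One pass of the dispatcher: the cost per task switch is a fixed,
   small constant, independent of what the tasks themselves do. */
void run_one_cycle(void)
{
    for (unsigned i = 0; i < NUM_TASKS; i++)
        task_table[i]();
}
```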

You might think I'm cheating here, so I'll provide another
example often applied in my own systems to achieve what the
author you cite says:

For, in fact, the term "real-time" does not
mean "as fast as possible" but rather "real-
time" demands consistent, repeatable, known
timing performance.

I employ a single, circular buffer that includes three
states: available, ready, and sleep. In this case, the
worst case number of tasks/processes is known at compile time
and I can easily modify that with a #define. Three pointers
are also used to indicate the 'head' of each queue. If all
three point to the same place, the available queue is
considered to be full and there are no tasks ready to run or
sleeping on the timer. Insertion is obviously fast, here.
The ready queue holds all tasks that are ready to run in
sequential order (I design things so that there is never more
than one of these at a time -- not hard, really.) The
available queue holds a sequential list of slots.

But I use a delta queue for the sleep queue. Each task or
process inserted into the sleep queue is provided a relative
time value from "now" for it to be inserted. This is really
very simple... it's just the number of timer ticks (always,
in my systems, a value larger than zero... but it could be
zero, I suppose.) The insertion routine starts at the head
of the sleep queue, subtracting the remaining time for the
first entry from the given time value and inserting the new
task before it if the result is negative, then negating that
value to a positive one and storing it in the task that was
just observed as waiting longer. Otherwise, I continue on to
the next one in the list. At the end, if all of the sleeping
tasks are to be fired off earlier, the remaining time value
in the one to be inserted is now the number of ticks to wait
after all the others have started and it gets inserted at the
end, obviously. When the timer ticks, it only decrements the
time value on the head of the sleep queue. It never needs to
search through the rest.
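The delta-queue insertion described above can be sketched in C. The names
and the fixed-capacity pool here are my own illustrative assumptions, not
Jon's actual code: each node stores only the tick count relative to the
entry ahead of it, so the timer ISR only ever touches the head.

```c
#include <stddef.h>

#define MAX_SLEEPERS 8

typedef struct sleeper {
    int ticks;               /* ticks remaining AFTER the previous entry fires */
    struct sleeper *next;
} sleeper_t;

static sleeper_t pool[MAX_SLEEPERS];   /* slots fixed at compile time */
static int pool_used = 0;              /* no overflow check, for brevity */
static sleeper_t *sleep_head = NULL;

/* Insert a task that should fire 'delay' ticks from now.  Walk the list,
   consuming each earlier entry's delta; when the remaining delay is less
   than the next entry's delta, splice in before it and reduce that
   entry's delta so it stays relative to the new node. */
void sleep_insert(int delay)
{
    sleeper_t *node = &pool[pool_used++];
    sleeper_t **pp = &sleep_head;

    while (*pp != NULL && (*pp)->ticks <= delay) {
        delay -= (*pp)->ticks;        /* consume the earlier entry's delta */
        pp = &(*pp)->next;
    }
    if (*pp != NULL)
        (*pp)->ticks -= delay;        /* entry behind us now waits relative to us */
    node->ticks = delay;
    node->next = *pp;
    *pp = node;
}

/* Timer tick: decrement only the head's delta; release every entry that
   reaches zero.  The rest of the queue is never searched. */
int sleep_tick(void)                  /* returns tasks released this tick */
{
    int released = 0;
    if (sleep_head == NULL)
        return 0;
    sleep_head->ticks--;
    while (sleep_head != NULL && sleep_head->ticks == 0) {
        sleep_head = sleep_head->next;
        released++;
    }
    return released;
}
```

Because the ISR touches only the head node, tick overhead is constant; the
variable-cost list walk is paid once, at insertion time, inside the
inserting task's own execution budget.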

Tasks re-insert themselves by making an insertion call when
they start, so that the next starting time is known relative
to the current execution. And I set the granularity of the
timer ticks long enough so that all necessary insertions can
be done within the first tick time so that there is a
reliable relative clock time.

For example, I might have:

void func_01( void )
{
    Insert( func_01, 100 );
    Insert( func_02, 6 );
    Insert( func_03, 27 );
    Insert( func_04, 50 );
    /* do stuff */
}

In the above case, func_01 re-inserts itself for the next
timing cycle and at the same time also sets up some other
tasks for their appropriate relative times against the main
timing loop event edge. func_02, func_03, and func_04 do NOT
insert themselves. They are single-shot, so to speak,
triggered only by func_01. Just as an example, though. Your
own creativity can expand on any of this.

If the timing needs to be calculated, I make sure I calculate
that at the _end_ of some task, not at the beginning. That
way it is available when the task re-starts and doesn't delay
things across my timer granularity barrier.

There are many other methods that also come to mind. But
this should give a start.

Jon
From: Tim Wescott on
On Sat, 02 Jan 2010 11:48:58 -0600, brOS wrote:

> Dear all,
>
> I have just read David Kalyinsky's article which can be found on
> following link
> http://www.jmargolin.com/uavs/jm_rpv2_npl_16.pdf ...
>
> What confuses me is part of the article named "fixed time task
> switching" and figure number 4. It says that real time OS have constant
> context switching time (including time for finding task with highest
> priority). What I am confused about is how can it be constant if number
> of tasks in systems grows. Of course kernel can be implemented in such a
> way that lists which contain TCB are sorted. If this is the case then
> scheduler's job is just to take first TCB from the list and then do
> dispatching. But in that case functions that moving TCB from waiting or
> suspendin lists back to ready list should do that sorting. So scheduler
> execution time is constant, but overhead is introduced when calling
> sorting functions, and execution time of these functions are not
> deterministic...
>
> I dont think that Mr Kalinsky is right about this. In my opinion
> something has to depend of number of tasks in system. It can be
> scheduler execution time or execution time of kernel functions which
> sorts TCB ready list.... Am I wrong about this?

I think you are right, at least in the main. AFAIK the only way that
you're going to get absolutely deterministic task switching times is (as
you mention) by losing determinism in the calls to functions that
determine when the next task comes up on the queue. Unless some clever
comp-sci person has invented new algorithms, that is.

Where he really loses it for me is his assertion that an RTOS provides
absolute determinism in the first place. This is not the contract
between a developer and an RTOS. What an RTOS _should_ provide is a
_guaranteed maximum_ response time to any OS call*. If you know the
maximum response time for the OS, and if you know the maximum response
time for the rest of your code, then you have the information you need to
use rate monotonic analysis to make sure that your system will meet its
scheduling requirements.

I would expect that any kernel that is switching a known number of tasks
and calls itself real time should have a known maximum latency, but I
wouldn't hold it to having the _same_ maximum latency if you added
another task, and I certainly wouldn't hold it to _always_ having the
same latency.

* And note that as soon as the developer starts doing things like turning
off interrupts or otherwise slowing down the OS's access to the processor
these guarantees go out the window -- you become one with the OS when you
start using it. I have even seen complaints fired at processors with
pipelining that the processor's response to instructions is itself not
deterministic, and it is therefore difficult to assess one's code's
execution time. These are valid complaints, and I frankly don't know if
and how they've been resolved.

--
www.wescottdesign.com
From: Paul Keinanen on
On Sat, 02 Jan 2010 11:41:38 -0800, Jon Kirwan
<jonk(a)infinitefactors.org> wrote:

>Many real-time embedded systems know the number of active
>processes, as a function of time, beforehand. There is a
>clear understanding of what needs to be done, and when, and
>the system is designed from the ground up to meet that
>requirement.
>
>So a real-time operating system might have a fixed number of
>tasks with fixed schedules, baked into the final binary at
>compile-time. In this case, everything is known a priori.

Many small, simple real-time kernels have a bit mask for up to 8, 16 or
32 tasks. When a task becomes runnable, e.g. due to an interrupt, the
ISR sets the corresponding bit in the bit mask.

To find the next task to run, just find the most significant bit that
is set and execute the corresponding task. If the processor has
FindFirstBitSet type instructions, this is trivial.

On a processor with only 8-bit masking instructions, 16 tasks can be
handled with a binary search requiring up to 8 bit-test and branch
instruction pairs, while on a machine with 16-bit masking instructions,
the highest-priority task can be determined from up to 32 tasks with
up to 10 mask-and-branch instruction pairs.

With fewer than 9 tasks, simply use a linear search, shifting the bit
mask into the carry flag one step at a time and branching accordingly.
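The bitmask scheme above can be sketched in portable C (the function names
are illustrative, not from any particular kernel). The linear scan stands
in for the shift-into-carry loop: its worst case is a fixed, small number
of test-and-branch steps regardless of how many tasks are ready, which is
why this structure gives a bounded scheduling time.

```c
#include <stdint.h>

static uint8_t ready_mask;  /* bit n set => task n runnable; bit 7 = highest */

/* Called (e.g. from an ISR) when task n becomes runnable. */
void make_ready(int n)   { ready_mask |= (uint8_t)(1u << n); }

/* Called when task n blocks or completes. */
void make_unready(int n) { ready_mask &= (uint8_t)~(1u << n); }

/* Scan from the top bit down: at most 8 test-and-branch steps, so the
   worst-case cost is fixed no matter how many bits are set.  On a CPU
   with a find-first-set instruction this becomes a single operation. */
int next_task(void)
{
    for (int n = 7; n >= 0; n--)
        if (ready_mask & (1u << n))
            return n;
    return -1;              /* nothing ready: idle */
}
```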

From: Vladimir Vassilevsky on


Paul Keinanen wrote:

> On Sat, 02 Jan 2010 11:41:38 -0800, Jon Kirwan
> <jonk(a)infinitefactors.org> wrote:
>
>
>>Many real-time embedded systems know the number of active
>>processes, as a function of time, beforehand. There is a
>>clear understanding of what needs to be done, and when, and
>>the system is designed from the ground up to meet that
>>requirement.
>>
>>So a real-time operating system might have a fixed number of
>>tasks with fixed schedules, baked into the final binary at
>>compile-time. In this case, everything is known a priori.
>
>
> Many small simple real time kernels have a bit mask for up to 8, 16 or
> 32 tasks. When a task becomes runable e.g. due to an interrupt, the
> ISR sets the corresponding bit in the bit mask.

If the speed of the scheduler and the context-switching time matter, it
usually indicates a poor design. Normally, system overhead draws an
insignificant amount of time, so whether the context-switching time is
fixed or depends on the number of tasks, etc., is irrelevant.

VLV