From: Boudewijn Dijkstra on
On Wed, 18 Nov 2009 17:09:39 +0100, Ari Okkonen
<ari.okkonen(a)obp.fi> wrote:
> [snip]
>
> The shared stores with locking operations improve the communication
> between tasks.

Compared to what? How did you come to this conclusion? Which inter-task
communication methods did you investigate?

> No server tasks are needed to protect the data.
>
> Priority ceiling protocol keeps the locking operations simple, avoids
> priority inversion, and avoids deadlocks.

If you can lock, you can have deadlock. What if a process dies before it
has the chance to unlock?

> The new features of ReaGeniX graphical language and code generator
>
> * New connectors shared and shared_W to support ReaGOS or other OS
>   - Separate shared connectors for inter-task data and store connectors
>     for intra-task data allow generation/compile-time assurance that
>     inter-task data must be locked during access, but no overhead is
>     introduced to intra-task store access.
> * Separation of store and store_W (read and read/write) connectors for
>   data stores. Earlier, only a read/write connector was available.
>   - This is to keep better track of who is updating (and possibly
>     messing up) a store.

Is it possible that, at any given time, one can determine which source
code line or which model element has caused a store to be locked?

> * A locking operator "lock(X)" to be used in transition condition, if
> the expression refers to a shared store connector X

Is this operator inserted by the code generator or by the user?

> * A release operator "unlock_all" to release all shared stores locked
> in condition. This is to be used in the outermost block level of the
> transition action.
> * A locking block construct "begin_lock(X) ... end_lock(X)" to protect
> X from modification (read lock)
> * A locking block construct "begin_lock_W(X) ... end_lock_W(X)" to
> protect X from all other accesses (write lock)

What if I write inside a read lock or read inside a write lock?

> * A mechanism to discourage accesses of a shared store outside of
> locks and to discourage modification of a shared store outside of
> write locks.

This mechanism sounds quite interesting. Could you provide some more
details?


--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
From: Ari Okkonen on
Boudewijn Dijkstra wrote:
> On Wed, 18 Nov 2009 17:09:39 +0100, Ari Okkonen
> <ari.okkonen(a)obp.fi> wrote:
>> [snip]
>>
>> The shared stores with locking operations improve the communication
>> between tasks.
>
> Compared to what? How did you come to this conclusion? Which
> inter-task communication methods did you investigate?
>
An example is a measurement data block in a mechanical or chemical
process control system. Several measurement tasks update hundreds of
values in the data block, each at its own pace. Several control tasks
read these values at different rates. A lot of messages would have to
be sent back and forth if these tasks could not share the same data
store. Using messages to transfer and query values seems more
complicated than simply assigning or reading the values, even when
the locking is included.
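To make the comparison concrete, here is a minimal C sketch of such a
shared block. It uses a POSIX reader/writer lock purely as a stand-in
for the ReaGOS shared-store locking, and the field and function names
are invented for the example:

#include <pthread.h>

/* Hypothetical shared measurement data block; in ReaGeniX this would be
 * a shared/shared_W store connector, here it is just a struct plus a
 * POSIX rwlock standing in for the real locking mechanism. */
typedef struct {
    double temperature[100];
    double pressure[100];
} measurement_block;

static pthread_rwlock_t  mb_lock = PTHREAD_RWLOCK_INITIALIZER;
static measurement_block mb;

/* A measurement task just assigns the value under a write lock... */
void update_temperature(int channel, double value)
{
    pthread_rwlock_wrlock(&mb_lock);
    mb.temperature[channel] = value;
    pthread_rwlock_unlock(&mb_lock);
}

/* ...and a control task just reads it under a read lock. No
 * request/reply messages, no server task owning the data. */
double read_temperature(int channel)
{
    pthread_rwlock_rdlock(&mb_lock);
    double value = mb.temperature[channel];
    pthread_rwlock_unlock(&mb_lock);
    return value;
}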

>> No server tasks are needed to protect the data.
>>
>> Priority ceiling protocol keeps the locking operations simple, avoids
>> priority inversion, and avoids deadlocks.
>
> If you can lock, you can have deadlock. What if a process dies before
> it has the chance to unlock?
>
In the priority ceiling protocol there are no specific locks for the
resources. The priority of the task is raised so high that no other
task having access to that resource can get scheduled. If the task
dies, its priority has no meaning anymore. See e.g.
http://en.wikipedia.org/wiki/Priority_ceiling_protocol
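For illustration, the idea (the immediate-ceiling, "highest locker"
variant on a single processor) is roughly the following sketch; the
os_get_priority/os_set_priority calls are placeholders, not actual
ReaGOS or POSIX functions:

#include <stdio.h>

/* Placeholder "OS" calls, just enough to show the ceiling idea. */
static int current_priority = 5;
static int  os_get_priority(void)     { return current_priority; }
static void os_set_priority(int prio) { current_priority = prio; }

typedef struct {
    int ceiling;          /* highest priority of any task using the store */
    int saved_priority;   /* priority to restore when leaving the section */
} shared_store;

/* "Locking" = running at the store's ceiling priority, so no other task
 * that could touch the store gets scheduled. No lock object, no wait
 * queue, no blocking, hence no deadlock. */
static void store_enter(shared_store *s)
{
    s->saved_priority = os_get_priority();
    if (s->ceiling > s->saved_priority)
        os_set_priority(s->ceiling);
}

static void store_leave(shared_store *s)
{
    os_set_priority(s->saved_priority);
}

int main(void)
{
    shared_store measurements = { .ceiling = 9, .saved_priority = 0 };

    store_enter(&measurements);
    printf("in critical section at priority %d\n", os_get_priority());
    store_leave(&measurements);
    printf("back at priority %d\n", os_get_priority());
    return 0;
}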

>> The new features of ReaGeniX graphical language and code generator
>>
>> * New connectors shared and shared_W to support ReaGOS or other OS
>>   - Separate shared connectors for inter-task data and store connectors
>>     for intra-task data allow generation/compile-time assurance that
>>     inter-task data must be locked during access, but no overhead is
>>     introduced to intra-task store access.
>> * Separation of store and store_W (read and read/write) connectors for
>>   data stores. Earlier, only a read/write connector was available.
>>   - This is to keep better track of who is updating (and possibly
>>     messing up) a store.
>
> Is it possible that, at any given time, one can determine which source
> code line or which model element has caused a store to be locked?
>
No specific mechanism is planned for that. It can, however, be done
with a debugger at a breakpoint. The main idea is that if a functional
block has been given only read access to a store, you get a generation/
compile-time error if the block tries to modify the store.
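Roughly like this, assuming the generated C gives a read-only block a
const-qualified view of the store (an illustration of the idea only,
not the actual ReaGeniX output):

/* Hypothetical store type for the example. */
typedef struct {
    double temperature;
    double pressure;
} measurement_store;

/* Block connected through a read-only 'store' connector: const view. */
double read_pressure(const measurement_store *m)
{
    return m->pressure;           /* reading is allowed                    */
    /* m->pressure = 0.0; */      /* writing would be a compile-time error */
}

/* Block connected through a read/write 'store_W' connector. */
void set_temperature(measurement_store *m, double t)
{
    m->temperature = t;
}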

>> * A locking operator "lock(X)" to be used in transition condition, if
>> the expression refers to a shared store connector X
>
> Is this operator inserted by the code generator or by the user?
>
It is inserted by the user. If not, an error is given.
We think that you (and the review team) should clearly know what you
are locking.
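As a purely hypothetical illustration of the intended usage (the
macros below are stand-ins, not the real ReaGeniX runtime, and what
happens to the locks when the condition is false is not shown here):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in lock bookkeeping; the real implementation uses the priority
 * ceiling protocol, not these stubs. */
static bool lock_store(const char *name) { (void)name; return true; }
static void unlock_all_stores(void)      { /* release locked stores */ }

#define lock(X)    lock_store(#X)
#define unlock_all unlock_all_stores

typedef struct { double level; } tank_store;
static tank_store tank = { 0.8 };

static void transition(void)
{
    /* The condition refers to the shared store 'tank', so the user must
     * write lock(tank) in it; otherwise the generator reports an error. */
    if (lock(tank) && tank.level > 0.75) {
        printf("opening drain valve\n");
        unlock_all();   /* outermost block level of the transition action */
    }
}

int main(void)
{
    transition();
    return 0;
}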

>> * A release operator "unlock_all" to release all shared stores locked
>> in condition. This is to be used in the outermost block level of the
>> transition action.
>> * A locking block construct "begin_lock(X) ... end_lock(X)" to protect
>> X from modification (read lock)
>> * A locking block construct "begin_lock_W(X) ... end_lock_W(X)" to
>> protect X from all other accesses (write lock)
>
> What if I write inside a read lock or read inside a write lock?
>
If you write inside a read lock, you get an error. Reading inside a
write lock is allowed.

>> * A mechanism to discourage accesses of a shared store outside of
>> locks and to discourage modification of a shared store outside of
>> write locks.
>
> This mechanism sounds quite interesting. Could you provide some more
> details?
>
Because the locking is block-structured, it is possible to check the
accesses at generation/compile time and give errors if needed.
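For example, the lock blocks can be thought of as C scopes. The
following is only a sketch of the idea, not the actual generated code;
the macros are simplified to a single store type and the priority-
ceiling raise/restore is omitted:

/* A shared store and its storage; outside any lock block there is no
 * handle to it, so stray accesses simply do not compile. */
typedef struct { double setpoint; } params_store;
static params_store params_storage = { 1.0 };

/* begin_lock(X)   opens a scope with a read-only (const) handle,
 * begin_lock_W(X) opens a scope with a writable handle. */
#define begin_lock(X)    { const params_store *X = &X##_storage;
#define begin_lock_W(X)  {       params_store *X = &X##_storage;
#define end_lock(X)      }
#define end_lock_W(X)    }

double read_setpoint(void)
{
    double value;
    begin_lock(params)
        value = params->setpoint;      /* reading under a read lock: fine  */
        /* params->setpoint = 2.0; */  /* writing here: compile-time error */
    end_lock(params)
    return value;
}

void write_setpoint(double v)
{
    begin_lock_W(params)
        params->setpoint = v;          /* writing under a write lock: fine */
    end_lock_W(params)
}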
From: Niklas Holsti on
Ari Okkonen wrote:
> Boudewijn Dijkstra wrote:
>> On Wed, 18 Nov 2009 17:09:39 +0100, Ari Okkonen
>> <ari.okkonen(a)obp.fi> wrote:
>>> [snip]
>>> No server tasks are needed to protect the data.
>>>
>>> Priority ceiling protocol keeps the locking operations simple, avoids
>>> priority inversion, and avoids deadlocks.
>>
>> If you can lock, you can have deadlock. What if a process dies before
>> it has the chance to unlock?
>>
> In the priority ceiling protocol there are no specific locks for the
> resources.
> The priority of the task is raised so high that no other task having
> access to that resource can get scheduled. If the task dies, its
> priority has no meaning anymore. See e.g.
> http://en.wikipedia.org/wiki/Priority_ceiling_protocol

The priority ceiling protocol works for single-processor systems where
only one task can be scheduled at a time. Do you intend to support
multi-processor (multi-core) systems? If so, how will you make the
priority ceiling protocol work for them? The multi-processor extensions
of the priority ceiling protocol seem to need the same kinds of task
queues and real "locks" that the single-processor PCP neatly avoids.

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
From: Ari Okkonen on
Niklas Holsti wrote:
> Ari Okkonen wrote:
>> Boudewijn Dijkstra wrote:
>>> On Wed, 18 Nov 2009 17:09:39 +0100, Ari Okkonen
>>> <ari.okkonen(a)obp.fi> wrote:
>>>> [snip]
>>>> No server tasks are needed to protect the data.
>>>>
>>>> Priority ceiling protocol keeps the locking operations simple, avoids
>>>> priority inversion, and avoids deadlocks.
>>>
>>> If you can lock, you can have deadlock. What if a process dies
>>> before it has the chance to unlock?
>>>
>> In the priority ceiling protocol there are no specific locks for the
>> resources.
>> The priority of the task is raised so high that no other task having
>> access to that resource can get scheduled. If the task dies, its
>> priority has no meaning anymore. See e.g.
>> http://en.wikipedia.org/wiki/Priority_ceiling_protocol
>
> The priority ceiling protocol works for single-processor systems where
> only one task can be scheduled at a time. Do you intend to support
> multi-processor (multi-core) systems? If so, how will you make the
> priority ceiling protocol work for them? The multi-processor extensions
> of the priority ceiling protocol seem to need the same kinds of task
> queues and real "locks" that the single-processor PCP neatly avoids.
>
We have to do some redesign of our kernel for multi-processor systems
anyway. Any pointers to texts on multi-processor PCP (or the
multi-processor mutual exclusion problem in general) are highly
appreciated.
From: Niklas Holsti on
Ari Okkonen wrote:
> Niklas Holsti wrote:
>> Ari Okkonen wrote:
>>> Boudewijn Dijkstra wrote:
>>>> On Wed, 18 Nov 2009 17:09:39 +0100, Ari Okkonen
>>>> <ari.okkonen(a)obp.fi> wrote:
>>>>> [snip]
>>>>> No server tasks are needed to protect the data.
>>>>>
>>>>> Priority ceiling protocol keeps the locking operations simple, avoids
>>>>> priority inversion, and avoids deadlocks.
>>>>
>>>> If you can lock, you can have deadlock. What if a process dies
>>>> before it has the chance to unlock?
>>>>
>>> In the priority ceiling protocol there are no specific locks for the
>>> resources.
>>> The priority of the task is raised so high that no other task having
>>> access to that resource can get scheduled. If the task dies, its
>>> priority has no meaning anymore. See e.g.
>>> http://en.wikipedia.org/wiki/Priority_ceiling_protocol
>>
>> The priority ceiling protocol works for single-processor systems where
>> only one task can be scheduled at a time. Do you intend to support
>> multi-processor (multi-core) systems? If so, how will you make the
>> priority ceiling protocol work for them? The multi-processor
>> extensions of the priority ceiling protocol seem to need the same
>> kinds of task queues and real "locks" that the single-processor PCP
>> neatly avoids.
>>
> We have to do some redesign to our kernel for multi-processor systems,
> anyway. Any pointers to multi-processor PCP (or multi-processor mutex
> problem in general) texts are highly appreciated.

Before writing my comment above, I looked at this report:

Jim Ras and Albert M. K. Cheng, "An Evaluation of the Dynamic and Static
Multiprocessor Priority Ceiling Protocol and the Multiprocessor Stack
Resource Policy in an SMP System," Proceedings of the 15th IEEE Real-Time
and Embedded Technology and Applications Symposium (RTAS 2009), pp. 13-22,
2009.

http://doi.ieeecomputersociety.org/10.1109/RTAS.2009.10


--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .