From: Boudewijn Dijkstra on
On Mon, 08 Mar 2010 18:08:39 +0100, D Yuniskis
<not.going.to.be(a)seen.com> wrote:
> Boudewijn Dijkstra wrote:
>> On Mon, 08 Mar 2010 05:40:34 +0100, D Yuniskis
>> <not.going.to.be(a)seen.com> wrote:
>>> Boudewijn Dijkstra wrote:
>>>> On Thu, 04 Mar 2010 23:54:22 +0100, D Yuniskis
>>>> <not.going.to.be(a)seen.com> wrote:
>>>>> Hi Boudewijn,
>>>>>
>>>>> (wow, every time I type that, I have to triple check my spelling!
>>>>> :-/ How would I pronounce it?)
>>>
>>>> IPA: /ˈbʌʊdəˌʋɛɪn/
>>>> - /ʌʊ/ sounds like the 'o' in 'to bow', but shorter;
>>>> - /ʋ/ doesn't occur in English, the 'r' in 'red' comes close
>>>> phonetically (it is still a 'w', but not as long and vowely as the
>>>> English 'w').
>>>
>>> I'm confused. Is it a fricative?
>> If it were, I would have used /v/ instead. As IPA defines, /ʋ/ is a
>> labiodental approximant, voiced in this case.
>
> Hmmm... OK. I'll have to research it, then. Not used in
> the European languages?

I just realized that an almost identical sound occurs in the New Jersey
dialect, as the characteristic thick 'r'.
Also see http://en.wikipedia.org/wiki/Labiodental_approximant .

>>>>> Boudewijn Dijkstra wrote:
>>>>>> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
>>>>>> <not.going.to.be(a)seen.com> wrote:
>>>>>>> [...]
>>>>>>>
>>>>>>> I also haven't decided how to handle multiple
>>>>>>> requests (queue based on task priority? size?)
>>>>>> Wake all waiters in malloc, let the scheduler activate them in
>>>>>> order, check if allocation can be satisfied (and check for
>>>>>> timed-out requests), go back to sleep otherwise.
>>>>>
>>>>> Timers are handled independently -- since you want those
>>>>> tasks blocking with timeouts to be restarted when their
>>>>> timers expire regardless of whether or not you have any
>>>>> malloc/free activity.
>>>> Fair enough.
>>>
>>> I let callers optionally pass a Timer and/or Mutex to each routine.
>>> So, if you (the caller) *know* there are no competitors, you
>>> can skip the Mutex.
>> I'm not quite sure I understand. One would use the mutex to "hold off
>> the competition"?
>
> If the Heap (there can be many) is not shared, no need to
> protect it with a synchronization primitive as "you" are the
> only consumer.
>
> OTOH, if it *is* shared, pass a MUTEX (handle to a mutex) to
> the allocator (as well as free, when the time comes) and
> the allocator will attempt to take the mutex before accessing
> those aspects of the Heap data structure (e.g., free list)
> that need to be manipulated atomically. So, if something else
> is already doing so, you will end up blocking (until the
> mutex is released)

I disagree with this philosophy. This mutex should be kept internal to
the allocator. If the heap is created with a 'shared' flag, then it is
created internally, otherwise it is set to NULL.
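For comparison, the "shared flag" design might look like this minimal
sketch (all names here are hypothetical and the free-list logic is
stubbed out):

```c
/* Hypothetical sketch of the "internal mutex" design: the caller only
 * says whether the heap is shared; locking is the allocator's business. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct Heap {
    pthread_mutex_t *mutex;          /* NULL for a private (unshared) heap */
    pthread_mutex_t  mutex_storage;
    /* ... free list, arena pointers, etc. ... */
} Heap;

Heap *heap_create(bool shared)
{
    Heap *h = calloc(1, sizeof *h);
    if (!h)
        return NULL;
    if (shared) {
        pthread_mutex_init(&h->mutex_storage, NULL);
        h->mutex = &h->mutex_storage;   /* created internally */
    }                                    /* else: stays NULL, never locked */
    return h;
}

void *heap_alloc(Heap *h, size_t n)
{
    if (h->mutex) pthread_mutex_lock(h->mutex);
    void *p = NULL;   /* stub: a real allocator would carve n bytes here */
    (void)n;
    if (h->mutex) pthread_mutex_unlock(h->mutex);
    return p;
}
```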

>>> If the lower priority task *had* managed to gain the memory
>>> requested *when* originally requested *and* the higher priority
>>> task eventually came along and blocked, the lower priority
>>> task would have had to inherit its priority in order to "get out
>>> of its way"
>> How so? The allocator doesn't know that the higher priority task is
>> about to make a request.
>
> No, I'm assuming the higher priority task is now blocking waiting on
> that resource -- which is being held by a lower priority task!
> So, the lower priority task should have its priority elevated so
> that deadlock can't occur.
>
> E.g., don't think of it as memory. Think of a simple resource.
> If low priority task holds it and high priority task *needs*
> it, then deadlock can occur. If you elevate priority of
> low priority task holding resource, then low priority task
> won't get stuck later by some medium priority task, etc.
>
>>> (else deadlock as the higher priority task would
>>> effectively be blocked by the lower priority task -- forever).
>> Why forever? Are you talking about cyclic alloc/free?
>
> Low priority task takes resource (forget memory for the time being
> and think of just *a* shared resource). Some epsilon later,
> high priority task wants that resource. Can't have it because
> it is in use. So, high priority task blocks. Now, medium
> priority task comes along. Doesn't need the resource -- but,
> having higher priority than the "low" priority task, it forces
> the low priority task to wait. For as long as the medium priority
> task wants to run! High priority task can't preempt medium
> priority task because it is blocked. Low priority task
> can't finish using the resource that it holds because medium
> priority task outranks it. Unbounded priority inversion -- effectively
> deadlock.


I see. But it can be argued that this is a system design error, not an
error in the allocator.

> If, OTOH, low priority task had its priority elevated to that
> of the highest priority task waiting for the resource it
> holds, then that "was low" priority task could continue to run
> and, presumably, release that resource (at which time its
> priority would return to normal, high priority task would
> become unblocked because resource was now available and
> resume running).

But now, with this priority elevation, the low priority task forces the
medium priority task to wait. For as long as the low priority task wants
to run (until it releases the resource)!

> This was the essence of my post. I *think* requests can
> be satisfied in the order they arrive. As a thought experiment,
> imagine what would have happened if the heap had been bigger
> and the first request hadn't blocked...



--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
From: D Yuniskis on
Hi Boudewijn,

Boudewijn Dijkstra wrote:
>>>>>> (wow, every time I type that, I have to triple check my spelling!
>>>>>> :-/ How would I pronounce it?)
>> Hmmm... OK. I'll have to research it, then. Not used in
>> the European languages?
>
> I just realized that an almost identical sound occurs in the New Jersey
> dialect, as the characteristic thick 'r'.
> Also see http://en.wikipedia.org/wiki/Labiodental_approximant .

<frown> It's been too many years since I lived East
for me to recall how folks from Joisey spoke. :<
(except, of course, "joisey" and "earl"/oil).

I think I will see if DECTalk supports it and just type
in your name phonetically and see what comes out. Heck,
it can't mangle it much worse than it does *mine*! :>

[snips]

>>>> I let callers optionally pass a Timer and/or Mutex to each routine.
>>>> So, if you (the caller) *know* there are no competitors, you
>>>> can skip the Mutex.
>>> I'm not quite sure I understand. One would use the mutex to "hold
>>> off the competition"?
>>
>> If the Heap (there can be many) is not shared, no need to
>> protect it with a synchronization primitive as "you" are the
>> only consumer.
>>
>> OTOH, if it *is* shared, pass a MUTEX (handle to a mutex) to
>> the allocator (as well as free, when the time comes) and
>> the allocator will attempt to take the mutex before accessing
>> those aspects of the Heap data structure (e.g., free list)
>> that need to be manipulated atomically. So, if something else
>> is already doing so, you will end up blocking (until the
>> mutex is released)
>
> I disagree with this philosophy. This mutex should be kept internal to
> the allocator. If the heap is created with a 'shared' flag, then it is
> created internally, otherwise it is set to NULL.

That's how I do things in bigger (fleshier) RTOSs. Here, I
am trying to balance hidden complexity against that which is
exposed to the developer to manage "manually".

It lets the sizeof(Heap) be a bit smaller as well as letting
me (developer) decide how and when to use the Mutex-es to my
best advantage (the applications on which I am deploying this
are pretty tightly constrained, resource wise).

It also makes it clear in each *invocation* of a service
whether that service *might* have to compete, or not.
E.g., if I invoked one of these routines in an ISR with
the mutex parameter non-NULL, it would be a red flag to
me that I was "asking for trouble". OTOH, if I invoke
it with the flag set to NULL, it "self-documents" the
fact that there can be no competitors at this time.

Note that that can change, dynamically!

E.g., if I EXPLICITLY enter a critical region (take a lock
on a particular mutex that, by convention, protects some
set of resources), then I can invoke these routines freely
WITHOUT needing a malloc-specific Mutex -- that would just
be extra overhead!

Yet, outside such a region, I *would* want the protection of
a Mutex (assuming competition for *that* Heap exists). If
I bury it in the Heap struct, then I have to put the flag
in all calls (i.e., it can't be a "configure/create-time"
option). So, I *do* put the flag there! Except, instead of
"TRUE" or "FALSE", it is "NO MUTEX REQUIRED" or "USE *THIS*
MUTEX"

<shrug> Style issue?
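A minimal sketch of the per-call style just described -- NULL meaning
"no mutex required", non-NULL meaning "use *this* mutex". The names are
illustrative, and the Timer parameter and free-list logic are elided:

```c
/* Sketch of the "caller supplies the Mutex" style: each *invocation*
 * documents whether it might have to compete.  Names are invented. */
#include <pthread.h>
#include <stddef.h>

typedef struct Heap { size_t dummy; /* free list etc. */ } Heap;

void *heap_alloc(Heap *h, size_t n, pthread_mutex_t *mtx)
{
    if (mtx) pthread_mutex_lock(mtx);    /* competitors possible */
    void *p = NULL;   /* stub: manipulate the free list atomically here */
    (void)h; (void)n;
    if (mtx) pthread_mutex_unlock(mtx);
    return p;
}
```

Inside an already-locked critical region -- or from an ISR -- the caller
passes NULL and pays no locking overhead; outside it, the caller passes
whichever mutex protects *that* Heap.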

>>>> If the lower priority task *had* managed to gain the memory
>>>> requested *when* originally requested *and* the higher priority
>>>> task eventually came along and blocked, the lower priority
>>>> task would have had to inherit its priority in order to "get out
>>>> of its way"
>>> How so? The allocator doesn't know that the higher priority task is
>>> about to make a request.
>>
>> No, I'm assuming the higher priority task is now blocking waiting on
>> that resource -- which is being held by a lower priority task!
>> So, the lower priority task should have its priority elevated so
>> that deadlock can't occur.
>>
>> E.g., don't think of it as memory. Think of a simple resource.
>> If low priority task holds it and high priority task *needs*
>> it, then deadlock can occur. If you elevate priority of
>> low priority task holding resource, then low priority task
>> won't get stuck later by some medium priority task, etc.
>>
>>>> (else deadlock as the higher priority task would
>>>> effectively be blocked by the lower priority task -- forever).
>>> Why forever? Are you talking about cyclic alloc/free?
>>
>> Low priority task takes resource (forget memory for the time being
>> and think of just *a* shared resource). Some epsilon later,
>> high priority task wants that resource. Can't have it because
>> it is in use. So, high priority task blocks. Now, medium
>> priority task comes along. Doesn't need the resource -- but,
>> having higher priority than the "low" priority task, it forces
>> the low priority task to wait. For as long as the medium priority
>> task wants to run! High priority task can't preempt medium
>> priority task because it is blocked. Low priority task
>> can't finish using the resource that it holds because medium
>> priority task outranks it. Unbounded priority inversion -- effectively
>> deadlock.
>
> I see. But it can be argued that this is a system design error, not an
> error in the allocator.

It *can* be. It's just a typical scenario that can lead to deadlock
(hence the title of the thread :> ). E.g., if I take X and get
ready to take Y but you have taken Y and are waiting for X, we're
both screwed. "Design error".

OTOH, since it is a multitasking environment, you never know
when a low priority task is going to run and "happen" to
end up with a resource that some higher priority task might
happen to want at the same time. "Luck of the draw".

E.g., that low priority task might be doing background garbage
collection -- "when nothing of greater importance needs to be done".
It might (temporarily) need some resource that the "real"
application also needs. But, the "real" application doesn't
> seem to be needing it, right now. I.e., the real application
is idle -- maybe waiting for input from the user?

Foolish to tell the low priority task that it can't TEMPORARILY
use that resource -- heck, that's the whole point of having the
low priority task "mop things up" when there is spare time to
do so!

Of course, coincidence could cause the user to provide the
input that the high priority task was waiting *just* after
the low priority task took the resource. Now, high priority
task is ready to run, goes looking for the resource and
has to block.

In the meantime, it (or something else) has caused a medium
priority task to become eligible to run so that task preempts
the low priority task.

See http://en.wikipedia.org/wiki/Priority_inheritance

If you don't address these sorts of possibilities in your
resource management (e.g., memory -- whether managed in a
heap or as "buffer pools"), then your allocation scheme
is immaterial -- your RTOS is broke! :>

>> If, OTOH, low priority task had its priority elevated to that
>> of the highest priority task waiting for the resource it
>> holds, then that "was low" priority task could continue to run
>> and, presumably, release that resource (at which time its
>> priority would return to normal, high priority task would
>> become unblocked because resource was now available and
>> resume running).
>
> But now, with this priority elevation, the low priority task forces the
> medium priority task to wait. For as long as the low priority task
> wants to run (until it releases the resource)!

Yes. Presumably, it *will* release the resource. Otherwise you
have a design error. :> It would be no different than the (a?)
high priority task seizing the resource indefinitely -- and
thereby preventing any other task from ever using it.

But, you *can* control whether or not your code (the low priority
task's code) releases a resource or not. You *can't* (usually)
control the timing of these other external events and other
dependencies that affect when a given task decides to wake up, etc.

So, you design the system (in this case, resource manager) so that
if you "get unlucky" and things queue up in some perverse order
("priority inversion"), then the system does what it can to help
things untangle themselves.

It's *engineering*, after all... :>

I think I've sorted out (in my head) how to tackle this.
Now I just have to figure out an efficient implementation.
Then, see what else I can harvest from that implementation
to enhance the rest of the RTOS.

>> This was the essence of my post. I *think* requests can
>> be satisfied in the order they arrive. As a thought experiment,
>> imagine what would have happened if the heap had been bigger
>> and the first request hadn't blocked...