From: D Yuniskis on
Hi Jacko,

jacko wrote:
> Lists linking and malloc? This is a bad excuse. The Pascal book was
> fine for learning, but (void*) allows so much more: an array of
> (void *) statically defined. It's the odd size requirement that makes
> malloc slow, and the overhead of, say, 4K per list item with a page allocator

Not sure I understand all of what you are saying... :-(

This is not for deployment in a PMMU environment (though it
could be). The cost of maintaining the linked list of fragments
within the Heap is a void* per fragment (i.e., a pointer to the
"next free fragment"). This is stored in the fragments
themselves. And, since they are "free" (unused), this is
"no cost".

The overhead associated with allocated fragments is a size_t.
Instead of accumulating the overhead in a kernel structure
(as would be commonplace for large systems), the costs
are distributed *with* the allocated fragments. (This is
a vulnerability, as a careless application can scribble on
this "(should be) protected" data and screw things up.
We have a name for this sort of behavior: it's called a BUG!)

Unlike paged environments, the fragment (whether allocated
or free) is always "mapped". So, there is no risk of thrashing
when trying to walk the free list, etc.

> is just stupid. I thought the dual (void*) list structure was so that

Single link. You never have to go backwards (unless your
free list maintenance strategy sorts the list using other
criteria).

> recycling of nodes would be by making a free chain. This can then be
> pseudo-quicksorted by a b ranging to combine local list items into
> allocatable cache lines.
>
> ;)
>
> Cheers Jacko
From: Boudewijn Dijkstra on
On Thu, 04 Mar 2010 23:54:22 +0100, D Yuniskis
<not.going.to.be(a)seen.com> wrote:
> Hi Boudewijn,
>
> (wow, every time I type that, I have to triple check my spelling!
> :-/ How would I pronounce it?

When I travel abroad, (especially France and some regions of the US) I
sometimes call myself "Bo". It saves time and prevents people from
"embarrassing" themselves trying and failing time after time, and getting
into a bad mood.

IPA: /ˈbʌʊdəˌʋɛɪn/
- /ʌʊ/ sounds like the 'o' in 'to bow', but shorter;
- /ʋ/ doesn't occur in English, the 'r' in 'red' comes close phonetically
(it is still a 'w', but not as long and vowel-like as the English 'w').

> That might make it easier to
> commit to memory...)
>
> Boudewijn Dijkstra wrote:
>> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
>> <not.going.to.be(a)seen.com> wrote:
>>> [...]
>>>
>>> I also haven't decided how to handle multiple
>>> requests (queue based on task priority? size?)
>> Wake all waiters in malloc, let the scheduler activate them in order,
>> check if allocation can be satisfied (and check for timed-out
>> requests), go back to sleep otherwise.
>
> Timers are handled independently -- since you want those
> tasks blocking with timeouts to be restarted when their
> timers expire regardless of whether or not you have any
> malloc/free activity.

Fair enough.

> I am still uncertain about what strategy to use when stuff
> is added to the heap (by free). I.e., try to satisfy *any*
> pending request? Try to satisfy the request of the task
> with the highest runtime priority? Try to satisfy the
> first/oldest request? etc.

Surely the highest priority task is the one you want to hold up waiting
for the shortest time possible. If a lower priority was there first, then
it can wait a little longer, because it is just a lower priority.



--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
From: D Yuniskis on
Hi Boudewijn,

Boudewijn Dijkstra wrote:
> On Thu, 04 Mar 2010 23:54:22 +0100, D Yuniskis
> <not.going.to.be(a)seen.com> wrote:
>> Hi Boudewijn,
>>
>> (wow, every time I type that, I have to triple check my spelling!
>> :-/ How would I pronounce it?
>
> When I travel abroad, (especially France and some regions of the US) I
> sometimes call myself "Bo". It saves time and prevents people from
> "embarrassing" themselves trying and failing time after time, and
> getting into a bad mood.

<grin> When I order pizza, I tell them my name is "Fred -- with a 'P'".
Saves the hassle of them misspelling my name on the ticket -- and then
the delivery driver mispronouncing that misspelling! :-/

> IPA: /ˈbʌʊdəˌʋɛɪn/
> - /ʌʊ/ sounds like the 'o' in 'to bow', but shorter;
> - /ʋ/ doesn't occur in English, the 'r' in 'red' comes close
> phonetically (it is still a 'w', but not as long and vowel-like as the
> English 'w').

I'm confused. Is it a fricative? I should just push this through
my speech synthesizer, but that will add its own artifacts :<

>> That might make it easier to commit to memory...)
>>
>> Boudewijn Dijkstra wrote:
>>> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
>>> <not.going.to.be(a)seen.com> wrote:
>>>> [...]
>>>>
>>>> I also haven't decided how to handle multiple
>>>> requests (queue based on task priority? size?)
>>> Wake all waiters in malloc, let the scheduler activate them in
>>> order, check if allocation can be satisfied (and check for timed-out
>>> requests), go back to sleep otherwise.
>>
>> Timers are handled independently -- since you want those
>> tasks blocking with timeouts to be restarted when their
>> timers expire regardless of whether or not you have any
>> malloc/free activity.
>
> Fair enough.

I let callers optionally pass a Timer and/or Mutex to each routine.
So, if you (the caller) *know* there are no competitors, you
can skip the Mutex. And, for "guaranteed to run to completion"
routines (without competitors) -- like "release()" -- there is no
need for a Timer.

The whole point is to let the developer get the subsystem to
fit *his* needs instead of the other way around.

>> I am still uncertain about what strategy to use when stuff
>> is added to the heap (by free). I.e., try to satisfy *any*
>> pending request? Try to satisfy the request of the task
>> with the highest runtime priority? Try to satisfy the
>> first/oldest request? etc.
>
> Surely the highest priority task is the one you want to hold up waiting
> for the shortest time possible. If a lower priority was there first,
> then it can wait a little longer, because it is just a lower priority.

That *seems* right but I haven't thought it through to its
conclusion. E.g., if the lower priority task *had* arrived
earlier *and* memory had been available, it wouldn't have
blocked.

Presumably, its blocking had some effect on the higher priority
task eventually queuing up in the allocator as well.

If the lower priority task *had* managed to gain the memory
requested *when* originally requested *and* the higher priority
task eventually came along and blocked, the lower priority
task would have had to inherit its priority in order to "get out
of its way" (else deadlock as the higher priority task would
effectively be blocked by the lower priority task -- forever).

So, perhaps the *right* solution involves *whichever* task
is granted the resource inherits the priority of the highest
waiting (blocked) task. This *seems* right. Which leaves
the only decision to be "which task to give the resource to".

Dunno. I have to think through the various scenarios to
see which is least prone to deadlock (hence the reason for
the post). It's these sorts of "little" decisions that
cause things to break in very *grand* ways. :<
From: Boudewijn Dijkstra on
On Mon, 08 Mar 2010 05:40:34 +0100, D Yuniskis
<not.going.to.be(a)seen.com> wrote:
> Boudewijn Dijkstra wrote:
>> On Thu, 04 Mar 2010 23:54:22 +0100, D Yuniskis
>> <not.going.to.be(a)seen.com> wrote:
>>> Hi Boudewijn,
>>>
>>> (wow, every time I type that, I have to triple check my spelling!
>>> :-/ How would I pronounce it?
>
>> IPA: /ˈbʌʊdəˌʋɛɪn/
>> - /ʌʊ/ sounds like the 'o' in 'to bow', but shorter;
>> - /ʋ/ doesn't occur in English, the 'r' in 'red' comes close
> phonetically (it is still a 'w', but not as long and vowel-like as the
>> English 'w').
>
> I'm confused. Is it a fricative?

If it were, I would have used /v/ instead. As IPA defines, /ʋ/ is a
labiodental approximant, voiced in this case.

> I should just push this through
> my speech synthesizer but that will add its own artifacts :<

If it is English-only, then sure.

>>> That might make it easier to commit to memory...)
>>>
>>> Boudewijn Dijkstra wrote:
>>>> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
>>>> <not.going.to.be(a)seen.com> wrote:
>>>>> [...]
>>>>>
>>>>> I also haven't decided how to handle multiple
>>>>> requests (queue based on task priority? size?)
>>>> Wake all waiters in malloc, let the scheduler activate them in
>>>> order, check if allocation can be satisfied (and check for timed-out
>>>> requests), go back to sleep otherwise.
>>>
>>> Timers are handled independently -- since you want those
>>> tasks blocking with timeouts to be restarted when their
>>> timers expire regardless of whether or not you have any
>>> malloc/free activity.
>> Fair enough.
>
> I let callers optionally pass a Timer and/or Mutex to each routine.
> So, if you (the caller) *know* there are no competitors, you
> can skip the Mutex.

I'm not quite sure I understand. One would use the mutex to "hold off the
competition"?

> And, for "guaranteed to run to completion"
> routines (without competitors) -- like "release()" -- there is no
> need for a Timer.
>
> The whole point is to let the developer get the subsystem to
> fit *his* needs instead of the other way around.
>
>>> I am still uncertain about what strategy to use when stuff
>>> is added to the heap (by free). I.e., try to satisfy *any*
>>> pending request? Try to satisfy the request of the task
>>> with the highest runtime priority? Try to satisfy the
>>> first/oldest request? etc.
>> Surely the highest priority task is the one you want to hold up
>> waiting for the shortest time possible. If a lower priority was there
>> first, then it can wait a little longer, because it is just a lower
>> priority.
>
> That *seems* right but I haven't thought it through to its
> conclusion. E.g., if the lower priority task *had* arrived
> earlier *and* memory had been available, it wouldn't have
> blocked.

Obviously.

> Presumably, its blocking had some effect on the higher priority
> task eventually queuing up in the allocator as well.

The most probable cause is that neither allocation request can be
satisfied at this time.

> If the lower priority task *had* managed to gain the memory
> requested *when* originally requested *and* the higher priority
> task eventually came along and blocked, the lower priority
> task would have had to inherit its priority in order to "get out
> of its way"

How so? The allocator doesn't know that the higher priority task is about
to make a request.

> (else deadlock as the higher priority task would
> effectively be blocked by the lower priority task -- forever).

Why forever? Are you talking about cyclic alloc/free?

> So, perhaps the *right* solution involves *whichever* task
> is granted the resource inherits the priority of the highest
> waiting (blocked) task. This *seems* right. Which leaves
> the only decision to be "which task to give the resource to".
>
> Dunno. I have to think through the various scenarios to
> see which is least prone to deadlock (hence the reason for
> the post). It's these sorts of "little" decisions that
> cause things to break in very *grand* ways. :<


--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
From: D Yuniskis on
Hi Boudewijn,

Boudewijn Dijkstra wrote:
> On Mon, 08 Mar 2010 05:40:34 +0100, D Yuniskis
> <not.going.to.be(a)seen.com> wrote:
>> Boudewijn Dijkstra wrote:
>>> On Thu, 04 Mar 2010 23:54:22 +0100, D Yuniskis
>>> <not.going.to.be(a)seen.com> wrote:
>>>> Hi Boudewijn,
>>>>
>>>> (wow, every time I type that, I have to triple check my spelling!
>>>> :-/ How would I pronounce it?
>>
>>> IPA: /ˈbʌʊdəˌʋɛɪn/
>>> - /ʌʊ/ sounds like the 'o' in 'to bow', but shorter;
>>> - /ʋ/ doesn't occur in English, the 'r' in 'red' comes close
>>> phonetically (it is still a 'w', but not as long and vowel-like as the
>>> English 'w').
>>
>> I'm confused. Is it a fricative?
>
> If it were, I would have used /v/ instead. As IPA defines, /ʋ/ is a
> labiodental approximant, voiced in this case.

Hmmm... OK. I'll have to research it, then. Not used in
other European languages?

>> I should just push this through
>> my speech synthesizer but that will add its own artifacts :<
>
> If it is English-only, then sure.

<grin> I figured even *if* it "tried correctly" it would still
end up coming out stilted.

>>>> That might make it easier to commit to memory...)
>>>>
>>>> Boudewijn Dijkstra wrote:
>>>>> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
>>>>> <not.going.to.be(a)seen.com> wrote:
>>>>>> [...]
>>>>>>
>>>>>> I also haven't decided how to handle multiple
>>>>>> requests (queue based on task priority? size?)
>>>>> Wake all waiters in malloc, let the scheduler activate them in
>>>>> order, check if allocation can be satisfied (and check for
>>>>> timed-out requests), go back to sleep otherwise.
>>>>
>>>> Timers are handled independently -- since you want those
>>>> tasks blocking with timeouts to be restarted when their
>>>> timers expire regardless of whether or not you have any
>>>> malloc/free activity.
>>> Fair enough.
>>
>> I let callers optionally pass a Timer and/or Mutex to each routine.
>> So, if you (the caller) *know* there are no competitors, you
>> can skip the Mutex.
>
> I'm not quite sure I understand. One would use the mutex to "hold off
> the competition"?

If the Heap (there can be many) is not shared, no need to
protect it with a synchronization primitive as "you" are the
only consumer.

OTOH, if it *is* shared, pass a MUTEX (handle to a mutex) to
the allocator (as well as free, when the time comes) and
the allocator will attempt to take the mutex before accessing
those aspects of the Heap data structure (e.g., free list)
that need to be manipulated atomically. So, if something else
is already doing so, you will end up blocking (until the
mutex is released).

>> And, for "guaranteed to run to completion"
>> routines (without competitors) -- like "release()" -- there is no
>> need for a Timer.
>>
>> The whole point is to let the developer get the subsystem to
>> fit *his* needs instead of the other way around.
>>
>>>> I am still uncertain about what strategy to use when stuff
>>>> is added to the heap (by free). I.e., try to satisfy *any*
>>>> pending request? Try to satisfy the request of the task
>>>> with the highest runtime priority? Try to satisfy the
>>>> first/oldest request? etc.
>>> Surely the highest priority task is the one you want to hold up
>>> waiting for the shortest time possible. If a lower priority was
>>> there first, then it can wait a little longer, because it is just a
>>> lower priority.
>>
>> That *seems* right but I haven't thought it through to its
>> conclusion. E.g., if the lower priority task *had* arrived
>> earlier *and* memory had been available, it wouldn't have
>> blocked.
>
> Obviously.

Yes. So I am trying to think that line of reasoning through to its
conclusion. And see how it affects the outcome of competing
requests (i.e., it seems that the first request might want
to be allowed to run when it can -- but inherit the priority
of the highest priority task that is currently waiting for
the resource it now holds)

>> Presumably, its blocking had some effect on the higher priority
>> task eventually queuing up in the allocator as well.
>
> The most probable cause is that neither allocation request can be
> satisfied at this time.
>
>> If the lower priority task *had* managed to gain the memory
>> requested *when* originally requested *and* the higher priority
>> task eventually came along and blocked, the lower priority
>> task would have had to inherit its priority in order to "get out
>> of its way"
>
> How so? The allocator doesn't know that the higher priority task is
> about to make a request.

No, I'm assuming the higher priority task is now blocking waiting on
that resource -- which is being held by a lower priority task!
So, the lower priority task should have its priority elevated so
that deadlock can't occur.

E.g., don't think of it as memory. Think of a simple resource.
If low priority task holds it and high priority task *needs*
it, then deadlock can occur. If you elevate priority of
low priority task holding resource, then low priority task
won't get stuck later by some medium priority task, etc.

>> (else deadlock as the higher priority task would
>> effectively be blocked by the lower priority task -- forever).
>
> Why forever? Are you talking about cyclic alloc/free?

Low priority task takes resource (forget memory for the time being
and think of just *a* shared resource). Some epsilon later,
high priority task wants that resource. Can't have it because
it is in use. So, high priority task blocks. Now, medium
priority task comes along. Doesn't need the resource -- but,
having higher priority than the "low" priority task, it forces
the low priority task to wait. For as long as the medium priority
task wants to run! The high priority task can't preempt the medium
priority task because it is blocked. The low priority task
can't finish using the resource that it holds because the medium
priority task outranks it. Deadlock -- the classic unbounded
priority-inversion scenario.

If, OTOH, low priority task had its priority elevated to that
of the highest priority task waiting for the resource it
holds, then that "was low" priority task could continue to run
and, presumably, release that resource (at which time its
priority would return to normal, high priority task would
become unblocked because resource was now available and
resume running).

This was the essence of my post. I *think* requests can
be satisfied in the order they arrive. As a thought experiment,
imagine what would have happened if the heap had been bigger
and the first request hadn't blocked...

>> So, perhaps the *right* solution involves *whichever* task
>> is granted the resource inherits the priority of the highest
>> waiting (blocked) task. This *seems* right. Which leaves
>> the only decision to be "which task to give the resource to"
>>
>> Dunno. I have to think through the various scenarios to
>> see which is least prone to deadlock (hence the reason for
>> the post). It's these sorts of "little" decisions that
>> cause things to break in very *grand* ways. :<