From: David Brown on
On 02/03/2010 22:31, D Yuniskis wrote:
> Hi Tim,
>
> Tim Wescott wrote:
<snip>
>> But I think you're better off just not using malloc at all unless you
>> don't care about the system being real time.
>
> It is a fallacy to assume that malloc precludes real-time
> system designs. That sort of argument extends to imply that
> you can't design a real-time system with a network stack.
> Or a serial port. Or any other "thing" that can't be
> deterministically controlled in the development effort.
>

I suspect that the majority of the embedded systems I have written over
the years have had a serial port - /none/ have used malloc. (I'm
talking about small/medium systems here, not systems with an MMU, "big"
OS, etc.)

You are mixing up "dynamic memory" and "malloc" here. malloc/free is
only one way to implement dynamic memory - it aims to be very general
and convenient, but is almost always non-deterministic in run time, and
has a lot of overhead in time, ram space, and code space.

Additionally, it is often very hard to predict if malloc allocations
will succeed or not, and failure to allocate memory can easily mean
failure of the application (and I don't just mean if the programmer
forgets to check the return value - there may be nothing sensible the
program can do if 0 is returned).

Thus when a small and/or real-time embedded system needs dynamic memory
(such as for handling network buffers), you typically use specialised
code using statically-allocated space.
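
A minimal sketch of what I mean (untested, and the sizes and names are
invented purely for illustration): a fixed number of fixed-size buffers
chained on a free list at init, so allocation and release are O(1) and
the worst case is known before the code ever runs.

#include <stddef.h>
#include <stdint.h>

#define POOL_BLOCKS      16
#define POOL_BLOCK_SIZE  256u     /* e.g. one network buffer */

typedef union block {
    union block *next;            /* valid only while the block is free */
    uint8_t      data[POOL_BLOCK_SIZE];
} block_t;

static block_t  pool[POOL_BLOCKS];
static block_t *free_list;

void pool_init(void)
{
    for (int i = 0; i < POOL_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

/* call these under whatever locking the rest of the system uses */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                     /* NULL means the pool is exhausted */
}

void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}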

Apart from the special case of only using malloc at startup, and never
freeing the memory, you should treat malloc with extreme caution in an
embedded program - it is just too difficult to be sure your program is
really correct.

> As with any RT design, you carefully decide which actions
> *require* deterministic responses; which of those are HRT
> vs. SRT; and which things can be done "as available". You

That's true enough - while malloc is totally inappropriate for sections
that need RT response, it is certainly possible to use non-deterministic
functions in non-RT parts. However, malloc really has to be considered
an unreliable function, not just non-deterministic, since it may fail at
unexpected times. And unreliable functions have no place in an embedded
system.

> can also design the system so that it *handles* missed
> deadlines gracefully -- even rearranging the workload to
> prevent such overruns from happening in the future.
>

Why would you even want to do this - except as a way to work around an
inappropriate dynamic memory system? It is far better to figure out
what your system needs, and if you need dynamic memory then write code
that you know will work in your system. If you know that you will have
8 tasks that need large buffers, and you only have ram space for 4 at a
time, then declare 4 static buffers and write code to control allocation
of them - it will be deterministic, reliable and predictable.
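
For a case like that you don't even need a free list - four statically
declared buffers and a flag apiece will do. Untested sketch, buffer
size invented:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_BUFFERS  4
#define BUFFER_SIZE  2048u        /* whatever "large" means here */

static uint8_t buffers[NUM_BUFFERS][BUFFER_SIZE];
static bool    in_use[NUM_BUFFERS];

void *buffer_acquire(void)        /* call inside a critical section */
{
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (!in_use[i]) {
            in_use[i] = true;
            return buffers[i];
        }
    }
    return NULL;                  /* all four busy - caller decides what to do */
}

void buffer_release(void *p)
{
    for (int i = 0; i < NUM_BUFFERS; i++) {
        if (p == buffers[i]) {
            in_use[i] = false;
            return;
        }
    }
}

Nothing clever, but you can see at a glance what the worst case is.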

> E.g., crt0 brings up the RTOS. Do *all* of the objects used
> by the RTOS have to be statically defined? Is there something
> that precludes building task contexts from the system heap
> (instead of declaring them as static)? Is there something
> that precludes creating per-thread stacks from that heap?
> Is there something that precludes creating per-thread
> *heaps* from that heap?
>

Allocating memory during startup and initialisation, and never freeing
it, is a special case of dynamic memory. Even then I would not use
malloc - I'd work hard to figure out a way to allocate the memory
statically, and failing that would write my own allocator. But it is
certainly possible to use malloc in such cases - the run-time is
manageable, it is easy to test for correctness, and runs should be
repeatable. Of course, you still waste time, ram and flash space in
malloc overhead.
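
For the startup-only case, "writing your own allocator" amounts to
almost nothing - something like this untested sketch (pool size and
names invented), which hands out memory from a static array and
deliberately has no free():

#include <stddef.h>
#include <stdint.h>

#define INIT_POOL_SIZE  4096u
#define INIT_ALIGN      8u        /* assumed worst-case alignment on this target */

static union {
    uint8_t   bytes[INIT_POOL_SIZE];
    long long align;              /* force suitable alignment of the pool */
} init_pool;

static size_t init_used;

void *init_alloc(size_t size)
{
    /* round up so every allocation stays suitably aligned */
    size = (size + (INIT_ALIGN - 1)) & ~(size_t)(INIT_ALIGN - 1);

    if (size > INIT_POOL_SIZE - init_used)
        return NULL;              /* caught once, during startup testing */

    void *p = &init_pool.bytes[init_used];
    init_used += size;
    return p;
}

The failure case shows up immediately on the first run, not at some
random point weeks later.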

> Depriving yourself of a tool (e.g., the Heap) is something
> you should only do if you *know* it won't work -- or, if you
> can't figure out how to craft your application to use it
> effectively within the constraints imposed by it. It's
> like deciding *universally* not to use long doubles
> "because they are too slow"...

Yes, and depriving yourself of a pneumatic drill in your electronics lab
is something you should only do if you /know/ it won't work....

Dynamic memory /can/ be a useful tool, but it should be considered
dangerous because it is often unpredictable. It is always better if you
can find a clearer solution which you can easily see is correct. It is
certainly true that dynamic memory is sometimes the cleanest solution -
but even then, standard "malloc" should generally be avoided. Consider
your program's dynamic memory needs, and write appropriate code for it -
you can almost always do better than malloc.

It is a good thing that you are thinking about the problems in malloc,
and want to write something better. Just don't call it "malloc",
because if it avoids the failings of malloc (in the context of small
embedded systems), then it is no longer malloc.
From: Boudewijn Dijkstra on
On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
<not.going.to.be(a)seen.com> wrote:
> [no idea why I bother including c.realtime as it's deader'n a
> doornail, there! :< ]
>
> I'm thinking of implementing a "wait until satisfied"
> option in malloc() [call it by some other name so as not
> to muddy the typical expectations of malloc(3)].
> In essence, if malloc() *would* return NULL, it,
> instead, *blocks* waiting for the Heap to be able to
> satisfy the request.

What is this talk of "heap" I keep hearing whenever malloc is mentioned?
Neither the C standard nor any POSIX/Unix specification requires that
a heap be used.

> The first (naive) strategy I considered was to block all
> future malloc()'s -- i.e., don't let anything else be
> withdrawn from the heap -- and only allow free()'s to
> proceed. *Naive* in that it assumes the resources the
> first caller awaits are going to be released "soon".
>
> *Dangerous* in that it is rife with potential for deadlock! :<

Not if you can specify a time-out period (between 'endless' and 'none').

> So, the only realistic implementation is to allow malloc()
> and free() to proceed unimpeded. However, I can hook all
> free()'s so the blocked request(s) is reissued prior to
> return-ing.
>
> I've not decided if the free() should then preempt the current
> task or simply perform the deferred allocation and return
> to its caller.

Shouldn't the scheduler take care of that?

> I also haven't decided how to handle multiple
> requests (queue based on task priority? size?)

Wake all waiters in malloc, let the scheduler activate them in order,
check if allocation can be satisfied (and check for timed-out requests),
go back to sleep otherwise.
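
In (untested) POSIX-flavoured C the pattern looks roughly like this;
the condition variable stands in for whatever wake-up primitive your
RTOS actually provides, and try_alloc() is a hypothetical non-blocking
allocator:

#include <errno.h>
#include <pthread.h>
#include <stddef.h>
#include <time.h>

extern void *try_alloc(size_t size);       /* hypothetical: NULL if no fit */

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  heap_grew = PTHREAD_COND_INITIALIZER;

void *alloc_wait(size_t size, const struct timespec *deadline)
{
    void *p;

    pthread_mutex_lock(&heap_lock);
    while ((p = try_alloc(size)) == NULL) {
        /* sleep until a free() broadcasts, or until the deadline passes */
        if (pthread_cond_timedwait(&heap_grew, &heap_lock, deadline) == ETIMEDOUT)
            break;                          /* timed out: p stays NULL */
    }
    pthread_mutex_unlock(&heap_lock);
    return p;
}

void free_notify(void *p)
{
    pthread_mutex_lock(&heap_lock);
    (void)p;    /* placeholder: real code returns p to the heap and coalesces */
    pthread_cond_broadcast(&heap_grew);     /* wake all waiters to re-check */
    pthread_mutex_unlock(&heap_lock);
}

A real RTOS version would differ in the primitives, but the shape is
the same.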

> [Note that this can be implemented efficiently and need not
> bear the cost of an embedded "malloc()"-like invocation.]
>
> Notice that the behavior blocking-for-request differs from
> moving this activity up to the caller (i.e., call malloc,
> get NULL, lather/rinse/repeat).
>
> Are there any other issues that I am missing? Or, dragons to
> be wary of?

Look how modern commercial RTOSes do this.
http://sciopta.com/doc/doc.html


--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
(remove the obvious prefix to reply by mail)
From: jacko on
Linked lists and malloc? This is a bad excuse. The Pascal book was
fine for learning, but (void*) allows so much more - an array of
(void *), statically defined. It's the odd-size requests that make
malloc slow, and the overhead of, say, 4K per list item with a page
allocator is just stupid. I thought the list's dual (void*) structure
was so that recycling of nodes would be done by making a free chain.
This can then be pseudo-quicksorted by range to combine local list
items into allocatable cache lines.

;)

Cheers Jacko
From: WangoTango on
In article <op.u8zi45kaj0diun(a)azrael.indes.com>,
sp4mtr4p.boudewijn(a)indes.com says...
>
> What is this talk of "heap" I keep hearing whenever malloc is mentioned?
> Neither the C standard nor any POSIX/Unix specification requires that
> a heap be used.

Now, I am pulling this out of LONG unused memory locations, so it might
be less than the gospel, but......

You can thank Intel/segmented architecture for that little bit of
terminology.
Some of the memory models used on the PC had a 'near' or 'far' heap
for memory allocation. The near heap was all in one segment and could
be accessed quickly using one segment register, BUT it was limited to
64K. The far heap, on the other hand, was limited only by available
memory, but needed segment arithmetic to reach, so it was slower.
So, I think the term heap was coined to denote a pile of memory
available for dynamic use. Pull it out of the heap and toss it back on
when done.

I'm sure I have some part of that wrong, and will be corrected by the
authorities on such matters in record time. :)

From: D Yuniskis on
Hi Boudewijn,

(wow, every time I type that, I have to triple check my spelling!
:-/ How would I pronounce it? That might make it easier to
commit to memory...)

Boudewijn Dijkstra wrote:
> On Tue, 02 Mar 2010 19:06:14 +0100, D Yuniskis
> <not.going.to.be(a)seen.com> wrote:
>> [no idea why I bother including c.realtime as it's deader'n a
>> doornail, there! :< ]
>>
>> I'm thinking of implementing a "wait until satisfied"
>> option in malloc() [call it by some other name so as not
>> to muddy the typical expectations of malloc(3)].
>> In essence, if malloc() *would* return NULL, it,
>> instead, *blocks* waiting for the Heap to be able to
>> satisfy the request.
>
> What is this talk of "heap" I keep hearing whenever malloc is
> mentioned? Neither the C standard nor any POSIX/Unix specification
> requires that a heap be used.

I think it's just one of those terms that has migrated into
the vernacular over the years. E.g., I see no mention of it
in System 6 (which is about as far back as I can conveniently
go). Note, also, that memory was a different beast in the
early days of UNIX (e.g., sbrk()).

>> The first (naive) strategy I considered was to block all
>> future malloc()'s -- i.e., don't let anything else be
>> withdrawn from the heap -- and only allow free()'s to
>> proceed. *Naive* in that it assumes the resources the
>> first caller awaits are going to be released "soon".
>>
>> *Dangerous* in that it is rife with potential for deadlock! :<
>
> Not if you can specify a time-out period (between 'endless' and 'none').
>
>> So, the only realistic implementation is to allow malloc()
>> and free() to proceed unimpeded. However, I can hook all
>> free()'s so the blocked request(s) is reissued prior to
>> return-ing.
>>
>> I've not decided if the free() should then preempt the current
>> task or simply perform the deferred allocation and return
>> to its caller.
>
> Shouldn't the scheduler take care of that?

Yes. I'm talking about an efficiency hack. I lump memory
allocation *in* the OS instead of on *top* of it in a standard
library. As such, the memory management routines can avail
themselves of the hooks that are available to the OS.

So, free can know about malloc's implementation. And it can know
that it is possible for some request(s) to be pending "within"
malloc, waiting for "fragment(s) larger than presently available".
Since free "knows" that it is adding to the amount of available
memory, and since free()'s actions may result in the addition
of a "large enough" fragment -- or *creation* of a large
enough fragment (as the freed region is coalesced with
existing fragments within the heap), it would be easy for free
to peek at the "pending request" list to see if its actions
would enable any of those requests to be satisfied. Then,
instead of returning directly to the caller, it can return
"through" malloc deliberately handing the newly created
"large enough" fragment to whichever request awaits it.

It's just "faster".
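
In (untested) outline, it would look something like this - the extern
declarations stand in for the allocator's own internals, and all the
names are invented:

#include <stddef.h>

struct pending_req {
    struct pending_req *next;
    size_t              size;      /* bytes the blocked caller asked for */
    void               *result;    /* filled in by free() when satisfied */
    /* ...plus whatever handle the scheduler needs to unblock the caller */
};

static struct pending_req *pending_list;   /* callers blocked inside "malloc" */

/* stand-ins for the allocator's own internals */
extern void *try_alloc(size_t size);       /* non-blocking; NULL if no fit */
extern void  insert_and_coalesce(void *p); /* return a block to the heap   */
extern void  wake_caller(struct pending_req *r);

void my_free(void *p)
{
    insert_and_coalesce(p);

    /* Before returning, see whether the enlarged or newly created
       fragments can satisfy anyone already waiting "inside" malloc. */
    for (struct pending_req **pp = &pending_list; *pp != NULL; ) {
        void *mem = try_alloc((*pp)->size);
        if (mem != NULL) {
            struct pending_req *r = *pp;
            *pp = r->next;         /* unlink the satisfied request        */
            r->result = mem;       /* hand the fragment straight to it    */
            wake_caller(r);        /* ready it, or preempt - policy choice */
        } else {
            pp = &(*pp)->next;
        }
    }
}

Whether wake_caller() merely readies the waiting task or preempts the
current one is the open policy question I mentioned.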

>> I also haven't decided how to handle multiple
>> requests (queue based on task priority? size?)
>
> Wake all waiters in malloc, let the scheduler activate them in order,
> check if allocation can be satisfied (and check for timed-out requests),
> go back to sleep otherwise.

Timers are handled independently -- since you want those
tasks blocking with timeouts to be restarted when their
timers expire regardless of whether or not you have any
malloc/free activity.

I am still uncertain about what strategy to use when stuff
is added to the heap (by free). I.e., try to satisfy *any*
pending request? Try to satisfy the request of the task
with the highest runtime priority? Try to satisfy the
first/oldest request? etc.

>> [Note that this can be implemented efficiently and need not
>> bear the cost of an embedded "malloc()"-like invocation.]
>>
>> Notice that the behavior blocking-for-request differs from
>> moving this activity up to the caller (i.e., call malloc,
>> get NULL, lather/rinse/repeat).
>>
>> Are there any other issues that I am missing? Or, dragons to
>> be wary of?
>
> Look how modern commercial RTOSes do this.
> http://sciopta.com/doc/doc.html

Thanks, I'll have a look to see if they have anything pertinent.