From: Tim Bradshaw on
On 2010-04-29 18:14:55 +0100, Peter Keller said:

> 546MB up to 1.640GB.
>
> Now this needed memory for which I can easily account!
>
> That is a lot of memory to keep resident and churn through, which is
> why I'm very paranoid about consing or calling make-array. I use other
> libraries and things, and who knows what their memory usage is. But
> at the scaling levels I'm aiming for, I have to pay attention to it.

10 years ago this was a lot of memory. Now it's small change. 50 or
100G is something to think about, but less than 4 doesn't count for
much unless you're planning on some vast number of instances.

From: Espen Vestre on
Tim Bradshaw <tfb(a)tfeb.org> writes:

> 10 years ago this was a lot of memory. Now it's small change. 50 or
> 100G is something to think about, but less than 4 doesn't count for
> much unless you're planning on some vast number of instances.

It depends a bit on the lifetime of the memory and the requirements of
your server. I have one server application instance that uses 6-7GB of
memory, and which needs to run a full GC freeing most of that once a
day. That takes about 5 seconds (I think; I haven't looked at it in
detail for a while, and it has run for months without problems now), which is
not critical at all for *this* server, since I can run the GC at 2 in
the morning when the server is rarely needed. But other applications may
have other requirements.
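
The "GC at 2 in the morning" part is easy to automate, by the way. A
minimal sketch, assuming SBCL (sb-ext:gc and sb-thread:make-thread are
SBCL-specific names; other implementations have their own equivalents,
and this is not the code the server above actually uses):

  ;; Force a full collection once a day at a quiet hour.
  (defun seconds-until-hour (target-hour)
    "Seconds from now until the next occurrence of TARGET-HOUR (0-23)."
    (multiple-value-bind (sec min hour)
        (decode-universal-time (get-universal-time))
      (let ((now (+ (* hour 3600) (* min 60) sec))
            (target (* target-hour 3600)))
        (if (< now target)
            (- target now)
            (+ (- 86400 now) target)))))

  (defun start-nightly-gc (&optional (hour 2))
    "Start a thread that sleeps until HOUR and then forces a full GC."
    (sb-thread:make-thread
     (lambda ()
       (loop
         (sleep (seconds-until-hour hour))
         (sb-ext:gc :full t)))
     :name "nightly-gc"))

Calling (start-nightly-gc) once at startup is enough; the thread just
sleeps until the next quiet hour and triggers the full collection.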
--
(espen)
From: Tim Bradshaw on
On 2010-05-02 09:48:38 +0100, Espen Vestre said:

> It depends a bit on the lifetime of the memory and the requirements of
> your server. I have one server application instance that uses 6-7GB of
> memory, and which needs to run a full GC freeing most of that once a
> day. That takes about 5 seconds (I think; I haven't looked at it in
> detail for a while, and it has run for months without problems now), which is
> not critical at all for *this* server, since I can run the GC at 2 in
> the morning when the server is rarely needed. But other applications may
> have other requirements.

I think that if GCs have problems which make using large heaps
difficult (and 6 or 7GB is not really large), then GC implementations
need to get better - perhaps the Azul people are right about that.

From: Espen Vestre on
Tim Bradshaw <tfb(a)tfeb.org> writes:

> I think that if GCs have problems which make using large heaps
> difficult (and 6 or 7GB is not really large), then GC implementations
> need to get better - perhaps the Azul people are right about that.

I looked at the real numbers from last night: it freed 1.76GB out of 4.48GB
and used 7.6 seconds of CPU time. At least it's way better than a server
app I struggled with some 10 years ago, which used 15 hours for a couple
of hundred megabytes ;-)
--
(espen)
From: Peter Keller on
Espen Vestre <espen(a)vestre.net> wrote:
> Tim Bradshaw <tfb(a)tfeb.org> writes:
>
>> 10 years ago this was a lot of memory. Now it's small change. 50 or
>> 100G is something to think about, but less than 4 doesn't count for
>> much unless you're planning on some vast number of instances.
>
> It depends a bit on the lifetime of the memory and the requirements of
> your server.

So, I got my master/worker codes working, and the epilogue to this was that
I would run out of heap memory (and get a heap exhaustion error!) because
I was creating and interning too many keyword symbols(!!), which permanently
consumed all of the memory.

I was using them as unique identifiers for objects in the codes.... I
see now the folly of that. I've switched the unique identifiers to
another kind of object which can be garbage collected. :)
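
To make the trap concrete, here's a sketch (hypothetical names, not my
actual code): a keyword created with INTERN is registered in the KEYWORD
package forever, so the GC can never reclaim it, while an uninterned
symbol or a plain counter value can be collected once nothing refers to
it.

  ;; Leaky: every call interns a fresh keyword such as :OBJ-12345,
  ;; which stays reachable from the KEYWORD package for the life of
  ;; the image.
  (defun leaky-id (n)
    (intern (format nil "OBJ-~D" n) :keyword))

  ;; GC-friendly alternatives: an uninterned symbol, or a plain number.
  ;; Neither is registered in any package, so once the object holding
  ;; the id is dropped, the id itself can be collected.
  (defun fresh-id ()
    (gensym "OBJ-"))

  (let ((counter 0))
    (defun next-id ()
      (incf counter)))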

Later,
-pete