From: Patricia Shanahan on
Arved Sandstrom wrote:
> Nigel Wade wrote:
>> On Mon, 23 Nov 2009 10:14:32 -0800, Roedy Green wrote:
>>
>>> On 23 Nov 2009 13:54:01 GMT, Thomas Pornin <pornin(a)bolet.org> wrote,
>>> quoted or indirectly quoted someone who said :
>>>
>>>> most of the memory used in a
>>>> typical application is for non-pointer data, e.g. byte[] or char[],
>>> In my own code, Strings and arrays of pointers to Strings would be the
>>> most common ram hog.
>>>
>>> I wonder if someone could cook up a simple tool to predict the effect of
>>> going to 64 bit on any given app.
>>
>> Given that going 64bit lifts you out of the 3GB straight-jacket, I
>> predict the effect of going 64bit would be to free you from worries
>> about the amount of RAM your application requires and concentrate on
>> other, more important issues.
>>
> Leaving aside the specific numbers, how many times over the past thirty
> years (rough timeframe for the Age of PCs [1]) have we said the same
> thing? :-)
>

I spend a lot less time on memory issues now than I did in 1970. I
remember being allowed a couple of weeks for adding a new input record
type to a program. I would do the same task in minutes in my current
Java program.

The main difficulty was that the program was close to no longer fitting
in a 16KB machine. Most of the time went on rearranging overlays and
reusing buffers to make space to add new code and data. Working in
assembly language on punch cards with one machine time slot per day
didn't help, but was less of a problem than the memory.

However, there is a definite tendency for programming to stay hard.
Features get added until programs are almost, but not quite, impossible
to write. It's just the type of hardness that changes. I think
multi-threading to get full performance on many core computers has a lot
of potential.

Patricia
From: Thomas Pornin on
According to DuncanIdaho <Duncan.Idaho2008(a)googlemail.com>:
> Do 64 bit registers mean a 64 bit memory bus ... do we really need to
> fetch 64 bits for a short (oops, sorry Short) ...

CPUs already work in terms of cache lines. A CPU fetches data only(*) from its
innermost cache (the L1 cache), which it fills by "lines" from the
L2 cache, and so on. The lines in the L1 cache have a size which
depends on the specific CPU architecture; typical sizes are 32 or
64 bytes (note: bytes, not bits).
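A quick way to see the line-granularity of fetches from Java is to compare a full array traversal with a traversal that touches each line only once. This is a sketch, not a benchmark: the 64-byte line (16 ints) is an assumption, and absolute timings vary wildly by machine and JVM.

```java
// Sketch: both loops pull the same number of cache lines from memory,
// but the stride-16 loop does only 1/16th of the arithmetic. If memory
// traffic dominates, the two timings end up much closer than 16x apart.
// ASSUMPTION: a 64-byte cache line, i.e. 16 four-byte ints per line.
public class CacheLineDemo {

    static long touch(int[] a, int stride) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < a.length; i += stride) {
            sum += a[i];
        }
        long ms = (System.nanoTime() - start) / 1000000;
        System.out.println("stride " + stride + ": " + ms + " ms");
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[1 << 24]; // 16M ints = 64 MB
        touch(a, 1);   // 16 reads per 64-byte line
        touch(a, 16);  // one read per line; rarely anywhere near 16x faster
    }
}
```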

64-bit mode means that you will need a bit more cache to handle the
same set of references. Cache pressure may reduce performance. On the
other hand, in 64-bit mode on x86 processors, registers are not only
larger, there are also more of them, which helps the JVM keep things
in registers, i.e. not using cache at all.

In my own experience, most of the time, going 64-bit does not measurably
modify performance. When performance _is_ modified in a non-negligible
way, then most of the time the 64-bit version is faster. For some very
specific usages, the performance gain with 64-bit mode can be dramatic (in
particular BigInteger.modPow(); also, the SHA-512 hash function). It is
probable that one can find some very specific usages where 32-bit mode
beats 64-bit mode by a large margin, but I have not come across any.
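To check the modPow() claim yourself, a minimal timing sketch like the following can be run on a 32-bit and a 64-bit JVM of the same version; the 1024-bit operand size and the iteration counts are arbitrary choices for illustration.

```java
import java.math.BigInteger;
import java.util.Random;

// Sketch: time BigInteger.modPow(), one of the operations that benefits
// markedly from 64-bit mode (wider words halve the inner-loop counts).
public class ModPowBench {
    public static void main(String[] args) {
        Random rnd = new Random(1); // fixed seed: same numbers on both JVMs
        BigInteger base = new BigInteger(1024, rnd);
        BigInteger exp  = new BigInteger(1024, rnd);
        // force a full-length, odd modulus (modPow requires it positive)
        BigInteger mod  = new BigInteger(1024, rnd).setBit(1023).setBit(0);

        // warm-up so the JIT compiles the hot paths before we measure
        for (int i = 0; i < 50; i++) base.modPow(exp, mod);

        long start = System.nanoTime();
        for (int i = 0; i < 200; i++) base.modPow(exp, mod);
        long ns = System.nanoTime() - start;
        System.out.println("1024-bit modPow: " + (ns / 200 / 1000) + " us/op");
    }
}
```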


--Thomas Pornin

(*) That is, except when the CPU fetches data from main memory directly.
This happens on some models when using synchronization features such as
'volatile'.
From: Arved Sandstrom on
Patricia Shanahan wrote:
> Arved Sandstrom wrote:
[ SNIP ]

>> Leaving aside the specific numbers, how many times over the past
>> thirty years (rough timeframe for the Age of PCs [1]) have we said the
>> same thing? :-)
>>
>
> I spend a lot less time on memory issues now than I did in 1970. I
> remember being allowed a couple of weeks for adding a new input record
> type to a program. I would do the same task in minutes in my current
> Java program.
>
> The main difficulty was that the program was close to no longer fitting
> in a 16KB machine. Most of the time went on rearranging overlays and
> reusing buffers to make space to add new code and data. Working in
> assembly language on punch cards with one machine time slot per day
> didn't help, but was less of a problem than the memory.
>
> However, there is a definite tendency for programming to stay hard.
> Features get added until programs are almost, but not quite, impossible
> to write. It's just the type of hardness that changes. I think
> multi-threading to get full performance on many core computers has a lot
> of potential.
>
> Patricia

I was being somewhat tongue in cheek about the memory thing. :-) I have
to agree, even with application bloat I spend a much smaller fraction of
my time these days worrying about memory than I did 20 or 30 years ago.
Well, to be accurate, I still worry about memory but now it's not RAM
I'm thinking about.

Interesting observation about programming tending to stay hard, with
only the type of hardness changing. I agree. I suspect that neither one
of us is referring to "hardness" at the architectural and design levels,
which despite the flurry of new software development methodologies (and
an explosion of tool support) still doesn't seem to be getting much
better, but rather to "hardness" at the implementation level. I think
you're right. 15 years ago I would have considered one of my harder
problems doing up a desktop GUI app, and now that's relatively easy. I
think that right now, and for the past while, if I had to identify one
"hard" area it would probably be threading. As you point out, threading
and parallelism are probably going to remain a "hard" area for a while.

AHS
From: Thomas Pornin on
According to Arved Sandstrom <dcest61(a)hotmail.com>:
> Well, to be accurate, I still worry about memory but now it's not RAM
> I'm thinking about.

When I worry about RAM, it is mostly for a few routines where performance
happens to be critical, and there I worry about "fast RAM". Fast RAM is
the RAM which is accessed without any extra delay. On a recent CPU this
means L1 cache. My relatively recent Intel CPU has a 32 kB L1 cache
for data.

The home computer I was using in 1984 (that's 25 years ago) also had
32 kB of fast RAM. To be fair, it had only 32 kB of RAM, but at least
all of it was as fast as the machine could handle.

I tend to think that problems which humans want to handle with a
computer have a rather constant size, about 32 kB. Computers get faster
and faster, and have much more RAM, but problem size does not expand.
Human patience, however, shrinks fast.


--Thomas Pornin
From: DuncanIdaho on
Thomas Pornin wrote:
> According to DuncanIdaho <Duncan.Idaho2008(a)googlemail.com>:
>> Do 64 bit registers mean a 64 bit memory bus ... do we really need to
>> fetch 64 bits for a short (oops, sorry Short) ...
>

snip


> It is
> probable that one can find some very specific usages where 32-bit mode
> beats 64-bit mode by a large margin, but I have not come accross any.
>

OK, the responses to my original post have really got my head buzzing
with all this low-level stuff now. It's been years, and I'd forgotten how
obsessive I can get about these nitty-gritty little bits and pieces.
However, the bottom line for me, in a practical sense (ignoring
execution issues for the time being), is ...


If I develop against a 64-bit JVM (say, because I gotta have the 'latest
and greatest', no other reason) and then compile the code on a client
machine against a 32-bit JVM, then the only possible problem could be that
I'm using some humongous great data structure that needs bigger-than-32-bit
addresses, in which case I should probably be sacked anyway ... or am
I wide of the mark?
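For what it's worth, the class files themselves are bitness-agnostic; only the JVM differs. A quick startup check of which JVM the code actually landed on might look like this sketch (note: "sun.arch.data.model" is a non-standard Sun/Oracle property and may be absent on other vendors' JVMs, hence the fallback default):

```java
// Sketch: log the data model and heap ceiling of the running JVM.
// "sun.arch.data.model" is Sun/Oracle-specific; "unknown" is the
// fallback when the property is not defined.
public class JvmBitness {
    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model", "unknown");
        String arch  = System.getProperty("os.arch");
        System.out.println("data model: " + model + "-bit, os.arch: " + arch);

        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + maxHeap / (1024 * 1024) + " MB");
    }
}
```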


-- Idaho