From: Thomas Pornin on
According to Patricia Shanahan <pats(a)acm.org>:
> This effect happens, to some degree, even for simple integers in C. I
> remember having to allocate an extra register in some situations for
> x++. The code for ++x could use the same register to represent x and
> to carry the intermediate result.

That's because you use the expression result. The difference between x++
and ++x is precisely that: x++ evaluates to the old value of x, while
++x evaluates to the new value. This is not an optimization question at
all: semantics are distinct. There is no question of preferring one over
the other on efficiency grounds: they do not compute the same thing.
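A minimal Java sketch of the distinction (class and variable names are illustrative):

```java
public class IncDemo {
    static int postIncrement() {
        int x = 5;
        return x++;   // evaluates to the OLD value (5); x becomes 6 afterwards
    }

    static int preIncrement() {
        int x = 5;
        return ++x;   // evaluates to the NEW value (6)
    }

    public static void main(String[] args) {
        System.out.println(postIncrement()); // prints 5
        System.out.println(preIncrement());  // prints 6
    }
}
```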

On the other hand, if you do not use the resulting value, then any
half-decent optimizing compiler will produce the same code for x++ and
++x. That is, when the matter is purely a question of optimization, then
the compiler should optimize it perfectly without bothering the
programmer. This is a fully local and syntactic optimization. If your
compiler does not handle it perfectly, and you still use that compiler,
then optimization is not important for you.

(In C++, with its overloading of operators by arbitrary code, the
situation could be different. In a way, in the presence of overloading
and complex classes, the "result" of x++ is always used, at least
implicitly.)


--Thomas Pornin
From: Thomas Pornin on
According to Andreas Leitgeb <avl(a)logic.at>:
> So, what to say to those whining^H^H^H^H^H^H^Hsuggesting
> making just "++" atomic?

First, atomicity makes sense only with regard to concurrent access.
So it would matter only for class and instance fields, not for local
variables.

At the bytecode level, class and instance fields are accessed through
four specific opcodes: putfield, getfield, putstatic and getstatic. To
make '++' atomic would require the addition of a few additional opcodes
(four more, for prefix and postfix '++', for static and instance fields, and
four others for '--'). On the whole, people at Sun seem quite loath to
add new opcodes, so it is understandable that they shrank away from
adding eight of them.

Also, atomic read-modify-write operations are expensive in multi-core
systems. A '++' on a field is quite common in Java, and programmers
expect it to be fast. So atomicity could not be made the default (Java
designers care about such micro-optimization, otherwise they would not
have defined the plethora of pseudo-integer types with distinct sizes).
If atomicity of '++' is not the default (e.g. they could have reserved
it for 'volatile' fields) then it is reserved for some rather specific
situations, in which using some library code is not an ordeal: an atomic
'++' is sufficiently rare that forcing the programmer to use
AtomicInteger.getAndIncrement() in that situation is no big deal. Java
designers are conservative with regards to syntax extensions and new
opcodes, but they are also quite trigger-happy when it comes to new
library functions and classes, so it is quite understandable that they
chose the latter.
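A sketch of that trade-off (class name and iteration count are mine, not from the thread): a '++' on a plain field compiles to a separate read, add, and write, so two threads can interleave and lose updates, while the library class java.util.concurrent.atomic.AtomicInteger performs the read-modify-write atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRace {
    static volatile int plain = 0;                    // '++' on this can lose updates
    static final AtomicInteger atomic = new AtomicInteger();

    static void race() {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // read, add 1, write: not atomic
                atomic.getAndIncrement(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        try {
            t1.start(); t2.start();
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        race();
        System.out.println("plain  = " + plain);        // often less than 200000
        System.out.println("atomic = " + atomic.get()); // always exactly 200000
    }
}
```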

To put it the other way round: if you want new features integrated in
the Java syntax, rather than offered as library classes, then do not use
Java, use C#. The rate of addition of syntactic features is precisely the
most compelling difference between C# and Java.

(Note that I do not claim that '++' in C# provides atomicity -- I
actually do not know -- but if C# '++' is not atomic, at least I find it
quite plausible that it could become atomic in a future version.)


--Thomas Pornin
From: Andreas Leitgeb on
Eric Sosman <esosman(a)ieee-dot-org.invalid> wrote:
> On 2/14/2010 10:32 PM, Andreas Leitgeb wrote:
>> Lew <noone(a)lewscanon.com> wrote:
>>> Eric Sosman wrote in this thread on 2/12:
>>>>>> Before anybody whines^H^H^H^H^H^Hsuggests that making +=
>>>>>> atomic would be easy, let him ponder
>>>>>> volatile int a,b,c,...,z;
>>>>>> a += b += c+= ... += z;
>> On re-thought, even Eric's argument seems not too strong anymore:
>> First, z is read,
>> Then y is atomically increased by the previously read value.
>> ...
> It breaks Java's "left to right evaluation" rule, though.

I surrender to this point.

// in class context
int a;
int foo() { a = 42; return 21; }
{ a = 21; a += foo(); }  // -> a == 42
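A complete, runnable version of the snippet above (class name is mine). The old value of a is read before foo() runs, so the result is 21 + 21 = 42, not 42 + 21 = 63, confirming left-to-right evaluation:

```java
public class EvalOrder {
    static int a;

    static int foo() {
        a = 42;       // clobbers a as a side effect
        return 21;
    }

    static int run() {
        a = 21;
        a += foo();   // a's OLD value (21) is read first, then foo() is called
        return a;     // 21 + 21 == 42
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 42
    }
}
```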

That renders my previous posts moot w.r.t. "op=".

>> The real reason boils down to that those atomic operations are still
>> slightly slower than the non-atomic ones, while the cases not caring
>> about atomicity by far outnumber the others. That's why I added
>> those "(not me!)"s.
> My impression (I'm by no means a hardware expert) is that the
> time penalty is considerably worse than "slight." It'll depend a
> lot on the nature of the hardware, though.

If just "++" and "--" were changed, and only for volatiles, then the
change might fix more broken programs than it would slow down. (And
volatiles already "suffer" from uncached memory access, performance-wise,
so the slowdown wouldn't be all that bad, relatively speaking.)
Also, that change wouldn't break a program unless it explicitly relied on
non-atomicity - is that something a "correct" program is allowed to do?

I want to learn about arguments that thwart even that part, just as
I'm happy to have just learnt an argument against atomic op=.

From: Eric Sosman on
On 2/15/2010 6:47 PM, Lew wrote:
> Eric Sosman wrote:
>> When I actually want the value of the expression, I write
>> whichever I need (usually a[x++] or a[--x]). When all I want
>> is the side-effect, I write ++x because "increment x" seems to
>> read more smoothly than "x increment."
>
> Which effect is the "side" effect? Isn't incrementation a primary effect
> of the "increment" operator?

The "side effect" is the storing of a new value in x.
JLS Chapter 15, first sentence:

"Much of the work in a program is done by evaluating
expressions, either for their side effects, such as
assignments to variables, [...]"

Section 15.1, second paragraph:

"Evaluation of an expression can also produce side
effects, because expressions may contain embedded
assignments, increment operators, decrement operators,
and method invocations."

--
Eric Sosman
esosman(a)ieee-dot-org.invalid
From: Mike Schilling on
Thomas Pornin wrote:
>
> (In C++ with their overloading of operators with arbitrary code, the
> situation could be different. In a way, in the presence of overloading
> and complex classes, the "result" of x++ is always used, at least
> implicitely.)

That is, it's always computed, whether it's used or not.

