From: MitchAlsup on 24 May 2010 21:26

On May 24, 8:18 pm, Andrew Reilly <areilly...(a)bigpond.net.au> wrote:
> Floating point numbers and operations really only
> "work" at an assembly language level: order is vitally important. If
> that turns out to make auto-optimisation painful, then so be it:
> optimisation that requires things to happen in different orders
> necessarily requires different numerical analysis,

For example, many budding numericalists are horrified to discover that
((X+0.5)+0.5) has 3 more bits of precision on IBM floating point, for
certain ranges of X, than the similar-looking (X+1.0).

A compiler converting ((X+0.5)+0.5) into (X+1.0) is very bad indeed.

Mitch
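[A small self-contained C sketch of the same hazard. Mitch's example concerns IBM hex floating point; this sketch instead uses IEEE 754 binary64 (an assumption, not what the post describes), where the value of x is chosen so that the two evaluation orders land on different representable numbers. The point is the same: (X+0.5)+0.5 and X+1.0 are not interchangeable, so a compiler that rewrites one into the other silently changes results.]

    /* Sketch: shows that ((x+0.5)+0.5) and (x+1.0) can differ in IEEE 754
     * binary64. At this magnitude the spacing between doubles (ULP) is 2,
     * so the two rounding sequences end on different neighbours.
     */
    #include <stdio.h>

    int main(void)
    {
        volatile double x = 1e16 + 2.0;       /* exactly representable; ULP here is 2 */

        double two_halves = (x + 0.5) + 0.5;  /* each +0.5 rounds down: 10000000000000002 */
        double one_whole  = x + 1.0;          /* exact tie, rounds to even: 10000000000000004 */

        printf("(x+0.5)+0.5 = %.1f\n", two_halves);
        printf("x+1.0       = %.1f\n", one_whole);
        printf("equal? %s\n", two_halves == one_whole ? "yes" : "no");
        return 0;
    }

With the default round-to-nearest-even mode the two results differ by one ULP for this x; a compiler that reassociates (for example under -ffast-math or similar value-changing flags) would collapse them, which is exactly the transformation being objected to here.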
From: nmm1 on 25 May 2010 04:13

In article <860mqgFipU1(a)mid.individual.net>,
Andrew Reilly <areilly---(a)bigpond.net.au> wrote:
>On Mon, 24 May 2010 15:40:00 +0200, Terje Mathisen wrote:
>
>> However, I contend that the definition of "correctness" for fp
>> operations is severely broken in Java.
>
>I agree that Java's (original) notion that the results should always be
>bit-exact is obviously spurious. I think that I agree with Piotr,
>though, that floating point code should (at least by default) be compiled
>"exactly as written". Floating point numbers and operations really only
>"work" at an assembly language level: order is vitally important. If
>that turns out to make auto-optimisation painful, then so be it:
>optimisation that requires things to happen in different orders
>necessarily requires different numerical analysis, and so is rightly
>something that happens at a library level (where that library might well
>lean on explicit parallelism or thread mechanisms.)

I don't know whether to laugh, cry or scream. Floating-point is merely an instantiation of the language's arithmetic model, and its details are rarely of interest to sane numerical code.

The language issue is the arithmetic model and its analysis and NOT the hardware details - just as with anything else, including I/O. 40 years ago, that was common knowledge, and it is reflected in the books of that era - the conflation of the arithmetic model with floating-point is a recent aberration. As I have posted, Fortran did not require it until 1990, and even now there is very little dependence on it in the standard (MUCH less than C).

Lastly, every single general-purpose language that has left all of the tricky semantics to a library and said "that's different" has been a failure in the long term, not least because it's not extensible.

Regards,
Nick Maclaren.
From: Thomas Womack on 25 May 2010 07:17

In article <htg0qv$ach$1(a)smaug.linux.pwf.cam.ac.uk>, <nmm1(a)cam.ac.uk> wrote:
>In article <860mqgFipU1(a)mid.individual.net>,
>Andrew Reilly <areilly---(a)bigpond.net.au> wrote:
>>On Mon, 24 May 2010 15:40:00 +0200, Terje Mathisen wrote:
>>
>>> However, I contend that the definition of "correctness" for fp
>>> operations is severely broken in Java.
>>
>>I agree that Java's (original) notion that the results should always be
>>bit-exact is obviously spurious. I think that I agree with Piotr,
>>though, that floating point code should (at least by default) be compiled
>>"exactly as written". Floating point numbers and operations really only
>>"work" at an assembly language level: order is vitally important. If
>>that turns out to make auto-optimisation painful, then so be it:
>>optimisation that requires things to happen in different orders
>>necessarily requires different numerical analysis, and so is rightly
>>something that happens at a library level (where that library might well
>>lean on explicit parallelism or thread mechanisms.)
>
>I don't know whether to laugh, cry or scream. Floating-point is
>merely an instantiation of the language's arithmetic model, and
>its details are rarely of interest to sane numerical code.

This sounds like an argument for doing 'I-know-what-I'm-doing-and-insist-on-ieee754' using intrinsics, and using plain arithmetic for 'oh compiler, kindly do roughly these sums for me': a = _fadd_754(0.5, _fadd_754(b, 0.5)) turns into two FADD operations, and a = (b+0.5)+0.5 turns into a = b+1.

Of course, the usual use of intrinsics at the moment is for dealing with vectorised registers, where you often do want the compiler to rearrange your intrinsics to fit more nicely, and the Intel compiler is quite good at this.

Tom
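[_fadd_754 is Tom's hypothetical intrinsic, not a real API. As a sketch of the same idea in portable C, one way to get a "do exactly this addition, in this order" guarantee is to force each intermediate result through a volatile temporary, which the compiler must materialise and therefore cannot reassociate away, even under value-changing optimisation flags. The helper name below is made up for illustration.]

    /* Sketch: a strict-ordering addition helper, standing in for the
     * hypothetical _fadd_754 intrinsic from the post above.
     */
    #include <stdio.h>

    static double fadd_exact(double a, double b)
    {
        volatile double r = a + b;   /* one rounded binary64 addition, result pinned */
        return r;
    }

    int main(void)
    {
        double b = 1e16 + 2.0;

        /* "I know what I'm doing": two additions, in exactly this order. */
        double strict = fadd_exact(fadd_exact(b, 0.5), 0.5);

        /* "Kindly do roughly these sums": a compiler given licence to
         * reassociate (e.g. via -ffast-math) could evaluate this as b + 1.0. */
        double loose = (b + 0.5) + 0.5;

        printf("strict = %.1f, loose = %.1f\n", strict, loose);
        return 0;
    }

This is only an approximation of the intrinsics approach: the volatile stores also block other legitimate optimisations, whereas a dedicated intrinsic would pin just the operation order.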
From: Robert Myers on 25 May 2010 13:47

On May 25, 7:58 am, n...(a)cam.ac.uk wrote:
> In article <861p9dFie...(a)mid.individual.net>,
> Andrew Reilly <areilly...(a)bigpond.net.au> wrote:
> >
> >So how do you make it do anything useful? Or, indeed, specific?
>
> Hmm. I don't have time to start at that level, I am afraid, and can't
> offhand think of any books that cover basic numerical programming.

Who needs a book? In the end, the computer does what it does, all abstract claims and written documents notwithstanding. You are naive if you're counting on computer scientists to get it right in a way that would necessarily satisfy a numerical analyst.

No one can be accused of being evil or lazy, or, at least, such accusations won't help anyone. We all have different priorities, and there is absolutely no way that meetings of committees can make the world safe for the clueless. If your calculation is critically dependent on the last few bits of precision, it's your problem, not the problem of some language committee.

Robert.
From: nmm1 on 25 May 2010 13:59
In article <d8ea4097-64cf-42c4-af0c-8ab0a4d7bf4f(a)m4g2000vbl.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
>>
>> >So how do you make it do anything useful? Or, indeed, specific?
>>
>> Hmm. I don't have time to start at that level, I am afraid, and can't
>> offhand think of any books that cover basic numerical programming.
>
>Who needs a book? In the end, the computer does what it does, all
>abstract claims and written documents notwithstanding. You are naive
>if you're counting on computer scientists to get it right in a way
>that would necessarily satisfy a numerical analyst.

That's true, but I was thinking of the numerical books that taught how to write numerically reliable code - and there were some, but the last time I would have looked at such a thing was 40 years back. And, no, they were NOT written by computer scientists!

>No one can be accused of being evil or lazy, or, at least, such
>accusations won't help anyone. We all have different priorities, and
>there is absolutely no way that meetings of committees can make the
>world safe for the clueless. If your calculation is critically
>dependent on the last few bits of precision, it's your problem, not
>the problem of some language committee.

That is true, and it is the books on how to avoid getting bitten by that that I was thinking of. I am sure that there must have been some.

Regards,
Nick Maclaren.