From: Giorgio Pastore on 20 Jan 2010 14:10

Les Neilson wrote:
....
> The first difficulty would be specifying a *finite* list of constants
> (there are rather a lot across the sciences that I can think of and
> presumably plenty more of which I am unaware or have forgotten -
> someone's favourite constant being left out might cause a diplomatic
> incident!)

But why should such a difficulty be more serious for the Fortran
community than for C-speaking people? I would guess that at least the
issue of compatibility with C should prompt more careful consideration
of the possibility of including mathematical constants in the language.

Giorgio
From: Giorgio Pastore on 20 Jan 2010 14:23

Richard Maine wrote:
....
> The arc cosine of -1 is just about the worst case I can think of off
> the top of my head in terms of numerical robustness. Hint: look at the
> arc cosine of numbers very close to -1. If one is going to use trig
> expressions for things like that, at least use them at decently robust
> points. That's independent of the language and the hardware - just
> plain numerics.
>
> I use literals.

I have used pi=acos(-1.0) for no less than 30 years of intensive
computational physics programming in Fortran without any evidence of
trouble with my numerics. Could you provide a real-life example in
support of your claim of "acos(-1.0) considered harmful"? :-)

Giorgio
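[A minimal sketch of the idiom under discussion; the program name and
the double-precision variant are illustrative additions, not from the
post above. Note that the argument kind must match the target kind:]

    program pi_from_acos
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      real     :: pi_sp
      real(dp) :: pi_dp
      pi_sp = acos(-1.0)      ! default (single) precision result
      pi_dp = acos(-1.0_dp)   ! kind suffix needed, or precision is lost
      print *, pi_sp
      print *, pi_dp
    end program pi_from_acos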
From: Richard Maine on 20 Jan 2010 16:57

Giorgio Pastore <pastgio(a)units.it> wrote:

> I have used pi=acos(-1.0) for no less than 30 years of intensive
> computational physics programming in Fortran without any evidence of
> trouble with my numerics. Could you provide a real-life example in
> support of your claim of "acos(-1.0) considered harmful"? :-)

Am I going to go to the trouble of trying it on numerous compilers
(including ones that I don't have handy current access to)? No. Have I
bothered to keep records of such particulars from the past? No. But do
I know enough about numerics to consider it a real problem? Yes.

It doesn't take much numeric knowledge to be concerned about it. As I
mentioned, consider the result for input values slightly perturbed from
the nominal -1.0. That kind of consideration is the basis of much of
the study of numerical stability. For a value 1 bit smaller in
magnitude than -1.0, the acos ought to be off by quite a bit. No, I'm
not going to sit down and run a case (though I think I recall people
doing so and posting the results here in the past). Just look at the
shape of the acos curve there; no computer required. For a value 1 bit
larger in magnitude than -1.0, there is no correct answer.

You might well happen to get accurate results with particular compilers
- even with a largish collection of particular compilers. That doesn't
change the fact that it is a numerically problematic point. Ignoring
that kind of numeric issue will get you in trouble, if not for that
particular case, then for others. It might even have gotten you in
trouble in the past without its being evident to you. I have certainly
seen plenty of cases of people producing meaningless results without
realizing it.

I have used literals for pi for over 40 years of intensive numerical
computing (I started in 1968). I have never seen evidence of trouble
with that.

--
Richard Maine                    | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle           |                      -- Mark Twain
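[A small sketch of the perturbation experiment Richard describes, not
from the original thread. Since acos(-1+eps) is approximately
pi - sqrt(2*eps), a one-ULP perturbation of about 6e-8 in single
precision should move the result by roughly 3e-4, far more than one
ULP of pi:]

    program acos_sensitivity
      implicit none
      real :: x, xp
      x  = -1.0
      xp = nearest(x, 1.0)   ! one bit smaller in magnitude than -1.0
      print *, 'acos(-1.0)           =', acos(x)
      print *, 'acos(one ULP inside) =', acos(xp)
      print *, 'change in result     =', acos(x) - acos(xp)
    end program acos_sensitivity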
From: Carlie Coats on 20 Jan 2010 17:09

Richard Maine wrote:
[snip...]
> I have used literals for pi for over 40 years of intensive numerical
> computing (I started in 1968). I have never seen evidence of trouble
> with that.

There is a US EPA air quality model that for a long time had something
like

    DOUBLE PRECISION, PARAMETER :: PI = 3.14159265401

when it should have been more like

    DOUBLE PRECISION, PARAMETER :: PI = 3.14159265358979323846

-- not enough digits for accurate DOUBLE PRECISION on most platforms,
and downright *wrong* from the tenth digit on. (I caught this one about
two years ago.)

Unfortunately, not everyone is as careful as you or I, Richard. I've
seen stuff even worse from the "use-a-trig-identity" school.

Is there a solution? I don't know a sufficient one ;-(

-- Carlie Coats
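[One defensive pattern, sketched here rather than taken from the post
above; it assumes a Fortran 2008 compiler providing iso_fortran_env,
and the cross-check against acos is illustrative:]

    program check_pi
      use, intrinsic :: iso_fortran_env, only: real64
      implicit none
      ! The _real64 suffix matters: without it the literal is rounded
      ! to default (single) precision before PI is set.
      real(real64), parameter :: pi = 3.14159265358979323846_real64
      print *, 'literal:   ', pi
      print *, 'acos(-1):  ', acos(-1.0_real64)
      print *, 'difference:', pi - acos(-1.0_real64)
    end program check_pi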
From: Steven Correll on 20 Jan 2010 18:28
On Jan 18, 9:29 am, "James Van Buskirk" <not_va...(a)comcast.net> wrote:
> gfortran is the most agile cross compiler out there and it does use
> a gpl software package to compute initialization expressions. For
> it at least the initialization expression results are often more
> accurate than the ordinary expression results
[snip]

But is that a Good Thing or a Bad Thing? On the one hand, various
numerical analysts tell us that additional precision is generally a
good thing; on the other hand, it's hard not to be sympathetic to a
compiler user who complains about being unpleasantly surprised when the
compile-time result doesn't match the run-time result (either because
optimization has made something computable at compile time that was
previously computed at run time, or because an apparently unrelated
change in the program by the customer had that effect).

I wouldn't suggest there's a right answer--rather, I guess my point is
that there isn't a single right answer. But whatever answer the
compiler-writers choose, it won't bother you if you choose to hard-code
the constants yourself.
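[A sketch of the compile-time/run-time comparison in question, not from
the thread itself; it assumes a compiler that folds initialization
expressions, as gfortran does, and uses VOLATILE only to keep the
run-time call from being folded as well:]

    program fold_vs_runtime
      implicit none
      ! Folded by the compiler's constant-expression evaluator
      ! (acos in an initialization expression is standard in Fortran
      ! 2008; gfortran accepted it earlier as an extension).
      real, parameter :: at_compile_time = acos(-1.0)
      real, volatile  :: x
      real :: at_run_time
      x = -1.0
      at_run_time = acos(x)   ! evaluated by the run-time library
      print *, 'compile time:', at_compile_time
      print *, 'run time:    ', at_run_time
      print *, 'identical?  :', at_compile_time == at_run_time
    end program fold_vs_runtime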