From: Carl Banks on 10 Dec 2009 17:23

On Dec 10, 10:46 am, dbd <d...(a)ieee.org> wrote:
> On Dec 7, 12:58 pm, Carl Banks <pavlovevide...(a)gmail.com> wrote:
> > On Dec 7, 10:53 am, dbd <d...(a)ieee.org> wrote:
> > > ...
> >
> > You're talking about machine epsilon? I think everyone else here is
> > talking about a number that is small relative to the expected smallest
> > scale of the calculation.
> >
> > Carl Banks
>
> When you implement an algorithm supporting floats (per the OP's post),
> the expected scale of calculation is the range of floating point
> numbers. For floating point numbers the intrinsic truncation error is
> proportional to the value represented over the normalized range of the
> floating point representation. At absolute values smaller than the
> normalized range, the truncation has a fixed value. These are not
> necessarily 'machine' characteristics but the characteristics of the
> floating point format implemented.

I know, and it's irrelevant, because no one, I don't think, is talking
about magnitude-specific truncation value either, nor about any other
tomfoolery with the floating point's least significant bits.

> A useful description of floating point issues can be found:
> [snip]

I'm not reading it because I believe I grasp the situation just fine.
But you are welcome to convince me otherwise. Here's how:

Say I have two numbers, a and b. They are expected to be in the range
(-1000, 1000). As far as I'm concerned, if they differ by less than
0.1, they might as well be equal. Therefore my test for "equality" is:

    abs(a-b) < 0.08

Can you give me a case where this test fails? If a and b are too far
out of their expected range, all bets are off, but feel free to
consider arbitrary values of a and b for extra credit.

Carl Banks
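[A sketch of Carl's argument in code. The function name and sample values are illustrative, not from the thread; the point is that for doubles bounded by +/-1000 the spacing between adjacent representable values is around 1e-13, so accumulated rounding error is many orders of magnitude below the 0.08 threshold and an absolute-tolerance test cannot misfire in that range.]

```python
def approx_equal(a, b, tol=0.08):
    """Absolute-tolerance 'equality' for values expected in (-1000, 1000)."""
    return abs(a - b) < tol

# Clear-cut cases:
print(approx_equal(999.95, 1000.0))   # differ by 0.05 -> treated as equal
print(approx_equal(999.0, 1000.0))    # differ by 1.0  -> not equal

# Even heavy accumulation of rounding error stays far below the tolerance:
# 10000 additions of 0.1 drift from the ideal 1000.0 only in the last bits.
a = sum([0.1] * 10000)
print(approx_equal(a, 1000.0))        # still treated as equal to 1000.0
```

(In modern Python the same absolute-tolerance test is spelled `math.isclose(a, b, rel_tol=0.0, abs_tol=0.08)`, available since 3.5 — well after this 2009 thread.)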
From: Raymond Hettinger on 10 Dec 2009 20:23

[Carl Banks]
> > You're talking about machine epsilon? I think everyone else here is
> > talking about a number that is small relative to the expected smallest
> > scale of the calculation.

That was also my reading of the OP's question. The suggestion to use
round() was along the lines of performing a quantize or snap-to-grid
operation after each step in the calculation. That approach parallels
the recommendation for how to use the decimal module for fixed point
calculations:

http://docs.python.org/library/decimal.html#decimal-faq

Raymond
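[The snap-to-grid idea Raymond describes can be sketched as follows; the 0.01 grid size is illustrative. Re-rounding after every step keeps each intermediate result on the nearest grid point, so the tiny binary representation errors never accumulate.]

```python
x = y = 0.0
for _ in range(10):
    x = x + 0.1              # raw accumulation of binary rounding error
    y = round(y + 0.1, 2)    # snap to a 0.01 grid after each step

print(x == 1.0)   # False: the raw sum drifts away from exactly 1.0
print(y == 1.0)   # True: each snap restores the nearest grid value
```

This is the float analogue of the decimal FAQ's fixed-point recipe, where each result is re-quantized with something like `value.quantize(Decimal('0.01'))` after every operation.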
From: dbd on 11 Dec 2009 03:37

On Dec 10, 2:23 pm, Carl Banks <pavlovevide...(a)gmail.com> wrote:
> ...
> > A useful description of floating point issues can be found:
> > [snip]
>
> I'm not reading it because I believe I grasp the situation just fine.
> ...
>
> Say I have two numbers, a and b. They are expected to be in the range
> (-1000,1000). As far as I'm concerned, if they differ by less than
> 0.1, they might as well be equal.
> ...
> Carl Banks

I don't expect Carl to read. I posted the reference for the OP, whose
only range specification was "calculations with floats" and "equality
of floats" and who expressed concern about "truncation errors". Those
who can't find "floats" in the original post will find nothing of
interest in the reference.

Dale B. Dalrymple