From: Raymond Toy on
On 5/11/10 9:21 AM, Captain Obvious wrote:
> Captain>> Well, if you want to work at double accuracy, why don't
> Captain>> you explicitly specify 2 as double-float?
>
> RT> Yes, I am well aware of how to get the answer I want. My question is
> RT> what is the correct answer?
>
> When you're working with floating point numbers, there is no such thing
> as "the correct answer".

Get real. So, 2.0 * 3.0 should return 5.75 and that's ok? The answer
has to be 6.0.

If FP arithmetic didn't have correct answers, you could never implement
things such as double-double or quad-double arithmetic. The fast
algorithms critically depend on the FP operations producing exactly the
correctly rounded value.
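
For instance, the whole double-double construction rests on error-free
transformations like Knuth's two-sum. Here is a minimal sketch (my own
illustration, not any particular library's code); if (+ a b) were not
correctly rounded, the error term E would not be exact and the whole
construction would collapse:

(defun two-sum (a b)
  "Return S and E such that A + B = S + E exactly, where S is the
correctly rounded sum.  Assumes round-to-nearest double-floats."
  (declare (type double-float a b))
  (let* ((s (+ a b))
         (bv (- s a))                      ; B as actually represented in S
         (e (+ (- a (- s bv)) (- b bv))))  ; exact rounding error
    (values s e)))

;; The low-order bits lost by the naive sum are recovered exactly:
;; (two-sum 1d0 1d-17) => 1.0d0, 1.0d-17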

> You should NEVER compare floating point numbers directly, but only
> with some precision threshold which is relevant to your application
> and the calculations you're doing.

Who said anything about comparing floating point numbers? Stop making
up things that I didn't say.

>
> RT> How do you know there is a performance tradeoff?
>
> Let's say I guess so.
> Count all possible types for base and power. Multiply them.
> That's the number of cases it has to deal with.
> If you deal with parameters one by one, the number of cases is a lot
> smaller -- you still need to handle calculations which are supposed
> to be precise by the standard in a special way, but for the rest, one
> common code path will do.
>
> RT> The cost of returning the accurate answer is very small.
>
> If you think that SBCL can be improved, why don't you speak with SBCL
> developers?
>
> RT> And who said it was only sbcl that does it this way?
>
> I guess others have the same rationale.

Perhaps. Or they just wanted to use the simple formula for these cases.
CMUCL used to do the same, until I changed it to give the more accurate
answer.

>
> How about a full quote:
>
> "When rationals and floats are compared by a numerical function, the
> function rational is effectively called to convert the float to a
> rational and then an exact comparison is performed. In the case of
> complex numbers, the real and imaginary parts are effectively handled
> individually."
>
> They are handled individually only for comparison!

You can't compare reals and complexes except with = or /=. The rest just
talks about comparison, not equality, so it's not really clear what the
intent is here.
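
The rule does have observable consequences for equality, though. With a
typical IEEE single-float implementation:

CL-USER> (rational 0.1)
13421773/134217728
CL-USER> (= 1/10 0.1)  ; exact comparison, so not equal
NIL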

>
> As I understand it, complex numbers are considered to be different from
> floating-point numbers, and the rules of contagion are relaxed for them.
>
> 12.1.5.2 Rule of Complex Contagion
> "When a real and a complex are both part of a computation, the real is
> first converted to a complex by providing an imaginary part of 0."
>
> That's all. Formats are not mentioned at all.

So? By the same reasoning, I'm not precluded from changing format either.

Ray

From: Raymond Toy on
On 5/11/10 9:47 AM, Tamas K Papp wrote:
> On Tue, 11 May 2010 08:15:05 -0400, Raymond Toy wrote:
>
>>>>>>> "Barry" == Barry Margolin <barmar(a)alum.mit.edu> writes:
>>
>> Barry> In article <m18w7qfz65.fsf(a)earthlink.net>,
>> Barry> Raymond Toy <rtoy(a)earthlink.net> wrote:
>>
>> >> How do you know there is a performance tradeoff? Did you check
>> >> to see if sbcl does something different? AFAICT, sbcl does
>> >> nothing special in this case and punts to the generic expt
>> >> routine which checks the types of both arguments to figure out
>> >> what needs to be done. The cost of returning the accurate answer
>> >> is very small.
>>
>> Barry> But what's the benefit? How often do you use LOG with
>> Barry> floating point parameters where the number is an exact
>> Barry> power of the base?
>>
>> What? The benefit is a more accurate answer. What about this then:
>>
>> (expt 5 #c(3d0 0d0)) -> #C(125.00001127654393D0 0.0D0)
>>
>> Would people rather get #c(125d0 0d0) instead?
>>
>> It's not about the base, it's about implementing (expt x y) as (exp (* y
>> (log x))). CMUCL does this, but makes x and y have the same precision
>> before doing the computation. SBCL just uses the formula as is, with
>> the types as given.
>
> I prefer CMUCL's approach. I think that the extra cost, if any,
> should be trivial, especially on today's CPUs. I suggest that you
> report it as a bug in SBCL.

I was annoyed at that behavior, so I changed it in CMUCL. The SBCL
people already know. I've also reported this for CCL.
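
The fix is cheap: apply float contagion to the arguments before falling
back on the formula. A rough sketch of the idea for positive real
arguments (my own illustration, not the actual CMUCL source):

(defun expt-via-log (x y)
  "Sketch: compute (expt x y) as (exp (* y (log x))) after coercing
both arguments to the wider float format.  Positive real X only."
  (let ((format (if (or (typep x 'double-float)
                        (typep y 'double-float))
                    'double-float
                    'single-float)))
    (exp (* (coerce y format) (log (coerce x format))))))

With this, (expt-via-log 2 2.0d0) computes (log 2) in double precision,
so the answer comes out within an ulp or so of 4.0d0 instead of
4.000000015237235d0.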

>
> Now the other, completely orthogonal question is whether a numerical
> routine should calculate "exact" values for cases when it is possible.
> The only case when I don't like that to happen is when it causes a
> discontinuity: eg if a routine uses an approximation for non-integers
> but an "exact" method for integers, a discontinuity might arise in the
> neighborhood of integers. So I prefer if the user can signal which
> method to use: eg have (expt 5 3) calculate it the "exact" way, and
> (expt 5 3d0) or (expt 5 #C(3d0 0)) using an approximation (which, in the
> case of EXP, is so good that it is hard to distinguish from the exact
> result). But this might matter for certain implementations of the
> gamma function, etc.
I think good implementations try to do this, but it's pretty hard.
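
To make the worry concrete, here's a made-up example (not any
implementation's code) where the exact and approximate branches can
disagree right at the seam:

(defun jumpy-square (x)
  "Made-up illustration: exact arithmetic at integers, an exp/log
approximation everywhere else."
  (if (integerp x)
      (* x x)
      (exp (* 2 (log x)))))

;; (jumpy-square 3)     => 9, from the exact branch
;; (jumpy-square 3.0d0) goes through exp/log and need not agree with
;; 9.0d0 to the last bit, so the function can jump at the integer.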
>
> In the particular problem, I would prefer if (expt x (complex y 0))
> always gave the same result as (expt x y), even if there is some extra
> cost/overhead. I care much more about correctness than speed.

I don't think you're allowed to do this unless y is rational, in which
case (complex y 0) is required to be y anyway. For all other y, the
result must be complex, even if (expt x y) is real. But hopefully, the
imaginary part is 0.
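
For reference, the collapse in the rational case follows from the rule
of canonical representation:

CL-USER> (complex 3 0)    ; rational with zero rational imaginary part
3
CL-USER> (complex 3d0 0)  ; any float part forces a complex result
#C(3.0d0 0.0d0)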

Ray
From: Tamas K Papp on
On Tue, 11 May 2010 13:03:30 -0400, Raymond Toy wrote:

>> In the particular problem, I would prefer if (expt x (complex y 0))
>> always gave the same result as (expt x y), even if there is some extra
>> cost/overhead. I care much more about correctness than speed.
>
> I don't think you're allowed to do this unless y is rational, in which
> case (complex y 0) is required to be y anyway. For all other y, the
> result must be complex, even if (expt x y) is real. But hopefully, the
> imaginary part is 0.

Good point. I should have said that I want their real parts to be
equal, and the imaginary part to be 0 where applicable.

Tamas
From: Captain Obvious on
RT> Who said anything about comparing floating point numbers? Stop making
RT> up things that I didn't say.

Are you an idiot or what?
You're comparing 4.0d0 to 4.000000015237235D0!
You do so by literal comparison of the printed representations,
when in fact you should be comparing them w.r.t. single-float-epsilon.
E.g.:

CL-USER> (- 4.000000015237235D0 4.0d0)
1.5237234585185888d-8
CL-USER> single-float-epsilon
5.960465e-8
CL-USER> (< (abs (- 4.000000015237235D0 4.0d0)) single-float-epsilon)
T

This means that 4.0d0 IS EQUAL TO 4.000000015237235D0 in the context of
single precision.
Obviously, they are not equal in the context of double precision, but if
you think your program needs double precision, you need to make sure that
you use only double-floats and do not rely on contagion.
The standard does not require contagion to work the way you want it to,
so you absolutely should not rely on this in a conforming, portable
program.
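
If you want that as a reusable predicate, something like this will do
(a sketch; picking the tolerance is your application's job):

(defun float-approx= (a b &optional (epsilon single-float-epsilon))
  "True when A and B agree to within EPSILON relative to their
magnitudes.  A sketch, not a one-size-fits-all comparison."
  (<= (abs (- a b))
      (* epsilon (max (abs a) (abs b)))))

;; (float-approx= 4.000000015237235d0 4.0d0)                      => T
;; (float-approx= 4.000000015237235d0 4.0d0 double-float-epsilon) => NIL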

RT> The fast algorithms critically depend on the FP operations
RT> producing exactly the correctly rounded value.

Well, sure, but if you want this to work at a given precision and not at
whatever precision the compiler feels like today, why don't you
explicitly specify it?

Explicitly specified precision in the parameters => known precision of
the result.
Ambiguous, unspecified precision in the parameters => ambiguous
precision of the result.
Seems fair.
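
For instance, in a typical implementation:

CL-USER> (log 2)        ; rational argument: single-float result
0.6931472
CL-USER> (log 2.0d0)    ; double-float argument: double-float result
0.6931471805599453d0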

RT> You can't compare reals and complexes except with = or /=. The
RT> rest just talks about comparison, not equality, so it's not really
RT> clear what the intent is here.

If you can't compare with anything but = or /=, then maybe it is about
comparing with = or /=?
That is the first idea that comes to my mind. But maybe it is actually a
cryptic prophecy about an alien invasion, I'm not sure.

No, really, WHAT ELSE can it mean?

RT> So? By the same reasoning, I'm not precluded from changing format
RT> either.

If the standard does not set requirements, then an implementation can do
this any way it wants to.
And if those requirements are not in the standard, portable programs
should not rely on them.

From: Raymond Toy on
On 5/11/10 2:45 PM, Captain Obvious wrote:
> RT> Who said anything about comparing floating point numbers? Stop making
> RT> up things that I didn't say.
>
> Are you an idiot or what?

Ad hominem arguments now?

> You're comparing 4.0d0 to 4.000000015237235D0!
> You do so by literal comparison of the printed representations,
> when in fact you should be comparing them w.r.t. single-float-epsilon.

Both numbers are double-floats. Why would I use single-float-epsilon?
Maybe I can arbitrarily choose an epsilon of a googol instead.
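
For the record, the double-float yardstick is roughly eight orders of
magnitude tighter. Values from a typical IEEE implementation:

CL-USER> single-float-epsilon
5.960465e-8
CL-USER> double-float-epsilon
1.1102230246251568d-16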


> If the standard does not set requirements, then an implementation can
> do this any way it wants to.
> And if those requirements are not in the standard, portable programs
> should not rely on them.

Sure, I'm not arguing that. But don't whine to me if your lisp says
(sin 1) = 0. Or, in fact, if (sin x) for any number x returns 0. After
all, the spec doesn't say that sin x must be accurate. How are you
going to write portable programs then?

The question, I guess, boils down to what I want. I would like Lisp
to return as accurate a value as reasonably possible. I've shown that
it's not unreasonably expensive to get a more accurate value.

Ray