From: Raymond Toy on
On 5/11/10 1:20 PM, Tamas K Papp wrote:
> On Tue, 11 May 2010 13:03:30 -0400, Raymond Toy wrote:
>
>>> In the particular problem, I would prefer if (expt x (complex y 0))
>>> always gave the same result as (expt x y), even if there is some extra
>>> cost/overhead. I care much more about correctness than speed.
>>
>> I don't think you're allowed to do this unless y is rational, in which
>> case (complex y 0) is required to be y anyway. For all other y, the
>> result must be complex, even if (expt x y) is real. But hopefully, the
>> imaginary part is 0.
>
> Good point. I should have said that I want the real part of them to be
> equal, and the imaginary part 0 where applicable.

I think this should always be true. It ought to fall out as a
consequence of defining x^y as exp(y*log(x)). If x > 0, all the
operations are numbers on the real axis, so exp should return a number
on the real axis.
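To sketch that argument at the REPL (an illustration of the exp(y*log(x))
definition, not any particular implementation's internals):

```lisp
;; If x^y is computed as exp(y*log(x)) with real x > 0, then
;; (log x) is real, (* #c(y 0d0) (log x)) has a zero imaginary
;; part, and exp of such a complex should also have imagpart 0,
;; since sin(0d0) is exactly 0d0.
(let* ((x 5d0)
       (y #c(3d0 0d0))
       (result (exp (* y (log x)))))
  (imagpart result))  ; => 0.0d0
```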

Ray
From: Raymond Toy on
On 5/11/10 4:01 PM, Barry Margolin wrote:
> In article <ipadnQZgrbTdF3TWnZ2dnUVZ_sWdnZ2d(a)earthlink.com>,
> Raymond Toy <toy.raymond(a)gmail.com> wrote:
>
>> On 5/11/10 9:21 AM, Captain Obvious wrote:
>>> Captain>> Well, if you want to work at double accuracy, why don't
>>> Captain>> you explicitly specify 2 as double-float?
>>>
>>> RT> Yes, I am well aware of how to get the answer I want. My question is
>>> RT> what is the correct answer?
>>>
>>> When you're working with floating point numbers, there is no such thing
>>> as "the correct answer".
>>
>> Get real. So, 2.0 * 3.0 should return 5.75 and that's ok? The answer
>> has to be 6.0.
>
> Every floating point number is actually a representation of a range of
> numbers. So 2.0 is 2.0+/-epsilon, 3.0 is 3.0+/-epsilon, and therefore
> 2.0*3.0 should return 6.0+epsilon^2+/-5.0*epsilon.

I agree with what josephoswald said.

I think your example is much too simplified. The epsilons may not even
be the same for 2 and 3. And if epsilon were the same and equal to
double-float-epsilon, 6.0+/-5*epsilon would give 3 possible
floating-point answers.
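For concreteness (the exact value of double-float-epsilon is
implementation-dependent, though it is about 1.1d-16 for IEEE doubles):

```lisp
;; double-float-epsilon is the smallest positive double e such
;; that (+ 1d0 e) is distinguishable from 1d0.
double-float-epsilon                  ; about 1.1d-16 for IEEE doubles
;; Near 6.0d0, consecutive doubles are about 8.9d-16 apart, so an
;; interval of width 10*double-float-epsilon around 6d0 straddles
;; a few representable values rather than picking out one.
(= 1d0 (+ 1d0 double-float-epsilon))  ; => NIL, by definition
```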
>
> What makes you think the original 2.0 and 3.0 are exact? They came from

On the other hand, what makes you think they weren't exact? Any epsilon
you want to ascribe is outside of the computation.

But I think that we can agree that for the exactly representable
floating-point numbers in #c(3d0 0d0) and the exact rational 5,

(expt 5 #c(3d0 0))

we would want the answer to be as close to #c(125d0 0d0) as
reasonably possible. #C(125.00001127654393D0 0.0D0) is a bit farther
off than I would want.
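A quick comparison at the REPL makes the gap visible (the second result
is implementation-dependent; the inaccurate value shown is the one under
discussion):

```lisp
;; All-real computation: accurate to within rounding.
(expt 5 3d0)          ; => 125.0d0, or within a few ulps of it
;; Complex computation: ideally #c(125d0 0d0), but a Lisp that
;; converts 5 to the single-float complex #c(5f0 0f0) internally
;; can return something like #c(125.00001127654393d0 0.0d0).
(expt 5 #c(3d0 0d0))
```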

I think I understand why some Lisps produce this answer. I just happen
to think they could do better with very little extra cost. :-)

Ray
From: Espen Vestre on
Raymond Toy <toy.raymond(a)gmail.com> writes:

> we would want the answer to be as close to #c(125d0 0d0) as
> reasonably possible. #C(125.00001127654393D0 0.0D0) is a bit farther
> off than I would want.
>
> I think I understand why some Lisps produce this answer. I just happen
> to think they could do better with very little extra cost. :-)

The problem seems to be that the "Rule of Float and Rational Contagion"
only specifies what happens when rationals and floats are combined, but
not what happens when a complex with rational parts is combined with a
complex with float parts. In the case of (expt 5 #c(3d0 0d0)), 5 is (by
the rule of complex contagion) first converted to the rational-parts
complex #c(5 0). In the next step of the computation, one would have
expected that there was a complex version of the rule of float and
rational contagion that ensured that #c(5 0) is converted to #c(5d0 0d0)
and not #c(5f0 0f0), but there is no such rule, and unfortunately, it
seems like many lisp implementors haven't seen the need to make this
extension of the standard.
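A hypothetical version of that missing rule is easy to sketch;
UPGRADE-COMPLEX below is not a standard function, just what a complex
analogue of the float-and-rational contagion rule might do:

```lisp
;; Hypothetical rule: before combining two complexes, convert the
;; rational parts of one to the float format of the other operand,
;; using CL:FLOAT's two-argument (prototype) form.
(defun upgrade-complex (z prototype)
  "Return Z with its parts converted to the float format of PROTOTYPE,
a float naming the target format (e.g. 1d0 for double-float)."
  (complex (float (realpart z) prototype)
           (float (imagpart z) prototype)))

;; With such a rule, #c(5 0) combined with #c(3d0 0d0) would be
;; upgraded to #c(5d0 0d0) rather than #c(5f0 0f0):
(upgrade-complex #c(5 0) 1d0)  ; => #c(5.0d0 0.0d0)
```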
--
(espen)
From: Barry Margolin on
In article <w_mdnT_HV_PRPHfWnZ2dnUVZ_g6dnZ2d(a)earthlink.com>,
Raymond Toy <toy.raymond(a)gmail.com> wrote:

> On 5/11/10 4:01 PM, Barry Margolin wrote:
> > In article <ipadnQZgrbTdF3TWnZ2dnUVZ_sWdnZ2d(a)earthlink.com>,
> > Raymond Toy <toy.raymond(a)gmail.com> wrote:
> >
> >> On 5/11/10 9:21 AM, Captain Obvious wrote:
> >>> Captain>> Well, if you want to work at double accuracy, why don't
> >>> Captain>> you explicitly specify 2 as double-float?
> >>>
> >>> RT> Yes, I am well aware of how to get the answer I want. My question is
> >>> RT> what is the correct answer?
> >>>
> >>> When you're working with floating point numbers, there is no such thing
> >>> as "the correct answer".
> >>
> >> Get real. So, 2.0 * 3.0 should return 5.75 and that's ok? The answer
> >> has to be 6.0.
> >
> > Every floating point number is actually a representation of a range of
> > numbers. So 2.0 is 2.0+/-epsilon, 3.0 is 3.0+/-epsilon, and therefore
> > 2.0*3.0 should return 6.0+epsilon^2+/-5.0*epsilon.
>
> I agree with what josephoswald said.
>
> I think your example is much too simplified. The epsilons may not even
> be the same for 2 and 3. And if epsilon were the same and equal to
> double-float-epsilon, 6.0+/-5*epsilon would give 3 possible
> floating-point answers.
> >
> > What makes you think the original 2.0 and 3.0 are exact? They came from
>
> On the other hand, what makes you think they weren't exact? Any epsilon
> you want to ascribe is outside of the computation.

When using transcendental functions, exact numbers are extremely rare.
Except for special cases, all their results are irrational, so the
floating point value will be an approximation.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Raymond Toy on
>>>>> "Barry" == Barry Margolin <barmar(a)alum.mit.edu> writes:

Barry> In article
>> > What makes you think the origial 2.0 and 3.0 are exact? They
>> > came from
>>
>> On the other hand, what makes you think they weren't exact?
>> Any epsilon you want to ascribe is outside of the computation.

Barry> When using transcendental functions, exact numbers are
Barry> extremely rare. Except for special cases, all their
Barry> results are irrational, so the floating point value will be
Barry> an approximation.

Certainly, but saying the result of a transcendental function is a
floating-point approximation doesn't give you freedom to return any
value. There's a certain expectation that some care is taken and the
value is as close to the true value as reasonably possible.

Perhaps I'm being unreasonable. :-)

I agree that the spec doesn't explicitly say what the accuracy of
(expt 2 #c(-2d0 -1d0)) should be. It would be nice if the more
accurate value were returned, since that doesn't violate the spec,
AFAICT.
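For illustration, here is one cheap way an implementation (or a user)
could get the more accurate value; ACCURATE-EXPT is a sketch under the
assumption of a positive rational base, not a proposed API:

```lisp
;; Sketch: take the log of the base directly in the float format
;; of the power's parts, so no intermediate single-float complex
;; is ever created.
(defun accurate-expt (base power)
  "BASE a positive rational, POWER a complex with float parts.
Computes base^power as exp(power * log(base))."
  (exp (* power (log (float base (realpart power))))))

(accurate-expt 2 #c(-2d0 -1d0))
;; compare against (expt 2 #c(-2d0 -1d0)) in your implementation
```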

Ray