From: Barry Margolin on
In article <htg35j$qen$1(a)news.eternal-september.org>,
Tim Bradshaw <tfb(a)tfeb.org> wrote:

> On 2010-05-25 03:02:34 +0100, Günther Thomsen said:
>
> > Well here you imply that all numbers are exact. Above you implied a
> > precision given by the number of digits presented (see "significance
> > arithmetic" linked to above). What are we talking about?
>
> I'm fairly sure that the standard practice, when writing numbers, is to
> write the number of places of precision. So if I write "the
> temperature was 10.3 degrees", what I mean is "when rounded to three
> places, it is 10.3". If I said "the temperature is 10.3000 degrees"
> what I mean is "when rounded to *6* places, it is 10.3 degrees", which
> is a very different statement.

Real scientists use explicit statements of uncertainty, e.g. 10.3 +/-
.03. Just using a particular number of significant digits implicitly
assumes that the uncertainty is .0...05, i.e. half a unit in the last
place, which is unlikely to be the actual case (although if the
measuring equipment has a digital readout, I guess you have to assume
that). And when you combine these numbers in a formula, the
uncertainties have to be propagated through the additions,
multiplications, etc. (so even if you started with half a unit in the
last place, it will shift as they're combined).
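
To make the propagation point concrete, here's a rough sketch (my own
toy functions, untested) of the textbook rules for independent
errors: absolute uncertainties combine in quadrature for sums,
relative ones for products.

(defun add-with-error (x dx y dy)
  ;; Sum of X and Y with independent uncertainties DX and DY.
  (values (+ x y) (sqrt (+ (* dx dx) (* dy dy)))))

(defun mul-with-error (x dx y dy)
  ;; Product of X and Y; relative uncertainties combine in quadrature.
  (let ((z (* x y)))
    (values z (* (abs z) (sqrt (+ (expt (/ dx x) 2)
                                  (expt (/ dy y) 2)))))))

;; (mul-with-error 10.3d0 0.03d0 2.0d0 0.1d0)
;; => 20.6d0 and roughly 1.03d0 -- the 5% relative error dominates.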

>
> Of course computers generally do not understand these subtleties, and
> are in the process of deforming people's expectations of what things
> like the above mean in a bad way.

Floating point was designed for efficiency, not precision. I don't
think you're going to find systems that deal with the explicit error
term outside specialized mathematical systems like Macsyma and
Mathematica.

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Tamas K Papp on
On Wed, 26 May 2010 02:58:14 -0400, Barry Margolin wrote:

> In article <htg35j$qen$1(a)news.eternal-september.org>,
> Tim Bradshaw <tfb(a)tfeb.org> wrote:
>
> Real scientists use explicit statements of uncertainty, e.g. 10.3 +/-
> .03. Just using a particular number of significant digits implicitly
> assumes that the uncertainty is .0...05, i.e. half a unit in the last
> place, which is unlikely to be the actual case (although if the
> measuring equipment has a digital readout, I guess you have to assume
> that). And when you combine these numbers in a formula, the
> uncertainties have to be propagated through the additions,
> multiplications, etc. (so even if you started with half a unit in the
> last place, it will shift as they're combined).

Real scientists use a statistical model for inference. These are my
measurements, this is my model for measurement error, this is my
posterior distribution for the parameters. +/- is just a vague
shorthand for saying that there is some error, but the convention
turns out to be useless beyond simple stuff. For example, does x +/-
y mean Normal(x,y^2), or Uniform(x-y,x+y), or something else? These
things matter a lot, and are left unspecified in this notation. Also,
it is better to treat y as a parameter, instead of fixing it a priori
(but having a prior is of course fine).
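
If you want to see how much the choice matters, a quick and dirty
simulation shows it (a sketch only, untested, the function names are
mine): push each reading of "x +/- y" through the same formula and
compare the spread of the results.

(defun draw-normal (mean sd)
  ;; Box-Muller transform; (- 1d0 (random 1d0)) keeps the argument of
  ;; LOG strictly positive.
  (+ mean (* sd
             (sqrt (* -2d0 (log (- 1d0 (random 1d0)))))
             (cos (* 2 pi (random 1d0))))))

(defun draw-uniform (mean half-width)
  (+ mean (* half-width (- (* 2d0 (random 1d0)) 1d0))))

(defun spread-of-product (draw-a draw-b &optional (n 100000))
  ;; Mean and standard deviation of (* a b) over N simulated draws.
  (let* ((samples (loop repeat n
                        collect (* (funcall draw-a) (funcall draw-b))))
         (mean (/ (reduce #'+ samples) n))
         (var (/ (reduce #'+ (mapcar (lambda (s) (expt (- s mean) 2))
                                     samples))
                 n)))
    (values mean (sqrt var))))

;; "10.3 +/- 0.03" times "2.0 +/- 0.1", under the two readings:
;; (spread-of-product (lambda () (draw-normal 10.3d0 0.03d0))
;;                    (lambda () (draw-normal 2.0d0 0.1d0)))
;; (spread-of-product (lambda () (draw-uniform 10.3d0 0.03d0))
;;                    (lambda () (draw-uniform 2.0d0 0.1d0)))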

>> Of course computers generally do not understand these subtleties, and
>> are in the process of deforming people's expectations of what things
>> like the above mean in a bad way.
>
> Floating point was designed for efficiency, not precision. I don't
> think you're going to find systems that deal with the explicit error
> term outside specialized mathematical systems like Macsyma and
> Mathematica.

The proper way of "dealing with the error term" is not by inventing
some fancy algebra with interval arithmetic, but by statistics.
Thanks to computers, a lot of standard models are prepackaged now, or
can be used as building blocks for more complex ones.

Tamas
From: Tim Bradshaw on
On 2010-05-26 07:58:14 +0100, Barry Margolin said:

> Real scientists use explicit statements of uncertainty, e.g. 10.3 +/-
> .03. Just using a particular number of significant digits implicitly
> assumes that the uncertainty is .0...05, i.e. half a unit in the last
> place, which is unlikely to be the actual case (although if the
> measuring equipment has a digital readout, I guess you have to assume
> that). And when you combine these numbers in a formula, the
> uncertainties have to be propagated through the additions,
> multiplications, etc. (so even if you started with half a unit in the
> last place, it will shift as they're combined).

I agree with this - all I really meant is that you *wouldn't* say, for
instance, "10.50000 +0.02 -0.03" or something, still less "10.5000" to
mean "10.5 +/- 0.05" - there is information implicit in the number of
digits you quote. I think +/- .5 in the last significant place is
fairly common, as you say, with digital instruments like multimeters
etc., though when I used these for serious things they'd all be
calibrated, and typically they were less accurate than the precision
they could display (though sometimes there was a systematic error for
which you could correct to recover accuracy).

>
> Floating point was designed for efficiency, not precision. I don't
> think you're going to find systems that deal with the explicit error
> term outside specialized mathematical systems like Macsyma and
> Mathematica.

Yes. What I meant was that the exposure to the behaviour of FP systems
(which is completely sensible) might be affecting the way people think
about and report precision and accuracy *in general* in a bad way.

--tim


From: Vend on
On 26 May, 08:58, Barry Margolin <bar...(a)alum.mit.edu> wrote:

> Floating point was designed for efficiency, not precision.

I don't think this is correct. Efficiency is useless if you don't
deliver a correct result, and in settings where approximate quantities
and approximate computations are involved, a correct result requires
bounded errors.
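
To be fair to floating point, the error of each basic operation is
bounded: IEEE 754 requires +, -, *, / and sqrt to be correctly
rounded, so each step is off by at most half a unit in the last
place. A quick illustration (assuming an implementation with IEEE
doubles; DOUBLE-FLOAT-EPSILON is the standard CL constant):

(list double-float-epsilon         ; about 1.1d-16 for IEEE doubles
      (+ 0.1d0 0.2d0)              ; 0.30000000000000004d0
      (= (+ 0.1d0 0.2d0) 0.3d0))   ; NIL -- each literal is already rounded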

From: Tim Bradshaw on
On 2010-05-26 12:13:32 +0100, Vend said:

> I don't think this is correct. Efficiency is useless if you don't
> deliver a correct result, and in settings where approximate quantities
> and approximate computations are involved, a correct result requires
> bounded errors.

That doesn't stop it being true. For instance, a system designed for
precision would allow you to say things like "I want this result
accurate to so much", but FP systems don't allow that kind of
operation. There are systems which do allow that kind of thing: in
Mathematica, for instance, you can say "N[Pi]" to get a
machine-precision approximation to pi, but "N[Pi,100]" gives you an
approximation good to 100 significant figures (not sure what the deal
is with the last digit, I guess it is rounded).
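
The nearest thing in CL is probably to do the arithmetic exactly with
rationals and only pick a precision when you print. A toy sketch
(untested, my own code, and certainly not how Mathematica does it),
using Machin's formula pi = 16*atan(1/5) - 4*atan(1/239):

(defun atan-series (x terms)
  ;; Arctangent of the rational X as an exact rational: the Taylor
  ;; series truncated after TERMS terms.
  (loop for k from 0 below terms
        sum (/ (* (if (evenp k) 1 -1) (expt x (1+ (* 2 k))))
               (1+ (* 2 k)))))

(defun pi-digits (n)
  ;; The first N+1 decimal digits of pi, as an integer.  The term
  ;; count is a crude over-estimate of what is actually needed.
  (let ((approx (- (* 16 (atan-series 1/5 (+ n 10)))
                   (* 4 (atan-series 1/239 (+ n 10))))))
    (floor (* approx (expt 10 n)))))

;; (pi-digits 20) => 314159265358979323846 (plus a second value, the
;; discarded rational remainder).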

