From: Noqsi on 26 Jun 2010 03:15

On Jun 21, 12:11 am, Richard Fateman <fate...(a)cs.berkeley.edu> wrote:
> Noqsi wrote:
>> Approximation is often puzzling. The ideological war you wage against
>> WRI is unhelpful here. The wise person understands that there are
>> multiple points of view.
>
> If it is required to be wise in the ways of WRI's (unusual) arithmetic
> to use WRI software, then that is more than a user interface problem.

You must be wise in the ways of the tool to use it. That's always true. There's nothing wrong with being "unusual": Mathematica is unusual in quite a few ways, and that's key to its ability to quickly dispose of problems that are more difficult with other tools. When a different tool is better, just use that and quit carping.

>> Your IEEE754-based ideology has its own
>> weaknesses: abuse of the concepts of "rational" and "finite", and weak
>> connection to the real number system. Matthew 7:3 applies.
>
> I'm not sure what you mean by IEEE754-based ideology.

The ideologue never understands his own ideology.

> Probably the
> whole community of numerical error analysts agrees on a model that is
> different from WRI's.

In other words, your fellow ideologues agree with you. Unanimity in an area as tricky as this is a sure symptom of groupthink: difficult problems demand *multiple* points of view for truly effective understanding. Even in physics, where we think we are all working with a common reality, we have multiple ways of looking at it. Numerical analysis lacks that common reality: it serves a diversity of applications, with a diversity of requirements.

Where I fault Mathematica's design here is that "Real" wraps two rather different kinds of objects: fixed-precision machine numbers, and Mathematica's approximate reals. Both are useful, but understanding and controlling which kind you're using is a bit subtle. "Complex" is even more troublesome.
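The distinction is at least visible if you know where to look (a minimal sketch; both literals denote 1.5, but they are different kinds of Real):

  Precision[1.5]                    (* MachinePrecision: a machine number *)
  Precision[1.5`20]                 (* 20.: an arbitrary-precision number *)
  MachineNumberQ /@ {1.5, 1.5`20}   (* {True, False} *)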
From: Richard Fateman on 27 Jun 2010 04:56

Noqsi wrote:
> On Jun 21, 12:11 am, Richard Fateman <fate...(a)cs.berkeley.edu> wrote:
>> Noqsi wrote:
>>> Approximation is often puzzling. The ideological war you wage against
>>> WRI is unhelpful here. The wise person understands that there are
>>> multiple points of view.
>>
>> If it is required to be wise in the ways of WRI's (unusual) arithmetic
>> to use WRI software, then that is more than a user interface problem.
>
> You must be wise in the ways of the tool to use it.

Very Zen. Or maybe Karate Kid?

> That's always true. There's nothing wrong with being "unusual":
> Mathematica is unusual in quite a few ways, and that's key to its
> ability to quickly dispose of problems that are more difficult with
> other tools. When a different tool is better, just use that and quit
> carping.

I suppose you count as "carping" a complaint that the tool disposes of problems with incorrect or misleading answers.

>>> Your IEEE754-based ideology has its own
>>> weaknesses: abuse of the concepts of "rational" and "finite", and weak
>>> connection to the real number system. Matthew 7:3 applies.
>>
>> I'm not sure what you mean by IEEE754-based ideology.
>
> The ideologue never understands his own ideology.

And it appears that you are unable to explain your words. The IEEE-754 binary standard embodies a good deal of the well-tested wisdom of numerical analysis from the beginning of serious digital computing through the next 40 years or so. There were debates about a few items that are peripheral to these decisions, such as dealing with over/underflow, error handling, traps, signed zeros.

>> Probably the
>> whole community of numerical error analysts agrees on a model that is
>> different from WRI's.
>
> In other words, your fellow ideologues agree with you. Unanimity in an
> area as tricky as this is a sure symptom of groupthink: difficult
> problems demand *multiple* points of view for truly effective
> understanding.

Ah, the Bozo the Clown theory. They laughed at Columbus, but he was right [so it goes... did they really? but anyway...]. Hence: They laughed at Me, so I am right. Neglecting the other set of examples, such as: they laughed at Bozo the Clown...

> Even in physics, where we think we are all working with
> a common reality, we have multiple ways of looking at it.

Yes, I especially like the flat earth society http://theflatearthsociety.org/cms/

> Numerical analysis lacks that common reality: it serves a diversity of
> applications, with a diversity of requirements.

Computing can model mathematics, and mathematics can model reality. At least that is the commonly accepted reason we still run computer programs in applications. Good numerical computing tools allow one to build specific applications. For example, one would hope that the computing tools would allow an efficient implementation of (say) interval arithmetic. This is fairly easy with IEEE-754 arithmetic, but much much harder on earlier hardware designs.

The basic tool that Mathematica (and many other systems) provides that might be considered a major extension in a particular direction is the arbitrary precision software. Mathematica has a different take on this, though, trying to maintain an indication of precision. None of the other libraries or systems that do arbitrary precision arithmetic have adopted this, so if it is such a good idea, oddly no one else has taken the effort to mimic it. And it is not hard to mimic, so if anyone were to care, it could be done easily.
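To see what that indication does over a single operation (a small sketch; the exact decimal figures vary a little):

  x = 1.11111111111111111111;   (* 21 digits typed, so a bignum, not a machine number *)
  Precision[x]                  (* about 21 *)
  Precision[2 x - x]            (* slightly less: worst-case error bounds are accumulated *)

  a = 1.00000001`20; b = 1.`20;
  Precision[a - b]              (* roughly 12: the eight cancelled digits are charged *)

Mathematically 2 x - x is just x, but the recorded precision shrinks anyway.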
Apparently people do not want some heuristically determined "fuzz" to be mysteriously added to their arithmetic. I do not know how much of the code for internal arithmetic for evaluation of functions in Mathematica is devoted to bypassing these arithmetic features, but based on some examples provided in this newsgroup, I suspect this extra code, which is an attempt to mimic arithmetic without the fuzz inserted by Mathematica, becomes a substantial computational burden, and an intellectual burden on the programmer, who must undo the significance arithmetic fuzz.

> Where I fault Mathematica's design here is that "Real" wraps two
> rather different kinds of objects: fixed-precision machine numbers,
> and Mathematica's approximate reals. Both are useful, but
> understanding and controlling which kind you're using is a bit subtle.
> "Complex" is even more troublesome.

I know of 4 systems which provide arbitrary precision numbers that mimic the IEEE-754 arithmetic but with longer fraction and exponent fields. Perhaps that would provide the unity of design concept that you would prefer. One just increases the fraction length by a factor of 4 (quad double); the others are arbitrary precision.

RJF
From: Richard Fateman on 28 Jun 2010 02:28

danl(a)wolfram.com wrote:
>> [...] The IEEE-754
>> binary standard embodies a good deal of the well-tested wisdom of
>> numerical analysis from the beginning of serious digital computing
>> through the next 40 years or so. There were debates about a few items
>> that are peripheral to these decisions, such as dealing with
>> over/underflow, error handling, traps, signed zeros.
>
> To what extent is this standard really in conflict with significance
> arithmetic?

I think the standard is not "in conflict" exactly. The question seems to me to be one of choosing the right basis for building whatever it is you want to build, considering that other people will want to build other things, but collectively you can really have only one basis -- that is, the one that is going to be more-or-less frozen in the standard, and embodied in hardware. You won't be able to change that easily, and some people are going to make this standard run very very fast, perhaps with parallel or pipelined hardware. Furthermore, the standard will be running on every computer, so you want to make good use of it if possible.

So for the particular issue here, we ask: how hard is it to build significance arithmetic [assuming you desire that] starting with IEEE-754? Compare that to how hard it is to build IEEE-754, apparently desired by some people, given that you have implemented significance arithmetic. Imagine, for example, a hardware model in which the various Precision, Accuracy, and marks like ` are provided, and where binary-to-decimal conversion, or formatted output, is standardly done as in Mathematica. Thus an answer that prints as "0." might be zero, or might be meaningless noise. Now can you build IEEE-754 style arithmetic on top? Sure, because you have all these flags and setting programs like SetPrecision and SetAccuracy and MaxPrecision and MinPrecision and whatever else there is now. It is just not economical. And since it is built only in Mathematica software, it is portable only to other people who have Mathematica, instead of being portable to anyone who has a computer that implements the standard.
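The emulation would look something like this (a hypothetical sketch only -- fixedOp is my name for it, not anything built in): force every result back to a fixed working precision, throwing away the significance bookkeeping as you go.

  (* clamp arguments and result to p digits, discarding tracked precision *)
  fixedOp[op_, p_][a_, b_] :=
    SetPrecision[op[SetPrecision[a, p], SetPrecision[b, p]], p]

  fixedOp[Plus, 16][1.2345`20, 6.789`20]   (* result pinned at 16 digits *)

Three calls to SetPrecision per arithmetic operation: that is the uneconomical part.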
> As best I can tell, for many purposes significance arithmetic
> sits on top of an underlying "basic" arithmetic (one that does not
> track precision). That part comes reasonably close to IEEE, as best I
> understand it.

Well, down at the bottom it has got to run machine instructions, so we know it is either using IEEE-754 or integer arithmetic.

>> Computing can model mathematics, and mathematics can model reality. At
>> least that is the commonly accepted reason we still run computer
>> programs in applications. Good numerical computing tools allow one to
>> build specific applications. For example, one would hope that the
>> computing tools would allow an efficient implementation of (say)
>> interval arithmetic. This is fairly easy with IEEE-754 arithmetic,
>> but much much harder on earlier hardware designs.
>
> I'm not convinced interval arithmetic is ever easy. But I can see that
> having IEEE to handle directed rounding is certainly helpful.

Doing + * / is not hard. Elementary functions of intervals become tedious, etc.
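As it happens, Mathematica ships a creditable version of the easy part (a small sketch; these are the documented Interval objects, and the endpoints are what I would expect):

  Interval[{1, 2}] + Interval[{3, 5}]   (* Interval[{4, 7}] *)
  Interval[{-1, 2}]^2                   (* Interval[{0, 4}]: even powers need care *)
  Sin[Interval[{0, Pi}]]                (* Interval[{0, 1}] *)

The tedium shows up in cases like the last one, where the monotonicity of the function over the interval has to be worked out case by case.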
>> The basic tool that Mathematica (and many other systems) provides that
>> might be considered a major extension in a particular direction is the
>> arbitrary precision software. Mathematica has a different take on
>> this, though, trying to maintain an indication of precision. None of
>> the other libraries or systems that do arbitrary precision arithmetic
>> have adopted this, so if it is such a good idea, oddly no one else has
>> taken the effort to mimic it. And it is not hard to mimic, so if
>> anyone were to care, it could be done easily. Apparently people do not
>> want some heuristically determined "fuzz" to be mysteriously added to
>> their arithmetic.
>
> I'm not sure it is all that easy to code the underlying precision
> tracking. But yes, it is certainly not a huge multi-person, multi-year
> undertaking. I'd guess most commercial vendors and freeware
> implementors of extended precision arithmetic do not see it as worth
> the investment.

Most commercial vendors of math libraries don't have extended precision arithmetic at all; doing "automatic" precision tracking using this is fairly esoteric.

> My take is it makes a considerable amount of hybrid symbolic-numeric
> technology more accessible to the implementor. But I cannot say how
> important this might be in the grand scheme of things.
>
>> I do not know how much of the code for internal arithmetic for
>> evaluation of functions in Mathematica is devoted to bypassing these
>> arithmetic features, but based on some examples provided in this
>> newsgroup, I suspect this extra code, which is an attempt to mimic
>> arithmetic without the fuzz inserted by Mathematica, becomes
>> a substantial computational burden, and an intellectual burden on the
>> programmer, who must undo the significance arithmetic fuzz.
>
> In our internal code it is relatively straightforward to bypass.
> Effectively there are "off" and "on" switches of one line each. I do
> not know how much they get used but imagine it is frequent in some
> parts of the numerics code. For disabling in Mathematica code we have
> the oft-cited $MinPrecision and $MaxPrecision settings. I believe this
> is slated to become simpler in a future release (not sure if it is the
> next release though).

So $MinPrecision = $MaxPrecision = MachinePrecision makes everything run at machine speed? :)

>>> Where I fault Mathematica's design here is that "Real" wraps two
>>> rather different kinds of objects: fixed-precision machine numbers,
>>> and Mathematica's approximate reals. Both are useful, but
>>> understanding and controlling which kind you're using is a bit subtle.
>>> "Complex" is even more troublesome.
>>
>> I know of 4 systems which provide arbitrary precision numbers that
>> mimic the IEEE-754 arithmetic but with longer fraction and exponent
>> fields. Perhaps that would provide the unity of design concept that
>> you would prefer. One just increases the fraction length by a factor
>> of 4 (quad double); the others are arbitrary precision.
>>
>> RJF
>
> Mathematica with precision tracking disabled is fairly close to IEEE. I
> think what is missing is directed rounding modes.

OK, so it is not really very close. But many other languages make it difficult to access these rounding modes too, making it a chicken-and-egg problem. You can't write machine-independent code that uses them, so people are tempted to leave them out of language implementations, and if they could, they would even leave them out of the hardware. Or implement them in some major brain-damaged way that slows the machine down.

> There might be other
> modest digressions from rounding number of bits upward to nearest word
> size (that is to say, mantissas will come in chunks of 32 or 64 bits).
> At least some of the libraries I'm aware of do the same thing.

> For many purposes, the fixed precision arithmetic you advocate is just
> fine. Most numerical libraries, even in extended precision, will do
> things similar to what one gets from Mathematica, with (usually)
> well-studied algorithms under the hood that produce results guaranteed
> up to, or at least almost to, the last bit. This is good for
> quadrature/summation, numerical optimization and root-finding, most
> numerical linear algebra, and numerical evaluation of elementary and
> special functions.
>
> Where it falls short is in usage in a symbolic programming setting,
> where one might really need the precision tracking.

There is a choice here of forcing the precision tracking into everyone's arithmetic (unless you set flags to inhibit it) or implementing the precision tracking on top of the algorithm.

> My take is it is easier to
> have that tracking, and disable it when necessary, than to not have it
> when you need it.

An alternative would be a compiler that takes an ordinary numeric program and re-interprets it to do precision tracking, and emits the code to do so. This is fairly commonplace in the interval arithmetic community, where a (usually slightly modified) FORTRAN program P is run through some 'Interval FORTRAN' translator to produce a program P'. This program P' is compiled and linked with a library of interval routines, and the final running of the program produces interval results. There are also such compilers for arbitrary precision FORTRAN. There are various disadvantages to these systems (especially that they are not so robust with respect to error reporting), but they do not slow down the arithmetic for everyone -- just for the people doing interval arithmetic.

There are major, major advantages to taking other people's code that does the right thing in terms of reporting an answer and an appropriate bound on the error -- linear algebra code, elementary functions, all written using standard arithmetic running at full speed, and giving state-of-the-art bounds, not something cobbled together by trying to track each addition and multiplication.

> Where people with only fixed extended precision at their
> disposal try to emulate significance arithmetic in the literature, I
> generally see only smallish examples and no indication that the
> emulation is at all effective on the sort of problems one would really
> encounter in practice.

This assumes that someone (not using Mathematica) is trying to emulate significance arithmetic. I do not expect that anyone seriously interested in doing this would be substantially more or less successful than WRI in implementing it. I'm not sure what you mean by smallish examples: examples of implementing significance arithmetic, or examples of running an emulation of significance arithmetic? I think the latter interpretation is more salient.

I suspect that in any serious application of significance arithmetic, whether in Mathematica or some other package, where some particular datum undergoes thousands or millions of operations, occasionally, and perhaps frequently, numbers will be computed in which all significance will disappear. As I have pointed out in the past, something like x=2*x-x loses one bit (in Mathematica 7.0, for sure). Do it n times and for any number x, you get zero.

  For[x = 1.11111111111111111111, x > 0, Print[x = 2*x - x]]

terminates with x=0. I think this is the kind of example, smallish, that shows that a naive user relying on significance arithmetic embedded in an otherwise ordinary program will sometimes get nonsense.
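For contrast, pinning the precision with those same oft-cited settings makes the identical iteration perfectly stable (a sketch; Block restores the defaults on exit):

  Block[{$MinPrecision = 20, $MaxPrecision = 20},
   x = 1.11111111111111111111`20;
   Do[x = 2 x - x, {1000}];   (* a thousand rounds of 2x - x *)
   x]                         (* still 1.1111...: fixed precision does not erode *)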
Relying on significance arithmetic for largish examples is hazardous, where values like x above may undergo, sight unseen, many transformations. It is especially hazardous if the naive user is led to believe that significance arithmetic is his/her friend and protects against wrong answers. Clearly the value for x above has led to a branch (where x=0), even though x is not zero, but is, um, something like 0``-0.5162139529488116
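The branch is easy to reproduce directly (a sketch; these comparison results are what I would expect from the treatment of such fuzzy zeros, given that the loop above does in fact exit):

  x = 0``-0.5162139529488116;   (* "zero" known only to within about +/- 3 *)
  x == 0                        (* True: indistinguishable from zero *)
  x > 0                         (* False, which is what ends the loop *)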
RJF

From: danl on 28 Jun 2010 02:34

> [...] The IEEE-754
> binary standard embodies a good deal of the well-tested wisdom of
> numerical analysis from the beginning of serious digital computing
> through the next 40 years or so. There were debates about a few items
> that are peripheral to these decisions, such as dealing with
> over/underflow, error handling, traps, signed zeros.

To what extent is this standard really in conflict with significance arithmetic? As best I can tell, for many purposes significance arithmetic sits on top of an underlying "basic" arithmetic (one that does not track precision). That part comes reasonably close to IEEE, as best I understand it.

> Computing can model mathematics, and mathematics can model reality. At
> least that is the commonly accepted reason we still run computer
> programs in applications. Good numerical computing tools allow one to
> build specific applications. For example, one would hope that the
> computing tools would allow an efficient implementation of (say)
> interval arithmetic. This is fairly easy with IEEE-754 arithmetic,
> but much much harder on earlier hardware designs.

I'm not convinced interval arithmetic is ever easy. But I can see that having IEEE to handle directed rounding is certainly helpful.

> The basic tool that Mathematica (and many other systems) provides that
> might be considered a major extension in a particular direction is the
> arbitrary precision software. Mathematica has a different take on
> this, though, trying to maintain an indication of precision. None of
> the other libraries or systems that do arbitrary precision arithmetic
> have adopted this, so if it is such a good idea, oddly no one else has
> taken the effort to mimic it. And it is not hard to mimic, so if
> anyone were to care, it could be done easily. Apparently people do not
> want some heuristically determined "fuzz" to be mysteriously added to
> their arithmetic.

I'm not sure it is all that easy to code the underlying precision tracking. But yes, it is certainly not a huge multi-person, multi-year undertaking. I'd guess most commercial vendors and freeware implementors of extended precision arithmetic do not see it as worth the investment. My take is it makes a considerable amount of hybrid symbolic-numeric technology more accessible to the implementor. But I cannot say how important this might be in the grand scheme of things.

> I do not know how much of the code for internal arithmetic for
> evaluation of functions in Mathematica is devoted to bypassing these
> arithmetic features, but based on some examples provided in this
> newsgroup, I suspect this extra code, which is an attempt to mimic
> arithmetic without the fuzz inserted by Mathematica, becomes
> a substantial computational burden, and an intellectual burden on the
> programmer, who must undo the significance arithmetic fuzz.

In our internal code it is relatively straightforward to bypass. Effectively there are "off" and "on" switches of one line each. I do not know how much they get used but imagine it is frequent in some parts of the numerics code. For disabling in Mathematica code we have the oft-cited $MinPrecision and $MaxPrecision settings. I believe this is slated to become simpler in a future release (not sure if it is the next release though).
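That is, along these lines (a sketch of the usual idiom; the 50 is an arbitrary choice of working precision):

  Block[{$MinPrecision = 50, $MaxPrecision = 50},
   (* precision can neither shrink nor grow: fixed 50-digit arithmetic *)
   Precision[N[Pi, 50] - N[314159/100000, 50]]]   (* 50, pinned *)

Outside the Block, the same subtraction would report the six or so digits lost to cancellation, giving a precision of roughly 44.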
>> Where I fault Mathematica's design here is that "Real" wraps two
>> rather different kinds of objects: fixed-precision machine numbers,
>> and Mathematica's approximate reals. Both are useful, but
>> understanding and controlling which kind you're using is a bit subtle.
>> "Complex" is even more troublesome.
>
> I know of 4 systems which provide arbitrary precision numbers that
> mimic the IEEE-754 arithmetic but with longer fraction and exponent
> fields. Perhaps that would provide the unity of design concept that
> you would prefer. One just increases the fraction length by a factor
> of 4 (quad double); the others are arbitrary precision.
>
> RJF

Mathematica with precision tracking disabled is fairly close to IEEE. I think what is missing is directed rounding modes. There might be other modest digressions from rounding number of bits upward to nearest word size (that is to say, mantissas will come in chunks of 32 or 64 bits). At least some of the libraries I'm aware of do the same thing.

For many purposes, the fixed precision arithmetic you advocate is just fine. Most numerical libraries, even in extended precision, will do things similar to what one gets from Mathematica, with (usually) well-studied algorithms under the hood that produce results guaranteed up to, or at least almost to, the last bit. This is good for quadrature/summation, numerical optimization and root-finding, most numerical linear algebra, and numerical evaluation of elementary and special functions.

Where it falls short is in usage in a symbolic programming setting, where one might really need the precision tracking. My take is it is easier to have that tracking, and disable it when necessary, than to not have it when you need it. Where people with only fixed extended precision at their disposal try to emulate significance arithmetic in the literature, I generally see only smallish examples and no indication that the emulation is at all effective on the sort of problems one would really encounter in practice.

Daniel Lichtblau
Wolfram Research