From: Chris Gray on
nmm1(a)cam.ac.uk writes:

> Why was it a problem for graphics co-ordinates? That's one of its
> classic uses.

With an early graphical MUD, I was doing algorithmic drawing of
overhead views of things. The problem with the fixed-point stuff
was that two different ways of getting to the same point didn't
end up at exactly the same point. The result was often jaggy
diagonal lines or boxes that didn't quite touch properly. There
were two resolutions I had to deal with, so I had to scale the
numbers for one of them.

Doing pure integer co-ordinates, with the programmer knowing
which resolution was in use, was harder on the programmer,
but made for nicer graphics.
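
To make the failure mode concrete, here is a minimal sketch
(hypothetical code, not the actual AmigaMUD source) of how two
routes to the "same" 16.16 coordinate end up in different places:

    #include <stdio.h>
    #include <stdint.h>

    typedef int32_t fixed;                 /* 16.16 fixed point */
    #define FX(x)       ((fixed)((x) * 65536.0))
    #define FX_MUL(a,b) ((fixed)(((int64_t)(a) * (b)) >> 16))
    #define FX_DIV(a,b) ((fixed)(((int64_t)(a) << 16) / (b)))

    int main(void)
    {
        fixed third = FX_DIV(FX(1.0), FX(3.0));  /* 1/3, truncated */
        fixed x1 = FX_MUL(FX(640.0), third);     /* 640 * (1/3)    */
        fixed x2 = FX_DIV(FX(640.0), FX(3.0));   /* 640 / 3        */
        /* Both mean "a third of the way across", but the truncation
           error compounds differently, so x1 != x2.  When that
           sub-pixel difference straddles a pixel boundary, a line
           endpoint and a box corner computed the two ways land on
           adjacent pixels - hence the jaggies. */
        printf("x1 = %ld, x2 = %ld\n", (long)x1, (long)x2);
        return 0;
    }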

--
Experience should guide us, not rule us.

Chris Gray cg(a)GraySage.COM
http://www.Nalug.ORG/ (Lego)
http://www.GraySage.COM/cg/ (Other)
From: ChrisQ on
Bernd Paysan wrote:

> Knuth explains that quite in detail, and it's not just because FP is
> slow, but also because it may lead to inaccurate results (especially
> with the wildly differing FP implementations back at that time).
>
> AFAIK, you can use FP for glues in TeX, because it's quite easy to
> achieve what Knuth wants (filll >> fill >> fil, just have
> filll=2^128*fill=2^256*fil, and you're done). This doesn't compromise
> on accuracy, because you'll convert the glue into an integer number (the
> basic units to stretch the words) before using it to stretch the words,
> anyway. The point here is that the right side of the rightmost word
> should align to the right side of the document, no matter how much
> spacing you inserted in between, and how many rounding errors you made
> on your way. This is fairly straightforward in fixed point.
>
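
A minimal sketch (hypothetical code, not TeX's own) of the property
described above - each gap rounds, but the last gap absorbs the
accumulated error, so the right edge always lands exactly:

    #include <stdio.h>
    #include <stdint.h>

    typedef int32_t scaled;   /* TeX-style fixed point: units of 2^-16 pt */

    /* Split "total" stretch over ngaps gaps.  Each gap truncates, but
       the last one takes whatever is left, so the sum is exact and the
       rightmost word always meets the right margin. */
    static void set_gaps(scaled total, int ngaps, scaled out[])
    {
        scaled used = 0;
        int i;
        for (i = 0; i < ngaps - 1; i++) {
            out[i] = total / ngaps;        /* rounds; error accumulates */
            used += out[i];
        }
        out[ngaps - 1] = total - used;     /* exact by construction */
    }

    int main(void)
    {
        scaled gaps[7], sum = 0;
        int i;
        set_gaps(1000000, 7, gaps);        /* not divisible by 7 */
        for (i = 0; i < 7; i++) sum += gaps[i];
        printf("sum = %ld\n", (long)sum);  /* prints 1000000 */
        return 0;
    }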

Before the days of affordable word processing, I used TeX for all
documentation, running first on a 286, a uVAX and a Sun 3. It was at
once frustrating and wonderful when it did what you were expecting.
Never became completely fluent in the syntax, relying on home-made
templates to do what was needed. OTOH, DEK's hardback font design book
was one of the most beautiful (if I can use such a word) books I have
ever seen. He must have spent thousands of hours producing it. A work
of art.

Keep meaning to go back and have another look at TeX - the uVAX II
managed around 4 pages per minute, the Sun 3 around 12. It would fly on
a more modern machine, and I think there are a lot more friendly front
ends around now than in the late '80s...

Regards,

Chris
From: Ulf Samuelsson on
ChrisQ wrote:
> nmm1(a)cam.ac.uk wrote:
>> In article <87iqfpexzx.fsf(a)ami-cg.GraySage.com>,
>> Chris Gray <cg(a)graysage.com> wrote:
>>>> Eh? The reason they switched was NOT because the algorithms weren't
>>>> fixed-point ones, but because their new 'computer science' employees
>>>> didn't have a clue about scaling. Few people under 70 do :-(
>>> Hmmph! I'm not that old, yet. I used fixed point in my AmigaMUD system,
>>> since I wanted to do graphical co-ordinates that way, and the software
>>> floating point on the early Amigas was way too slow. The format was
>>> a fixed 16.16 representation. You declared the variables as type
>>> "fixed", and wrote constants with a decimal point but no exponent.
>>> The fixed-point worked fine, but it was a bad idea for graphics
>>> co-ordinates.
>>
>> I will bet that you weren't taught how to scale fixed-point numbers
>> for numerical work in a computer science course!
>>
>> Why was it a problem for graphics co-ordinates? That's one of its
>> classic uses.
>>
>>
>> Regards,
>> Nick Maclaren.
>
> For embedded work, I would only consider floating point as a last
> resort, because of the overhead and speed penalty. With fixed point, the
> trick is to scale everything to the expected range of values, which is
> nearly always known, and also to the machine word size. Anything outside
> this range is an error. E.g., a trivial example to scale up:
>


That is because there is not a lot of low-cost floating point around.
Things are changing...
I have been pestering Atmel for floating point for a long time, without
success.
I was deeply involved in the definition of the motor-control-oriented
AVR32 family, the AT32UC3Cxxx, and an FPU was very high on my wishlist,
but no success.
Then, just when the first silicon arrived, a significant customer
came back and praised the chip, but wanted to have a version
with floating point.
I raised the issue again, and was told it already *has* floating point.
Even if it is only single precision, it will work for most customers.

Got my first board last week; now I only need a floating-point-aware
compiler...
Will go and visit IAR soon to check the status.

Best Regards
Ulf Samuelsson




> u32Result = (U32)u16InValue << u8ScaleFactor;  /* widen, then shift up */
> u32Result /= u16Fraction;                      /* one integer divide   */
>
> Two lines of C and only one integer divide. Scaling down is similar. For
> trig functions, I mainly use lookup tables: e.g., a 16+1-bit sine table
> takes only 16K of memory. This sort of thing becomes part of the toolbox
> for embedded work, especially for compute-intensive stuff like graphics.
>
> Knuth's TeX package uses fixed point for all its internal work, IIRC,
> probably because the early machines were so slow.
>
> The problem with any generalised solution like float libraries is that
> they will never be as efficient as a problem-specific solution. The
> trade-off is that you have to do the work yourself. Perhaps powerful
> modern desktops are making us all lazy :-)...
>
> Regards,
>
> Chris
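
As an aside on the lookup-table point quoted above, here is a minimal
sketch (hypothetical sizes and names, not ChrisQ's actual code) of a
quarter-wave sine table with 16-bit entries, recovering the other three
quadrants by symmetry:

    #include <stdint.h>
    #include <math.h>

    #define QUARTER 4096                   /* entries per quarter wave */
    static int16_t sin_tab[QUARTER + 1];   /* +1 so sin(90 deg) is exact */

    void init_sin_tab(void)
    {
        const double pi = 3.14159265358979323846;
        int i;
        for (i = 0; i <= QUARTER; i++)
            sin_tab[i] = (int16_t)lround(32767.0 *
                             sin((double)i * pi / (2.0 * QUARTER)));
    }

    /* angle in [0, 4*QUARTER) maps to [0, 360) degrees */
    int16_t fast_sin(uint32_t angle)
    {
        uint32_t a = angle & (4 * QUARTER - 1);
        uint32_t q = a / QUARTER;          /* quadrant 0..3 */
        uint32_t i = a % QUARTER;          /* index within quadrant */
        switch (q) {
        case 0:  return  sin_tab[i];               /*   0..90   */
        case 1:  return  sin_tab[QUARTER - i];     /*  90..180  */
        case 2:  return -sin_tab[i];               /* 180..270  */
        default: return -sin_tab[QUARTER - i];     /* 270..360  */
        }
    }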
From: nmm1 on
In article <h8klat$pa5$1(a)aioe.org>, Ulf Samuelsson <ulf(a)atmel.com> wrote:
>ChrisQ wrote:
>>
>> For embedded work, I would only consider floating point as a last
>> resort, because of the overhead and speed penalty. With fixed point, the
>> trick is to scale everything to the expected range of values, which is
>> nearly always known, and also to the machine word size. Anything outside
>> this range is an error. E.g., a trivial example to scale up:
>>
>That is because there is not a lot of low cost floating point around.
>Things are changing...

Actually, things are getting worse. The problem is that floating-point
is increasingly being interpreted as IEEE 754, including every frob,
gizmo and brass knob. And the new version now specifies decimal; if
that takes off, there will be pressure to provide that, often as well
as binary - and there are two variants of decimal, too!

IBM say that it adds only 5% to the amount of logic they need, but they
have a huge floating-point unit in the POWER series. In small chips,
designed for embedding, it's a massive overhead (perhaps a factor of
two for binary and three for decimal?). I should appreciate references
to any hard, detailed information on this.

What is needed is a simplified IEEE 754 binary floating-point, which
would need less logic, be faster and have better RAS properties. It
wouldn't even be hard to do - it's been done, many times :-(


Regards,
Nick Maclaren.
From: Bernd Paysan on
ChrisQ wrote:
> Keep meaning to go back and have another look at Tex - the uVaxII
> managed around 4 pages per minute, Sun 3, around 12. Would fly on a more
> modern machine and I think there's a lot more friendly front ends around
> now than in the late 80's...

Indeed - LyX, for example, is a very friendly front end. Rendering a full
book still takes a bit of time, though mostly because book authors nowadays
put so many tricks into LaTeX that it sometimes takes 6 or 7 runs to get it
all sorted out ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/