From: glen herrmannsfeldt on 3 Aug 2010 00:03

Manny <mloulah(a)hotmail.com> wrote:
> On Aug 3, 2:37 am, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
>> For FFT, block floating point (one exponent for the whole vector)
>> is probably best, but there isn't much support in hardware.

> There are plenty of FFT core variants out there. Many do internal
> scalings to keep the dynamic range fully utilized in order to
> compensate for loss of precision. Some people write software that
> generate custom fixed-point designs that are better than your typical
> floating point equivalent. And when you have hardware that is
> configurable down to the gate level, sky is the limit.

And, at least on current FPGAs, floating point isn't very easy to do.
Well, maybe a little better with the 6LUTs on newer families. The hard
part is the shifter needed for prenormalization and postnormalization.
Much bigger than the significand adder.

-- glen
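[The block floating point glen describes — one shared exponent for the whole vector, with the block scaled so the largest sample nearly fills the mantissa range — can be sketched roughly as below. A minimal Python illustration, not HDL; the function name and the 16-bit mantissa width are my own assumptions.]

```python
import math

def block_float(x, mant_bits=16):
    """Quantize a vector to block floating point: integer mantissas
    sharing one exponent, so x[i] ~= mant[i] * 2**exp.

    The whole block is scaled so the largest sample (almost) fills
    the signed mant_bits-bit range -- one exponent for the vector.
    """
    peak = max(abs(v) for v in x)
    if peak == 0.0:
        return [0] * len(x), 0
    _, e = math.frexp(peak)          # peak = m * 2**e, 0.5 <= m < 1
    exp = e - (mant_bits - 1)        # shared exponent for the block
    # Truncate toward zero so mantissas stay strictly inside the
    # signed range even when peak is just under a power of two.
    mant = [int(v * 2.0 ** -exp) for v in x]
    return mant, exp
```

An FFT core using this rescales the block between butterfly stages, which is the internal scaling Manny mentions.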
From: Vladimir Vassilevsky on 3 Aug 2010 01:11

glen herrmannsfeldt wrote:
> Vladimir Vassilevsky <nospam(a)nowhere.com> wrote:
>> glen herrmannsfeldt wrote:
>>> Floating point is way overused, especially in DSP.

>> Huh? Try inverting mere 3x3 matrix in a fixed point.

> How often (fraction of the time) does DSP do 3x3 matrix
> inversion?

It would be good to do it at every sample, and with much bigger
matrices. Matrix inversion is the main operation in adaptive filtering.

>> How about something as simple as sqrt(x^2 + y^2 + z^2) ?

> Not so hard with enough bits in fixed point. At some point
> there is a tradeoff between exponent bits and adding
> more bits to the fixed point value.

It is very labor-intensive to work out a more or less complicated
algorithm in fixed point while avoiding overflows and underflows. It is
often impossible to solve the problem by increasing the bit width, as
the intermediate results could be ridiculously wide in the worst case.
Something like x/y or x^n is a killer; you have to think out ways
around it.

BTW, recently I ran out of 4-byte float scale :-) How about 256+ bit
integers?

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
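[Vladimir's sqrt(x^2 + y^2 + z^2) example shows the intermediate-width growth concretely. A rough sketch — the Q1.15 format choice and helper name are my own, not from the thread: squaring a 16-bit Q1.15 value gives a 32-bit Q2.30 product, summing three of them needs two more bits, and only the integer square root brings the result back down to 16 bits.]

```python
import math

def magnitude_q15(x, y, z):
    """sqrt(x^2 + y^2 + z^2) for Q1.15 fixed-point inputs (|v| <= 32767).

    The accumulator must hold up to ~34 bits before the square root --
    exactly the worst-case width growth being discussed.
    """
    acc = x * x + y * y + z * z      # three Q2.30 products: up to 34 bits
    # isqrt of a Q2.30-scaled sum gives a Q1.15-scaled result, since
    # sqrt(acc / 2**30) == isqrt(acc) / 2**15.
    return math.isqrt(acc)
```

For example, x = y = z = 16384 (0.5 in Q1.15) gives 28377, i.e. about 0.866 = sqrt(0.75). Something like x/y has no such fixed bound on the result's magnitude, which is why Vladimir calls it a killer.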
From: robert bristow-johnson on 3 Aug 2010 01:12

On Aug 3, 12:00 am, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
> Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
>> glen herrmannsfeldt wrote:
>>> Floating point is way overused, especially in DSP.

>> How about something as simple as sqrt(x^2 + y^2 + z^2) ?

> Not so hard with enough bits in fixed point. At some point
> there is a tradeoff between exponent bits and adding
> more bits to the fixed point value.

actually, i don't think it's a problem at all with a fixed-point
processor and a double-wide accumulator. you have to settle, at the
start, on how accurate you want the sqrt() function to be. if you don't
need it bit-accurate, you can use a simple and cheap polynomial
approximation. if you need it bit-accurate, computing 1/sqrt() using
the Newton-Raphson method is still pretty cheap.

r b-j
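[The Newton-Raphson 1/sqrt() iteration r b-j refers to can be sketched as follows. Floating point here for clarity; a real fixed-point routine would seed from a small lookup table rather than frexp, and the seed choice and iteration count below are my own assumptions.]

```python
import math

def recip_sqrt(x, iterations=6):
    """1/sqrt(x) via Newton-Raphson: y <- y * (1.5 - 0.5 * x * y * y).

    Each pass roughly doubles the number of correct bits, and the loop
    body is only multiplies and adds -- cheap on a fixed-point MAC,
    with no divide anywhere.
    """
    assert x > 0.0
    # Crude seed: halve the binary exponent of x (a DSP implementation
    # would use a small table lookup here instead).
    _, e = math.frexp(x)             # x = m * 2**e with 0.5 <= m < 1
    y = 2.0 ** (-e / 2)
    for _ in range(iterations):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

def cheap_sqrt(x):
    """sqrt(x) recovered as x * (1/sqrt(x))."""
    return 0.0 if x == 0.0 else x * recip_sqrt(x)
```

Multiplying the result by x at the end gives sqrt(x) itself, which also yields the sqrt(x^2 + y^2 + z^2) magnitude without a division.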
From: Fred Marshall on 3 Aug 2010 14:38
glen herrmannsfeldt wrote:
> Manny <mloulah(a)hotmail.com> wrote:
>> On Aug 3, 2:37 am, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
>>> For FFT, block floating point (one exponent for the whole vector)
>>> is probably best, but there isn't much support in hardware.

>> There are plenty of FFT core variants out there. Many do internal
>> scalings to keep the dynamic range fully utilized in order to
>> compensate for loss of precision. Some people write software that
>> generate custom fixed-point designs that are better than your typical
>> floating point equivalent. And when you have hardware that is
>> configurable down to the gate level, sky is the limit.

> And, at least on current FPGAs, floating point isn't very
> easy to do. Well, maybe a little better with the 6LUTs on
> newer families. The hard part is the shifter needed for
> prenormalization and postnormalization. Much bigger than
> the significand adder.

> -- glen

I had a consultant who showed that FFT noise was better (i.e. lower)
with fixed point on a TMS320C80 vs. floating point (probably 32-bit
8.24). I don't remember the assumptions, and the word length on the
'C80 was what it was.

Fred