From: Stéphane on 9 Apr 2010 14:44

"Bruno Luong" <b.luong(a)fogale.findmycountry> wrote in message <hpnra9$s8r$1(a)fred.mathworks.com>...
> Take a look at this link if you are not familiar with the FPCW:
>
> http://msdn.microsoft.com/en-us/library/aa289157%28VS.71%29.aspx
>
> Bruno

Yes, I was on that page; that is why I mentioned fp:fast, fp:strict and fp:precise (strict and precise :) ) in a previous message. No change, though... :(

Thank you,
Stéphane
From: Jan Simon on 9 Apr 2010 14:49

Dear Stéphane!

> I am not asking for code; all I want to know is whether Matlab's algorithm compensates for rounding-error amplification or not. I could not find any clue about it on the web.

Matlab uses the BLAS for matrix (and array) operations, and consequently no error compensation is applied. You can compile XBLAS and tell Matlab to use it instead. XBLAS uses the TwoSum, FastTwoSum and Split algorithms to compute error-compensated dot products and matrix multiplications. The results of the dot products are as accurate as if they were computed in quadruple precision (but rounded to doubles at the end, of course).

The BLAS can be compiled to use FMAC instructions (Fused Multiply-Accumulate), which means that t = r + s * q is computed with a single instruction and a single rounding error only; see:
http://www.cs.berkeley.edu/~wkahan/MxMulEps.pdf
Kahan states:
"A less obvious advantage of a FMAC is slightly more accurate matrix products, which tend to an extra sig. bit of accuracy because they accumulate about half as many rounding errors."

A Matlab implementation of error-compensated array functions is Rump's INTLAB: http://www.ti3.tu-harburg.de/rump/intlab/

Kind regards, Jan
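[For readers unfamiliar with the algorithms Jan names: below is a minimal, illustrative Matlab sketch of TwoSum (Knuth) and a Dekker-split TwoProd, combined into a compensated dot product in the style XBLAS uses. All function names (comp_dot, two_sum, two_prod, split) are made up for this sketch; XBLAS itself does this in compiled code.]

```matlab
function demo_compensated_dot
    % Compare a plain dot product with a compensated one on ill-scaled data.
    x = randn(1, 1e5) .* 10.^(8*randn(1, 1e5));
    y = randn(size(x));
    fprintf('plain dot:       %.17g\n', x * y.');
    fprintf('compensated dot: %.17g\n', comp_dot(x, y));
end

function d = comp_dot(x, y)
    % Compensated dot product: track the exact rounding error of every
    % product and partial sum, then fold the accumulated error back in.
    s = 0; e = 0;
    for k = 1:numel(x)
        [p, ep] = two_prod(x(k), y(k));   % p + ep == x(k)*y(k) exactly
        [s, es] = two_sum(s, p);          % s + es == old s + p exactly
        e = e + ep + es;                  % running error term
    end
    d = s + e;
end

function [s, e] = two_sum(a, b)
    % Knuth's TwoSum: s = fl(a+b), e = the exact rounding error.
    s = a + b;
    t = s - a;
    e = (a - (s - t)) + (b - t);
end

function [p, e] = two_prod(a, b)
    % TwoProd via Dekker splitting: p = fl(a*b), e = the exact error.
    p = a * b;
    [ah, al] = split(a);
    [bh, bl] = split(b);
    e = al*bl - (((p - ah*bh) - al*bh) - ah*bl);
end

function [h, l] = split(a)
    % Dekker's Split: a = h + l, each half fitting in 26 bits.
    c = 134217729 * a;   % 2^27 + 1
    h = c - (c - a);
    l = a - h;
end
```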
From: Stéphane on 9 Apr 2010 15:10

"Jan Simon" <matlab.THIS_YEAR(a)nMINUSsimon.de> wrote in message <hpnsr1$mnt$1(a)fred.mathworks.com>...
> Dear Stéphane!
>
> Matlab uses BLAS for matrix (and array) operations and in consequence no error compensation is applied.
> You can compile XBLAS and tell Matlab to use this instead. XBLAS uses the TwoSum, FastTwoSum and Split algorithms to calculate error compensated dot products and matrix multiplications. The results of the dot products are as accurate as if they are calculated with quadruple precision (but rounded to doubles finally of course).
>
> A Matlab implementation of error compensated array functions is Rump's INTLAB: http://www.ti3.tu-harburg.de/rump/intlab/
>
> Kind regards, Jan

Thank you very much, Jan. But if no compensation is applied, why are the results so different? If it were only my Strassen implementation, fine; but the classic O(n^3) three-loop matrix multiplication is also far from Matlab's result (while very close to my Strassen's). I do not know where to look for an answer now; compensation was my last idea. It is clearly a rounding-error amplification problem, though, since for small matrices I can reach the desired precision under the same test conditions.

Stéphane
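[A note on why two uncompensated algorithms can still disagree: the triple loop and the BLAS sum the same inner products in different orders (the BLAS blocks the computation and may use wider registers or FMA), and floating-point addition is not associative. A minimal sketch of the comparison Stéphane describes:]

```matlab
% Classic O(n^3) triple loop vs. MATLAB's BLAS-backed A*B.
% Both are uncompensated; element-wise differences of a few ulps
% are normal and grow with n.
n = 200;
A = 25000 * rand(n);
B = 25000 * rand(n);

C_blas = A * B;

C_loop = zeros(n);
for i = 1:n
    for j = 1:n
        s = 0;
        for k = 1:n
            s = s + A(i,k) * B(k,j);   % strict left-to-right accumulation
        end
        C_loop(i,j) = s;
    end
end

relerr = max(abs(C_loop(:) - C_blas(:)) ./ abs(C_blas(:)));
fprintf('max relative difference: %g\n', relerr);
```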
From: Jan Simon on 9 Apr 2010 16:00

Dear Stéphane!

This is another interesting paper by Kahan:
http://www.cs.berkeley.edu/~wkahan/Qdrtcs.pdf
He uses a small matrix operation to identify Matlab's rounding behaviour:

x = [1+4.656612873e-10, 1] * [1-4.656612873e-10; -1];
y = [1, 1+4.656612873e-10] * [-1; 1-4.656612873e-10];

"If x and y are both zero, MATLAB follows the first style, rounding every arithmetic operation to the same 8-byte precision as is normally used for variables stored in memory. If x and y are both -2^-62 ≈ -2.1684e-19, MATLAB follows the second style when computing scalar products of vectors and of rows and columns of matrices; these are accumulated to 64 sig. bits in 10-byte registers before being rounded back to 53 sig. bits when stored into 8-byte variables. MATLAB 6.5 on PCs follows the first style by default; to enable the second style execute the command
system_dependent('setprecision', 64)
A similar command
system_dependent('setprecision', 53)
restores the default first style."

You can test the sensitivity of your matrix computations by comparing A * B and (A+eps) * B.

Good luck, Jan
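[A sketch of the sensitivity test Jan suggests at the end. If your own implementation differs from Matlab's by about as much as the product moves under a tiny perturbation of the input, the discrepancy is within the problem's intrinsic sensitivity rather than an algorithmic error. Jan's literal (A+eps)*B adds an absolute eps; a relative one-ulp perturbation, used below, is arguably a fairer probe when the entries are large.]

```matlab
% How much does the product move under a one-ulp relative perturbation of A?
n = 1000;
A = 25000 * rand(n);
B = 25000 * rand(n);

C1 = A * B;
C2 = (A .* (1 + eps)) * B;   % perturb every entry of A by one ulp, relatively

sens = max(abs(C1(:) - C2(:)) ./ abs(C1(:)));
fprintf('relative change from a 1-ulp perturbation: %g\n', sens);
```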
From: James Tursa on 9 Apr 2010 16:05

"Stéphane " <jalisastef(a)yahoo.ca> wrote in message <hpnqll$j8d$1(a)fred.mathworks.com>...
>
> Here is the Matlab code:
>
> temp1 = rand(2000, 2000);
> temp2 = rand(2000, 2000);
>
> A = 25000 * temp1;
> B = 25000 * temp2;
>
> C = A * B;
>
> fichier = fopen('A.txt', 'w+');
> fprintf(fichier, '%16f;', A);
> fclose(fichier);
>
> fichier = fopen('B.txt', 'w+');
> fprintf(fichier, '%16f;', B);
> fclose(fichier);
>
> fichier = fopen('C.txt', 'w+');
> fprintf(fichier, '%16f;', C);
> fclose(fichier);
>
> Then, I launch my program on these matrices: A and B to perform the multiplication, which is then compared to result C.

For random matrices like these I wouldn't expect anything unusual in the accuracy of the result. I spot-checked the MATLAB result against a high-precision routine I have and saw the expected relative difference between the two results, in the e-16 range.

However, to be sure you are comparing apples to apples, why don't you write and read a binary file instead of a text file? It may be pointless to talk about the processor's rounding modes if your data transfer via a text file is introducing rounding differences in the data right up front, and I suspect this is where your real differences are coming from. What kind of relative differences are you seeing in your results vs. MATLAB's?

Also, the 25000 factor isn't really doing anything for this test: all of the relative sizes of the individual products are the same. Multiplying by 25000 simply has the qualitative effect of shifting the exponents of all the values by the same amount, but since the relative sizes between them remain the same, the accuracy of the matrix product will be the same.

James Tursa
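[A minimal sketch of James's binary-transfer suggestion. Writing with '%16f' keeps only six fractional digits, so the external program never sees the doubles MATLAB actually multiplied; fwrite with 'double' transfers the exact 8-byte IEEE values (column-major), which a C program can read back bit-for-bit with fread.]

```matlab
% Exchange the matrix in binary so the external program sees the exact
% same doubles MATLAB used, with no text-formatting round trip.
A = 25000 * rand(2000);

fid = fopen('A.bin', 'wb');
fwrite(fid, A, 'double');          % bit-exact 8-byte IEEE doubles
fclose(fid);

% Reading it back (here in MATLAB; equivalently with fread() in C):
fid = fopen('A.bin', 'rb');
A2 = fread(fid, [2000, 2000], 'double');
fclose(fid);

isequal(A, A2)                     % true: no rounding was introduced
```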