From: Obaid Mushtaq on 5 Aug 2010 07:00

Hi,

Thanks. I still think the formula resembles the linear one except for the scaling with square roots. The looped version was on purpose, as I had to do this in C++ later on. I didn't know how to use two variables in a for loop in MATLAB, so I was doing something like that.

BR,
Obaid
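[Editor's aside on the two-variables-in-a-loop point, not from the thread, just the standard MATLAB idiom: a for loop steps through the columns of its expression, so two index variables can be advanced together by stacking them as rows. The values below are illustrative.]

xs = 1:5;                     % first set of loop values (example data)
ys = 10:2:18;                 % second set, same length as xs
for v = [xs; ys]              % v is a 2-by-1 column on each pass
    i = v(1);                 % first loop variable
    j = v(2);                 % second loop variable
    fprintf('%d %d\n', i, j);
end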
From: Matt J on 5 Aug 2010 10:46

"Obaid Mushtaq" <obaidmushtaq(a)yahoo.com> wrote in message <i3e5k6$ju9$1(a)fred.mathworks.com>...
> Thanks. I still think the formula resembles the linear one except for the scaling with square roots.
=======

Yes, and it's pretty clear what motivated that. The formula we had from before,

Ycov = T*Zcov*T.'

means that the variance vectors Var_Y = diag(Ycov) and Var_Z = diag(Zcov) are specifically related by

Var_Y = T.^2 * Var_Z

So, if we select T = sqrt(S), where S is a linear interpolation matrix, then

Var_Y = S*Var_Z

meaning that interpolating the signal with sqrt(S) is the same as linearly interpolating the variances Var_Z. Hence, if Var_Z is a constant Zvar, then likewise Var_Y = Zvar and you've preserved variance. Clearly, though, you can do this with any uniformity-preserving interpolator (e.g. B-splines).
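[Editor's note: a minimal MATLAB sketch of the variance-preservation argument above. The factor-2 upsampling, the signal length N, and the construction of S are my own illustrative assumptions, not taken from the thread.]

N = 10;                       % number of input samples (illustrative choice)
M = 2*N - 1;                  % upsampled length for factor-2 linear interpolation

% Linear-interpolation matrix S (M-by-N): odd output rows copy the input
% samples, even rows average the two neighbouring samples.
S = zeros(M, N);
S(1:2:end, :) = eye(N);
for k = 1:N-1
    S(2*k, k)   = 0.5;
    S(2*k, k+1) = 0.5;
end

T = sqrt(S);                  % elementwise square root of S

Zvar = 3;                     % constant input variance (illustrative)
Zcov = Zvar*eye(N);           % white (uncorrelated) input covariance
Ycov = T*Zcov*T.';            % output covariance under T
disp(diag(Ycov).')            % every entry equals Zvar: variance preserved

Ycov_lin = S*Zcov*S.';        % plain linear interpolation, for comparison
disp(diag(Ycov_lin).')        % interpolated midpoints drop to Zvar/2

The sqrt(S) interpolator keeps every output variance at Zvar, while plain linear interpolation halves the variance at the interpolated midpoints, which is exactly the difference the thread is discussing.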