From: Peter Perkins on
On 5/25/2010 5:04 AM, Michael wrote:
> Although I cannot help you on your actual problem, I want to throw in
> that the usage of lscov to me only seems necessary if you have a priori
> covariance information available. If you just want to solve an
> overdetermined equation system, "\" should be totally sufficient.

That is the origin of LSCOV, yes (thus the name). But if you look at
the help since about R14, you'll find that LSCOV can be called with a
(known) cov matrix, or a (known) weight vector, or with neither. More
importantly, it returns not only regression coefficients, but also std
errs, MSE, and an estimate of the cov matrix of the coef estimates.

So if all you want is the least squares estimates themselves, backslash
is quite easy to type. If you want all the rest, LSCOV is where you
should look. They return the same coefs (as Bruno explained).
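For what it's worth, here is a minimal sketch of the multi-output call; the data and weights below are made up purely for illustration:

A = [ones(100,1) (1:100)'];          % hypothetical design matrix
b = 2 + 0.5*(1:100)' + randn(100,1); % hypothetical observations
w = ones(100,1);                     % made-up weights, all equal here

x = A\b;                             % coefficients only
[x,stdx,mse,S] = lscov(A,b,w);       % coefficients, standard errors, MSE,
                                     % and covariance of the estimates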
From: John D'Errico on
"Flyx" <flyxylf(a)hotmail.com> wrote in message <htg581$5fl$1(a)adenine.netfront.net>...
> "Michael " <michael.schmittNOSPAM(a)bv.tum.de> ha scritto nel messaggio
> news:htg3q5$j8k$1(a)fred.mathworks.com...
> > Although I cannot help you on your actual problem, I want to throw in that
> > the usage of lscov to me only seems necessary if you have a priori
> > covariance information available. If you just want to solve an
> > overdetermined equation system, "\" should be totally sufficient.
>
> Yes, I supposed so. But lscov also accepts just the two parameters A and b,
> giving a result that completely matches the "linsolve" one. Why does it
> differ from "\"? It should theoretically be the same.
>
>

Even though they should be theoretically the same, they
will never be exactly the same in practice. Floating point
arithmetic assures that any two algorithms computing the
same quantity may give slightly different results in the
least significant bits.
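
A one-line illustration of the same effect, entirely apart
from least squares (a trivial sketch, nothing specific to
your problem):

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   % returns false: the two
                                         % groupings differ in the last bit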

Next, the fact that you claim m >> n, so that the system
is strongly overdetermined, tells us absolutely NOTHING.
I can easily give you a matrix that is well-scaled, with
1000000 rows and only 2 columns, yet singular. And all
bets are off on singular matrices.

A = ones(1e6,2);
b = rand(1e6,1);

A\b
Warning: Rank deficient, rank = 1, tol = 2.2204e-07.
ans =
0.50032124992531
0

lscov(A,b)
Warning: A is rank deficient to within machine precision.
> In lscov at 197
ans =
0.500321249925306
0

Well, at least the two solutions are close, but as you
see, they differ in the LSBs. And this is with a trivial
matrix.

It could be worse if your problem is poorly scaled or
poorly conditioned. So tell us, what is the condition
number of your matrix? Don't just tell us the relative
sizes of m and n, which is no information at all.
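
Something like this will tell you (a minimal sketch; the
small matrix is just a stand-in for your own A):

A = [ones(5,1) (1:5)'];   % substitute your actual matrix here
cond(A)                   % 2-norm condition number, from the SVD
rank(A)                   % numerical rank, to within a tolerance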

John