From: Bruno Luong on
"Roger Stafford" <ellieandrogerxyzzy(a)mindspring.com.invalid> wrote in message <hscaqo$g36$1(a)fred.mathworks.com>...

>
> I seem to recall that the orthogonal-fit slope will always lie between that of the ordinary regression line on the one hand and the regression line with x and y interchanged on the other, and that the three slopes can only be equal when the original data is collinear. (I would have to brush off some mental cobwebs to prove this right now.)

Roger, that reminds me of a series of academic papers on this subject by Christopher Paige, for example "Unifying Least Squares, Total Least Squares and Data Least Squares", where he showed that they all belong to the same regression family governed by a single hyper-parameter.

Bruno
From: Roger Stafford on
"Bruno Luong" <b.luong(a)fogale.findmycountry> wrote in message <hscbj4$49m$1(a)fred.mathworks.com>...
> Roger, that reminds me of a series of academic papers on this subject by Christopher Paige, for example "Unifying Least Squares, Total Least Squares and Data Least Squares", where he showed that they all belong to the same regression family governed by a single hyper-parameter.
>
> Bruno

Yes, I suspect that "hyper-parameter" corresponds to the weights one uses. With the weighting all on the side of the y-coordinates you get ordinary regression; with the weights all on the side of the x-coordinates you obtain regression with the coordinates reversed; and with equal weights you find the orthogonal best fit, and similarly for any intermediate weighting. As you vary the weights, the slope changes monotonically and continuously between the two extreme regression values.
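
Just to make that concrete, here is a small numerical sketch. The data are made up for illustration, and the weighted-slope formula is the usual errors-in-variables (Deming-type) expression as I remember it, with delta playing the role of the weight ratio:

% Made-up data lying roughly on y = 2*x + 1
x  = (1:10).';
y  = [ 3.2  4.8  7.1  9.3 10.8 13.2 14.9 17.2 18.8 21.1].';
xc = x - mean(x);
yc = y - mean(y);
Sxx = xc.'*xc;   Syy = yc.'*yc;   Sxy = xc.'*yc;

m_yx = Sxy/Sxx;              % ordinary regression of y on x
m_xy = Syy/Sxy;              % regression of x on y, re-expressed as a y-on-x slope
[~, ~, V] = svd([xc yc], 0); % orthogonal (total least squares) fit
v = V(:,end);                % smallest right singular vector
m_tls = -v(1)/v(2);
fprintf('y on x: %.4f   orthogonal: %.4f   x on y: %.4f\n', m_yx, m_tls, m_xy);

% One-parameter family of weighted fits: small delta reproduces the x-on-y
% slope, large delta the y-on-x slope, and delta = 1 the orthogonal fit.
% The slope should move monotonically between the two extremes.
for delta = [0.01 0.1 1 10 100]
   m_w = (Syy - delta*Sxx + sqrt((Syy - delta*Sxx)^2 + 4*delta*Sxy^2))/(2*Sxy);
   fprintf('delta = %6.2f   slope = %.4f\n', delta, m_w);
end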

Roger Stafford
From: Matt J on
"Roger Stafford" <ellieandrogerxyzzy(a)mindspring.com.invalid> wrote in message <hscaqo$g36$1(a)fred.mathworks.com>...

> When you write y(x) = m^2*x + b and remove all constraints, the optimization procedure will move toward a solution in which the partial derivatives of the objective function with respect to m and b are zero, since there is no longer a constraint barrier to stop it. If the data is such that no such solution with m^2 > 0 exists, in other words if the natural regression slope would be negative, then it will presumably gravitate (if its search is successful) toward m = 0, which is then the only way it can achieve a zero partial derivative with respect to m. That was the basis of my statement earlier.
===================

Yes, Roger, that's true. I was thinking of the case where both m and b are constrained, e.g., we are solving

min ||m*x + b - y||^2

subject to m>=0 and b>=0

Once you add constraints on both variables, it is no longer trivial to predict how the constraints will affect the minimizing m (or b), for instance whether either of them ends up at zero.
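
For what it's worth, here is a rough sketch of both approaches on some made-up data whose unconstrained slope is negative (the data and variable names are mine, just for illustration). Roger's m^2 reparameterization can be minimized with fminsearch, and the doubly constrained problem is just a nonnegative least squares problem, so lsqnonneg handles it directly:

% Made-up data with a clearly negative unconstrained slope, roughly y = 20 - 1.5*x
x = (1:10).';
y = [18.4 17.1 15.6 13.9 12.4 11.2  9.3  8.1  6.4  5.2].';

p_free = polyfit(x, y, 1);              % unconstrained fit; the slope comes out negative

% Roger's m^2 reparameterization, minimized without constraints
cost = @(p) sum((p(1)^2*x + p(2) - y).^2);
p_sq  = fminsearch(cost, [1; 0]);
m_sq  = p_sq(1)^2;                      % expect this to collapse to (roughly) zero

% min ||m*x + b - y||^2  subject to  m >= 0, b >= 0
% is a nonnegative least squares problem in [m; b]:
z = lsqnonneg([x ones(size(x))], y);    % z(1) = m, z(2) = b, both forced >= 0

fprintf('free slope %.3f,  m^2 fit %.4f,  constrained m = %.4f, b = %.3f\n', ...
        p_free(1), m_sq, z(1), z(2));

Here the constraint on m should end up active (m pinned at zero) and b close to the mean of y; with other data, of course, either constraint may or may not bind.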