From: Matt J on
"Jason" <jf203(a)ic.ac.uk> wrote in message <ht4bq9$4t9$1(a)fred.mathworks.com>...

>
> I will try the levenberg-marquardt option, thanks!
>
> As for your inquiry, what do you mean?
>
> The cost function is still the same.
>
> As I said, J = min_x (x' Q x).
>
> I am following a principle from computer vision where Q is the matrix representation of a conic (an ellipse). If you replace Q with adj(Q) (where adj means the adjoint, giving the dual conic), then x' Q x = 0 means that the line parametrized by x is tangent to the ellipse. The additional constraint I have given, namely that x(1) and x(2) lie on a unit circle, helps but is not strictly necessary.
=================

You mean x is a vector of homogeneous coordinates!?!

This seems like a really dubious idea. If all this is really just to fit a line, why don't you just use polyfit? It would be much faster and more robust.

At any rate, if you don't constrain homogeneous coordinates in some way, your minimization problem is virtually guaranteed to be ill-conditioned. In the case of the line

x(1)*X + x(2)*Y + x(3) = 0

for any solution x, another solution is c*x for any scalar c. This tends to create problems for optimization algorithms.
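The scale ambiguity is easy to check numerically. Here is a quick sketch (in Python/NumPy for concreteness, trivially translatable to MATLAB; the conic matrix Q is made up, just the unit circle) showing that the cost along the ray c*x scales like c^2, so there is no isolated minimizer and the degenerate x = 0 zeroes the cost outright:

```python
import numpy as np

# Made-up conic: the unit circle X^2 + Y^2 - 1 = 0 in matrix form
Q = np.diag([1.0, 1.0, -1.0])

def J(x):
    # cost x' Q x for a homogeneous 3-vector x
    return x @ Q @ x

x = np.array([1.0, 2.0, 3.0])   # an arbitrary (non-tangent) line
c = 3.0

# Scaling the homogeneous vector rescales the cost by c^2 ...
print(J(c * x) == c**2 * J(x))   # True: same line, different cost

# ... and the degenerate x = 0 makes the cost vanish entirely
print(J(np.zeros(3)))            # 0.0
```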
From: Jason on
> You mean x is a vector of homogeneous coordinates!?!
>
> This seems like a really dubious idea. If all this is really just to fit a line, why don't you just use polyfit? It would be much faster and more robust.
>
> At any rate, if you don't constrain homogeneous coordinates in some way, your minimization problem is virtually guaranteed to be ill-conditioned. In the case of the line
>
> x(1)*X + x(2)*Y + x(3) = 0
>
> for any solution x, another solution is c*x for any scalar c. This tends to create problems for optimization algorithms.

Yes, x is a vector of homogeneous coordinates!

I agree with what you say: "for any solution x, another solution is c*x for any scalar c." I experienced this myself, but what can I do about it?

I am reading about how to do this using polyfit, but I have no idea how to start, since, as I said, I want to compute the common tangent to a set of ellipses, which is found when x'Qx = 0.

Say I have a set of ellipses, either in matrix form or written out as polynomials. How do I know whether the line I am trying to fit is tangent to these ellipses?
From: Jason on
I have given this some thought, and I don't think polyfit is applicable for my purposes.

I want an analytical solution, first of all, and secondly I don't know how to compute the common tangent to, say, 4 ellipses using a method other than the one I have outlined.

Many thanks though.
From: Matt J on
"Jason" <jf203(a)ic.ac.uk> wrote in message <ht4feg$q4b$1(a)fred.mathworks.com>...
> > You mean x is a vector of homogeneous coordinates!?!

> I agree with what you say: "for any solution x, another solution is c*x for any scalar c." I experienced this myself, but what can I do about it?
>
======================

Well, you've already answered that question. By applying the constraint x(1)^2 + x(2)^2 = 1, for example, you not only get rid of the continuous 1D family of non-unique solutions c*x, but you also exclude the degenerate solution x = 0 from the search.

I'm emphasizing this because you still seem to think the constraint is unnecessary, when in fact it has very important consequences for the performance of the minimization algorithm.


> I am reading about how to do this using polyfit, I have no idea how to start, since as I said I want to compute the common tangent to a set of ellipses which is solved if x'Qx = 0.
===============

No, this is the first time you've explicitly said anything about multiple ellipses. However, now that I understand the problem, I have a few more options to recommend.

First, you can divide the problem into a minimization over two separate regions

(1) The region where we constrain x(3)=0, i.e. where the tangent line passes through the origin.

(2) The region where we constrain x(3)=1, i.e., the region where the tangent line does not pass through the origin. Note that this is equivalent to minimizing over the entire region x(3)~=0 because in homogeneous coordinates x(3)~=0 and x(3)=1 form an equivalence class.

Once you've found the minimum in each case, you can pick the one with the least value of J.

Once we've constrained x(3) to either 0 or 1, there are a few consequences:

(I) The cost function J(x) reduces to a 2D function of x(1) and x(2)

(II) It becomes relatively easy to search the 2D space for the global solution by parametrizing the space as
x(1) = t*cosd(alpha)
x(2) = t*sind(alpha)

For each fixed alpha, the cost function J reduces to a 4th-order 1D polynomial in t, which can therefore be minimized rapidly, analytically, and globally in t. You can do this in a loop over a finely sampled range of angles alpha, say alpha = 0:0.1:180, until you find a good approximation of the global minimum over all alpha and t, and hence over all x(1) and x(2). This should be fairly quick, considering how easy it is to minimize 1D polynomials.
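To make the alpha-scan concrete, here is a sketch (Python/NumPy for illustration, easy to port to MATLAB). Two assumptions on my part, since the thread doesn't pin them down: the multi-ellipse cost is taken as J = sum_i (x' Q_i x)^2 with each Q_i already replaced by its adjoint (dual conic), and x(3) = 1 as in region (2). The two example conics are made-up unit circles centered at (0,0) and (3,0):

```python
import numpy as np

# For a 3x3 conic Q, the adjoint (dual conic) is adj(Q) = det(Q) * inv(Q)
def dual(Q):
    return np.linalg.det(Q) * np.linalg.inv(Q)

# Made-up example conics: unit circles at (0,0) and (3,0) in matrix form
Q1 = np.array([[1.0, 0.0,  0.0], [0.0, 1.0, 0.0], [ 0.0, 0.0, -1.0]])
Q2 = np.array([[1.0, 0.0, -3.0], [0.0, 1.0, 0.0], [-3.0, 0.0,  8.0]])
Qs = [dual(Q1), dual(Q2)]

def quad_in_t(Q, c, s):
    # With x = [t*c, t*s, 1]:  x'Qx = p2*t^2 + p1*t + p0
    p2 = c * (Q[0, 0] * c + Q[0, 1] * s) + s * (Q[1, 0] * c + Q[1, 1] * s)
    p1 = 2.0 * (Q[0, 2] * c + Q[1, 2] * s)
    p0 = Q[2, 2]
    return np.array([p2, p1, p0])

best_J, best_x = np.inf, None
for alpha in np.arange(0.0, 180.0, 0.1):
    c, s = np.cos(np.radians(alpha)), np.sin(np.radians(alpha))
    # J(t) = sum_i (x'Q_i x)^2 is a quartic in t for this fixed alpha
    quartic = sum(np.polymul(q, q) for q in (quad_in_t(Q, c, s) for Q in Qs))
    crit = np.roots(np.polyder(quartic))     # cubic derivative: closed form
    real = np.real(crit[np.abs(np.imag(crit)) < 1e-9])
    for t in real:
        val = np.polyval(quartic, t)
        if val < best_J:
            best_J, best_x = val, np.array([t * c, t * s, 1.0])
```

With the two circles above this recovers a common tangent (e.g. Y = 1, i.e. x proportional to (0, -1, 1)) with best_J essentially 0. Region (1), lines through the origin, would be handled the same way with x = [cos(alpha), sin(alpha), 0] and no t at all.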

A related but slightly less robust approach would be to code your own version of the coordinate descent algorithm, in which you alternately minimize over each x(i) while the other x(j) are held fixed. Here again, J reduces to a quartic polynomial in x(i), which can be globally/analytically minimized in each iteration. Because you are globally minimizing at each step, you have a better-than-average chance of not getting stuck in a local minimum. This approach can be applied either to the original 3D problem or to one of the above reductions to 2D, although the latter would be better for reasons already discussed.
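The coordinate-descent variant can be sketched the same way (again assuming, hypothetically, the cost J = sum_i (x' Q_i x)^2 over dual-conic matrices, with x(3) fixed at 1; the two dual conics are the made-up unit-circle duals from the example above):

```python
import numpy as np

def coord_descent(Qs, x0, iters=100):
    # Alternately minimize J(x) = sum_i (x'Q_i x)^2 over x[0] and x[1],
    # keeping x[2] fixed at 1.  Each 1D subproblem is a quartic, minimized
    # globally via the real roots of its cubic derivative, so J never
    # increases from one step to the next.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for k in (0, 1):
            others = [i for i in range(3) if i != k]
            polys = []
            for Q in Qs:
                a = Q[k, k]                              # coeff of x_k^2
                b = 2.0 * Q[k, others] @ x[others]       # coeff of x_k
                c = x[others] @ Q[np.ix_(others, others)] @ x[others]
                polys.append(np.array([a, b, c]))        # x'Qx as poly in x_k
            quartic = sum(np.polymul(p, p) for p in polys)
            crit = np.roots(np.polyder(quartic))
            real = np.real(crit[np.abs(np.imag(crit)) < 1e-9])
            x[k] = real[np.argmin(np.polyval(quartic, real))]
    return x

def J(x, Qs):
    return sum(float(x @ Q @ x) ** 2 for Q in Qs)

# Duals of unit circles at (0,0) and (3,0), as in the previous sketch
Qd1 = np.diag([-1.0, -1.0, 1.0])
Qd2 = np.array([[8.0, 0.0, 3.0], [0.0, -1.0, 0.0], [3.0, 0.0, 1.0]])
x0 = np.array([0.3, -0.8, 1.0])     # arbitrary starting line
x_hat = coord_descent([Qd1, Qd2], x0)
```

Note the monotone-decrease property: because each inner step is a global 1D minimization, J(x_hat) can never exceed J(x0), though the final point may still be a local minimum of the joint problem, which is why I'd call it slightly less robust than the alpha scan.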
From: Jason on
"Matt J " <mattjacREMOVE(a)THISieee.spam> wrote in message <ht4urj$p85$1(a)fred.mathworks.com>...
> [snip]

Dear Matt,

Your reply was excellent. This is exactly what I have implemented now.

Many thanks.

Kind regards,
Jason