From: Soren
Hi,

I'm trying to solve a Tikhonov regularization problem described as:

T = ||Ax - J||^2 + alpha*||x||^2

This has the solution x = (alpha*I + A*A)^-1 (A*J), where I is the
identity matrix and A* denotes A transposed.
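
A minimal MATLAB transcription of that closed form (a sketch; A, J,
and alpha assumed already defined):

% Closed-form Tikhonov solution via the normal equations.
% Note: forming A'*A explicitly squares the condition number.
N = size(A,2);                        % number of unknowns
x = (alpha*eye(N) + A'*A) \ (A'*J);   % A' is A transposed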

The above equation gives me a solution, but I have additional
information: the first parameter in x must be zero. Is there a
simple way to include that constraint in the above solution (i.e.,
tie the first parameter down to zero and then find the optimal
solution from there), or would I need to go to a search algorithm
to find a solution with that constraint?

Best,
Soren
From: Matt J
Soren <soren.skou.nielsen(a)gmail.com> wrote in message <7d423e3b-cff4-4a44-a0b6-c6b47b28d7ac(a)r1g2000yqj.googlegroups.com>...
> Hi,
>
> I'm trying to solve a Tikhonov regularization problem described as:
>
> T = ||Ax - J||^2 + alpha*||x||^2
>
> This has the solution x = (alpha*I + A*A)^-1 (A*J), where I is the
> identity matrix and A* denotes A transposed.
>
> The above equation gives me a solution,
===================

First of all, you shouldn't be using this equation. You should be using

N = size(A,2);   % number of unknowns (x is not known yet)

x = [A; alpha*eye(N)] \ [J; zeros(N,1)];


> but I have additional
> information: the first parameter in x must be zero. Is there a
> simple way to include that constraint in the above solution (i.e.,
> tie the first parameter down to zero and then find the optimal
> solution from there),
=================

Just delete the first column of A and the first column and row of eye(N).
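
In MATLAB terms, a minimal sketch of that deletion (reusing A and N
from the snippet above):

% Constrain x(1) = 0 by removing it from the unknowns.
A2 = A(:, 2:end);                     % first column of A removed
E2 = eye(N); E2 = E2(2:end, 2:end);   % same as eye(N-1)

Solve the stacked system with A2 and E2 in place of A and eye(N)
(and zeros(N-1,1) on the right-hand side), then put the constrained
zero back into x afterwards.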
From: Soren
On Mar 11, 12:25 pm, "Matt J " <mattjacREM...(a)THISieee.spam> wrote:
> Soren <soren.skou.niel...(a)gmail.com> wrote in message <7d423e3b-cff4-4a44-a0b6-c6b47b28d...(a)r1g2000yqj.googlegroups.com>...
> > Hi,
>
> > I'm trying to solve a Tikhonov regularization problem described as:
> >
> > T = ||Ax - J||^2 + alpha*||x||^2
> >
> > This has the solution x = (alpha*I + A*A)^-1 (A*J), where I is the
> > identity matrix and A* denotes A transposed.
>
> > The above equation gives me a solution,
>
> ===================
>
> First of all, you shouldn't be using this equation. You should be using
>
> N = size(A,2);   % number of unknowns (x is not known yet)
>
> x = [A; alpha*eye(N)] \ [J; zeros(N,1)];
>
> > but I have additional
> > information: the first parameter in x must be zero. Is there a
> > simple way to include that constraint in the above solution (i.e.,
> > tie the first parameter down to zero and then find the optimal
> > solution from there),
>
> =================
>
> Just delete the first column of A and the first column and row of eye(N)

Thanks for the answer! But is cutting out the columns and rows the
right way if the solution still has to satisfy the smoothness
constraint alpha*||x||^2? Cutting out the columns and rows of A and
eye lets me force x(0) to zero, but the resulting function is then
not smooth from x(0) to x(1); i.e., it is the same as solving for x
without removing the columns and rows and then just manually setting
x(0) = 0.

Am I doing something wrong, or is there another way I can do it that
would still keep the function smooth?

Soren
From: Matt J
Soren <soren.skou.nielsen(a)gmail.com> wrote in message <d309400b-f1f7-4665-81cc-0e8d12bb5f5c(a)q16g2000yqq.googlegroups.com>...

> > First of all, you shouldn't be using this equation. You should be using
> >
> > N = size(A,2);   % number of unknowns (x is not known yet)
> >
> > x = [A; alpha*eye(N)] \ [J; zeros(N,1)];
=============

I had a mistake here. This should have been

x = [A; sqrt(alpha)*eye(N)] \ [J; zeros(N,1)];
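
The square root appears because the stacked residual gets squared:
||[A; sqrt(alpha)*eye(N)]*x - [J; 0]||^2 = ||A*x - J||^2 + alpha*||x||^2.
A quick sanity check against the closed form (random data, purely
illustrative):

% Stacked least squares agrees with the normal-equations formula.
m = 20; n = 5; alpha = 0.1;
A = randn(m,n); J = randn(m,1);
x1 = (alpha*eye(n) + A'*A) \ (A'*J);
x2 = [A; sqrt(alpha)*eye(n)] \ [J; zeros(n,1)];
norm(x1 - x2)   % should be near machine precision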



> > > but I have additional
> > > information: the first parameter in x must be zero. Is there a
> > > simple way to include that constraint in the above solution (i.e.,
> > > tie the first parameter down to zero and then find the optimal
> > > solution from there),
> >
> > =================
> >
> > Just delete the first column of A and the first column and row of eye(N)
>
> Thanks for the answer! But is cutting out the columns and rows the
> right way if the solution still has to satisfy the smoothness
> constraint alpha*||x||^2? Cutting out the columns and rows of A and
> eye lets me force x(0) to zero, but the resulting function is then
> not smooth from x(0) to x(1)...

The term alpha*||x||^2 is not a smoothness penalty. It is just an energy penalty. A smoothness penalty would look something like alpha*||C*x||^2 where C is a differencing matrix. In this case, though, the approach would be the same. Just delete the first column of C.
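
For illustration, a first-difference C is one line in MATLAB (a
sketch, reusing A, J, alpha, and N from above):

% Penalize differences between neighboring entries of x.
C = diff(eye(N));                           % (N-1)-by-N differencing matrix
x = [A; sqrt(alpha)*C] \ [J; zeros(N-1,1)];

To additionally impose x(1) = 0, drop the first column of both A and
C before solving, as described above.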
From: John D'Errico
Soren <soren.skou.nielsen(a)gmail.com> wrote in message <d309400b-f1f7-4665-81cc-0e8d12bb5f5c(a)q16g2000yqq.googlegroups.com>...
> On Mar 11, 12:25 pm, "Matt J " <mattjacREM...(a)THISieee.spam> wrote:
> > Soren <soren.skou.niel...(a)gmail.com> wrote in message <7d423e3b-cff4-4a44-a0b6-c6b47b28d...(a)r1g2000yqj.googlegroups.com>...
> > > Hi,
> >
> > > I'm trying to solve a Tikhonov regularization problem described as:
> > >
> > > T = ||Ax - J||^2 + alpha*||x||^2
> > >
> > > This has the solution x = (alpha*I + A*A)^-1 (A*J), where I is the
> > > identity matrix and A* denotes A transposed.
> >
> > > The above equation gives me a solution,
> >
> > ===================
> >
> > First of all, you shouldn't be using this equation. You should be using
> >
> > N = size(A,2);   % number of unknowns (x is not known yet)
> >
> > x = [A; alpha*eye(N)] \ [J; zeros(N,1)];
> >
> > > but I have additional
> > > information: the first parameter in x must be zero. Is there a
> > > simple way to include that constraint in the above solution (i.e.,
> > > tie the first parameter down to zero and then find the optimal
> > > solution from there),
> >
> > =================
> >
> > Just delete the first column of A and the first column and row of eye(N)
>
> Thanks for the answer! But is cutting out the columns and rows the
> right way if the solution still has to satisfy the smoothness
> constraint alpha*||x||^2? Cutting out the columns and rows of A and
> eye lets me force x(0) to zero, but the resulting function is then
> not smooth from x(0) to x(1); i.e., it is the same as solving for x
> without removing the columns and rows and then just manually setting
> x(0) = 0.
>
> Am I doing something wrong, or is there another way I can do it that
> would still keep the function smooth?
>
> Soren

Matt is entirely correct here.

Dropping out the corresponding column is equivalent
to forcing that term to zero. Then when you are done,
insert the zero back into the result, so there is no question
about whether it is "smooth". If you require that this
term is zero, then this must be so.
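
Putting the whole recipe together (a sketch using the corrected
stacked form from earlier; deleting the first row and column of
eye(N) just leaves eye(N-1)):

% Drop the first unknown, solve, then reinsert the constrained zero.
N  = size(A,2);
A2 = A(:, 2:end);
x2 = [A2; sqrt(alpha)*eye(N-1)] \ [J; zeros(N-1,1)];
x  = [0; x2];    % x(1) is exactly zero, so no smoothness question arises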

As far as a smoothness penalty goes, Matt is also correct
that your ridge parameter is not a smoothness constraint.
It is a simple bias towards zero for all of the unknowns
in your problem.

You can do a modified ridge estimator where the bias
is designed to increase the smoothness of a specified
model, but this is not what you are doing here. You
would find such a modified ridge estimator used for
smoothing in some of the work I have put on the File
Exchange (e.g., gridfit, SLMtools).

John