From: Soren on 12 Mar 2010 13:47

On Mar 12, 1:42 pm, "John D'Errico" <woodch...(a)rochester.rr.com> wrote:
> Soren <soren.skou.niel...(a)gmail.com> wrote in message <d309400b-f1f7-4665-81cc-0e8d12bb5...(a)q16g2000yqq.googlegroups.com>...
> > On Mar 11, 12:25 pm, "Matt J " <mattjacREM...(a)THISieee.spam> wrote:
> > > Soren <soren.skou.niel...(a)gmail.com> wrote in message <7d423e3b-cff4-4a44-a0b6-c6b47b28d...(a)r1g2000yqj.googlegroups.com>...
> > > > Hi,
> > > >
> > > > I'm trying to solve a Tikhonov regularization problem described as:
> > > >
> > > >   T = ||Ax - J||^2 + alpha*||x||^2
> > > >
> > > > which has the solution x = (alpha*I + A*A)^-1 (A*J), where I is the
> > > > identity matrix and A* means A transposed.
> > > >
> > > > The above equation gives me a solution,
> > > ===================
> > > First of all, you shouldn't be using this equation. You should be using
> > >
> > > N = length(x);
> > > x = [A; alpha*eye(N)] \ [J; zeros(N,1)];
> > >
> > > > but I have additional information (the first parameter in x must be
> > > > zero). Is there a simple way to include that constraint in the above
> > > > solution? (I.e., tie down the first parameter to zero and then find
> > > > the optimal solution from there.)
> > > =================
> > > Just delete the first column of A and the first column and row of eye(N).
> >
> > Thanks for the answer! But is cutting out the columns and rows the
> > right way if the solution still has to be under the smoothness
> > constraint alpha*||x||^2? Cutting out the columns and rows of A and
> > eye will let me force x(1) to zero, but the resulting function is then
> > not smooth from x(1) to x(2); i.e., it is the same as solving for x
> > without removing the columns and rows and then just manually setting
> > x(1) = 0.
> >
> > Am I doing something wrong, or is there another way I can do it that
> > would still keep the function smooth?
> >
> > Soren
>
> Matt is entirely correct here.
>
> Dropping out the corresponding column is equivalent to forcing that
> term to zero. Then, when you are done, insert the zero back into the
> result, so there is no question about whether it is "smooth". If you
> require that this term be zero, then this must be so.
>
> As far as a smoothness penalty goes, Matt is also correct that your
> ridge parameter is not a smoothness constraint. It is a simple bias
> towards zero for all of the unknowns in your problem.
>
> You can do a modified ridge estimator where the bias is designed to
> increase the smoothness of a specified model, but that is not what you
> are doing here. You will find such a modified ridge estimator used for
> smoothing in some of the work I have put on the File Exchange (e.g.,
> gridfit, SLMtools).
>
> John

Thank you both! I'll have a look into the modified ridge estimator then.

Soren
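To make Matt's and John's suggestions concrete, here is a minimal MATLAB sketch of the augmented backslash solve with the first coefficient pinned to zero. The problem sizes and random test data are invented purely for illustration; only the names A, J, and alpha follow the thread's notation.

% Minimal sketch: Tikhonov solve via an augmented system, with the
% first coefficient constrained to zero. The test data is invented.
m = 50; n = 10;            % illustrative problem size
A = randn(m, n);           % design matrix
J = randn(m, 1);           % data vector
alpha = 0.1;               % ridge parameter

% Unconstrained solution in the form Matt suggested:
x_full = [A; alpha*eye(n)] \ [J; zeros(n, 1)];

% Constrained solution: drop column 1 of A and the matching row/column
% of eye(n), solve the reduced system, then insert the zero back into
% the result, as John describes:
A2 = A(:, 2:end);
x_red = [A2; alpha*eye(n-1)] \ [J; zeros(n-1, 1)];
x = [0; x_red];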
From: John D'Errico on 12 Mar 2010 13:53
Soren <soren.skou.nielsen(a)gmail.com> wrote in message <0d816bee-e11d-4446-8005-21834fc9cef1(a)k17g2000yqb.googlegroups.com>...
> Thank you both! I'll have a look into the modified ridge estimator
> then.

The modified ridge estimators I use are no different in how you estimate them. You still use the same form that Matt suggested, using backslash. The difference is that the matrix is not an identity matrix, but one constructed to bias a curve or surface to be smooth. This is often accomplished by discretizing a Laplacian operator into a linear system of equations.

John
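As a sketch of the modified ridge estimator John describes, reusing A, J, and alpha from the example above: the identity matrix is replaced by a discrete second-difference (1-D Laplacian) operator D, so the penalty alpha*||D*x||^2 rewards smoothness rather than smallness. This illustrates the idea only; it is not code taken from gridfit or SLMtools.

% Sketch of a smoothness-biased (modified ridge) solve, per John's
% description. D is a discrete second-difference (1-D Laplacian)
% operator; an illustration of the idea, not gridfit/SLMtools code.
n = size(A, 2);
D = spdiags(ones(n-2, 1) * [1 -2 1], 0:2, n-2, n);  % row i: x(i) - 2*x(i+1) + x(i+2)

% Same backslash form as before, with D in place of eye(n):
x_smooth = [A; alpha*D] \ [J; zeros(n-2, 1)];

The column-deletion trick for the constraint carries over unchanged: remove the first column of both A and D, solve the reduced system, and re-insert the zero into the result.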