From: Biswa on
Hi there,
I am trying to minimise an objective function after solving a set of ODEs in each iteration. The parameters that define the ODEs have bounds, and these are the inputs to the fmincon function. Now, I would like to additionally penalise the objective based on some state variables at the end of my ODE integration. A typical way of doing this is:

new objective = old objective + beta*(error function)

I see that fmincon certainly doesn't adhere to the penalty function. An alternative is to use an additional Lagrange-multiplier-based approach to limit the error function to <= some bound. Can you suggest a way to incorporate this?

Thanks,
R
From: Alan Weiss on

I am not at all sure that I understand you. What do you mean by "...fmincon certainly doesn't adhere to the penalty function"?

If you have a nonlinear constraint function that you are trying to bound
(the constraint function might come as a result of solving your ODE), see
http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/brhkghv-11.html#brhkghv-16
for the syntax of writing constraints.
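
For reference, a nonlinear constraint function for fmincon has this general shape (the particular constraint below is only a made-up illustration, not your ODE problem):

function [c, ceq] = mycon(x)
% fmincon treats x as feasible when every c(i) <= 0 and every ceq(i) == 0
c   = x(1)^2 + x(2)^2 - 1;   % example inequality: stay inside the unit disk
ceq = [];                    % no equality constraints in this example
end

You then pass it as the ninth input: fmincon(@myobj,x0,[],[],[],[],lb,ub,@mycon,options).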

It is possible that you will get better results by setting the
DiffMinChange option to something larger than the default, say 1e-4. You
might also want to adjust TolX to be somewhat larger than its default value.
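
For example (the values here are only illustrative; pick them to match the scaling of your problem):

options = optimset('DiffMinChange', 1e-4, ...  % larger minimum finite-difference step
    'TolX', 1e-6);                             % looser step-size tolerance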

You might want to avoid solving the ODE twice, once for the objective
function and once for the nonlinear constraint function. To solve the
ODE just once, use the technique described in this thread:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/269936#707304
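
In outline, the technique looks something like this (a rough sketch only; odeRhs, computeObjective, and computeConstraint stand in for whatever you already use):

function [objfun, confun] = makeSharedHandles(tspan, y0)
% Returns objective and constraint handles that reuse a single ODE solve
% whenever fmincon evaluates both at the same parameter vector p.
pLast = []; fLast = []; cLast = [];
objfun = @objective;
confun = @constraint;

    function f = objective(p)
        if ~isequal(p, pLast), solveOnce(p); end
        f = fLast;
    end

    function [c, ceq] = constraint(p)
        if ~isequal(p, pLast), solveOnce(p); end
        c   = cLast;   % fmincon enforces c <= 0
        ceq = [];      % no equality constraints
    end

    function solveOnce(p)
        [t, Y] = ode15s(@(t,y) odeRhs(t, y, p), tspan, y0);
        fLast = computeObjective(t, Y);    % objective value from the states
        cLast = computeConstraint(t, Y);   % constraint value from the states
        pLast = p;
    end
end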

Alan Weiss
MATLAB mathematical toolbox documentation
From: B on
Dear Alan,

Many thanks for the reply, and apologies for my incompetence in explaining the problem. Let me try once more - I have a set of ODEs (around 60) with 10 parameters (to be optimized) that I solve at each iteration of fmincon. ode15s returns a state matrix of size 60-by-(number of time points). My main objective function is the averaged value of one of the variables in this state vector (say variable 40 of the ODE state vector). If fmincon only has to minimize this objective, it works fine, BUT the moment I include a constraint, e.g. that the averaged value of, say, variable 50 in the ODE state vector has to be <= 10, fmincon doesn't fulfil this constraint. I have been incorporating this constraint as:

new objective = old objective (avg. value of variable 40) + big number (if avg value of var 50 > 10).

This is simply a penalty-based method and should work in principle, but it doesn't. So the second option is to impose, as a constraint, that the averaged value of variable 50 in the ODE state vector is <= 10. After your email, I declared the average value of variable 50 as a global variable and then wrote a separate nonlinear constraint file imposing the <= 10 constraint (also accessing variable 50 through its global scope), but this certainly hasn't worked so far. Please advise.

Thanks, B.
From: Alan Weiss on
OK, I think I understand now.

It is best if you have a smooth constraint. If I understand you, you add
a big number if the average value of a component of your ODE solution
(V) goes above a certain amount (10). Instead, make your nonlinear
constraint function equal to
V - 10
This way, fmincon knows which direction to move to make the constraint
function smaller. If you simply add a big number, it can't tell which
way to go.
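
Concretely, a constraint function along these lines (just a sketch; odeRhs, tspan, and y0 stand for whatever you already use to solve your system) expresses "the average of variable 50 is at most 10" in the form fmincon expects:

function [c, ceq] = avgStateConstraint(p, tspan, y0)
[~, Y] = ode15s(@(t,y) odeRhs(t, y, p), tspan, y0);
c   = mean(Y(:,50)) - 10;   % satisfied when the average of state 50 is <= 10
ceq = [];                   % no equality constraints
end

Pass it through an anonymous function, e.g. @(p) avgStateConstraint(p, tspan, y0), so that fmincon sees a function of p alone. Written this way the ODE is solved again inside the constraint; combining it with the shared-computation technique from my earlier message avoids the duplicate solve.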

The different algorithms in fmincon have different characteristics. I
would certainly try the interior-point algorithm first for your problem.
The sqp or active-set algorithms are also worth trying, but not on the first
attempt.

Let me reiterate my recommendation that you set the DiffMinChange option
to a larger value than the default, maybe 1e-4 (depending on the scale
of your problem). Solutions of differential equations are sometimes
noisy, and you want to make sure to take a large enough finite
difference step that the noise is overcome. You also might want to use
central finite differences (set option FinDiffType to 'central').
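
Putting those suggestions together, an options structure might look something like this (the numerical values are only a starting guess):

options = optimset('Algorithm', 'interior-point', ...
    'DiffMinChange', 1e-4, ...   % step over any noise in the ODE solution
    'FinDiffType', 'central');   % central finite differences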

Are you starting from a feasible point, one where the constraints are
satisfied? If not, try starting from various points manually until you
find a feasible starting point.

I would avoid global variables by using nested functions. See
http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/brhkghv-7.html
and the thread
http://www.mathworks.com/matlabcentral/newsreader/view_thread/269936#707304
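
A rough end-to-end sketch of how the pieces fit together without any globals (makeSharedHandles is the hypothetical wrapper from my earlier message; p0, lb, and ub are your initial guess and parameter bounds):

[objfun, confun] = makeSharedHandles(tspan, y0);   % handles share one ODE solve per p
options = optimset('Algorithm', 'interior-point', ...
    'DiffMinChange', 1e-4, 'FinDiffType', 'central');
[pOpt, fval, exitflag] = fmincon(objfun, p0, [], [], [], [], lb, ub, confun, options);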

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation
