From: Matt J on
"John D'Errico" <woodchips(a)rochester.rr.com> wrote in message <hjl9g0$s8n$1(a)fred.mathworks.com>...

> Don't be silly. Hoping that fmincon can make
> a discontinuous jump on a prayer is a foolish
> way to perform an optimization.

As I told Bruno, we have no idea how bumpy the Lagrangian is in the convex hull of the two given disjoint intervals. So, hoping the algorithm doesn't get stuck is no more a matter of prayer than in the more conventional case where we're minimizing a non-convex objective function over a convex domain.


> Starting with a point inside one subset of the
> domain space, exactly how do you expect
> fmincon to know that there is another part of
> the domain space that will yield feasible results?

Because my formulation of the constraints as a quartic polynomial captured this information.
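For concreteness, here is the sort of thing I mean (a minimal sketch; the endpoints a, b, c, d below are hypothetical stand-ins for whatever the two intervals actually are):

function [cineq, ceq] = quartic_nonlcon(x)
% Encodes x in [a,b] U [c,d] (with a<b<c<d) as one smooth inequality:
% the quartic (x-a)(x-b)(x-c)(x-d) is <= 0 exactly on the union.
a = 0; b = 0.5; c = 0.75; d = 1;   % hypothetical endpoints
cineq = (x-a).*(x-b).*(x-c).*(x-d);
ceq   = [];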



> Once fmincon finds itself inside a feasible part of
> the domain space, it will not happily jump to
> another.
>
> Optimizers are simple things. They try to walk
> downhill. When they bounce into a wall, they
> try to feel their way along it. They cannot simply
> guess to try a wildly different point that appears
> to be outside of the nonlinear constraints that
> you provide, in the expectation that you did
> something foolish like this.
======================

As I've said a few times already, fmincon does not enforce feasibility at every iteration. So, there are no walls. It can travel between different regions of the constraint set along a feasible path or an infeasible one, and in large steps or small ones.
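Anyone who does have the toolbox can check this directly: attach an OutputFcn that prints the constraint value at every iterate and watch whether the intermediate points stay feasible. A sketch (untested, since I don't have the toolbox; quartic_nonlcon is the constraint function from my earlier sketch):

function stop = logfeas(x, optimValues, state)
% OutputFcn that logs the quartic constraint value at each iterate.
% Call as, e.g.:
%   opts = optimset('OutputFcn', @logfeas);
%   fmincon(@(x)(x-0.6)^2, 0.1, [],[],[],[], 0, 1, @quartic_nonlcon, opts)
stop = false;
if strcmp(state, 'iter')
    c = quartic_nonlcon(x);   % c > 0 means this iterate is infeasible
    fprintf('iter %2d: x = %.6g, c(x) = %.3g\n', optimValues.iteration, x, c);
end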

In any case, I don't see why large steps have to be wild steps. The Newton direction can yield a very large, educated step, certainly when the function is quadratic-like. I don't know whether fmincon tests an undamped Newton step for sufficient decrease, but it would seem sensible to at least try such a step length in the first few iterations, in case the objective function turns out to be quadratic-like.
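To illustrate what I mean by a large but educated step: on an exactly quadratic function, the full (undamped) Newton step lands on the minimizer in a single move, no matter how far away you start. A toy example:

% For quadratic f(x) = 0.5*x'*H*x - b'*x, the undamped Newton step
% x - H\g jumps straight to the minimizer H\b from any starting point.
H  = [3 1; 1 2];       % positive definite Hessian
b  = [1; -2];
x0 = [100; -50];       % deliberately far from the solution
g  = H*x0 - b;         % gradient at x0
xN = x0 - H\g;         % one full Newton step
norm(xN - H\b)         % essentially zero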


>
> Try this, as a timing comparison:
>
==================

Just as a reminder, I don't have the Optimization Toolbox, so I cannot try this myself. However, the comparison appears unfair for at least the following reasons.

(1) You haven't had nonlcon return analytical derivatives. So in Case #2, the algorithm has to do extra, unnecessary work approximating the derivatives numerically (see the first sketch after this list).

(2) You're minimizing over a single contiguous interval, so in Case #1 you're not capturing the handicap of having to run fmincon twice. For a fairer comparison, you would (in Case #1 only!) first minimize over x in [0.5, 1] and then over x in [0, 0.5]; see the second sketch after this list.

(3) You have a pretty small number of constraints to iterate over. I'm thinking more of situations where the number of constraints has a much more dominant effect on the required number of iterations, but I can't immediately think of a good way to set that up...
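Regarding (1), this is roughly what I have in mind (an untested sketch, since I don't have the toolbox; same hypothetical endpoints as before). The constraint function returns its own gradient, and optimset('GradConstr','on') tells fmincon to use it rather than finite-difference it:

function [cineq, ceq, gradc, gradceq] = quartic_nonlcon_g(x)
% Quartic constraint plus its analytical gradient, so that fmincon
% (called with optimset('GradConstr','on')) skips the finite differencing.
a = 0; b = 0.5; c = 0.75; d = 1;   % hypothetical endpoints
cineq = (x-a)*(x-b)*(x-c)*(x-d);
ceq   = [];
if nargout > 2
    gradc = (x-b)*(x-c)*(x-d) + (x-a)*(x-c)*(x-d) + ...
            (x-a)*(x-b)*(x-d) + (x-a)*(x-b)*(x-c);   % product rule
    gradceq = [];
end

And regarding (2), the fairer Case #1 would look roughly like this (again untested; the objective is a made-up stand-in for whatever was used in the timing test):

% Fair version of Case #1: minimize over each subinterval separately
% (two fmincon runs) and keep the better of the two.
fun = @(x) (x - 0.6).^2;                             % hypothetical objective
[x1, f1] = fmincon(fun, 0.75, [],[],[],[], 0.5, 1);  % run 1: x in [0.5, 1]
[x2, f2] = fmincon(fun, 0.25, [],[],[],[], 0, 0.5);  % run 2: x in [0, 0.5]
if f1 <= f2, xbest = x1; else xbest = x2; end        % keep the better run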