From: Matt J on
"John D'Errico" <woodchips(a)rochester.rr.com> wrote in message <hnr0ni$md2$1(a)fred.mathworks.com>...

> > It doesn't require it, but what if you are only interested in solutions in a particular interval?
>
> This is not an argument for fminbnd, but really an
> argument for fzero in this situation.
>
> fzero can happily work in either mode. fminbnd
> REQUIRES a bracket. So fzero is more flexible here.
=====================

No, as I posted initially, if fzero is supplied a bracket, it will work with it happily only if the function values at the bracket endpoints, f(x1) and f(x2), are of opposite sign. Otherwise, according to the doc (in R2009b), it will return an error. By contrast, fminbnd does not have this restriction.

This means that if you start with endpoints where sign( f(x1) ) = sign( f(x2) )
and you insist on using fzero, you must spend some computation searching for a smaller bracket that satisfies these requirements.
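To make that concrete, here is a small sketch in Python/SciPy (scipy.optimize.brentq has the same sign-change bracketing requirement as fzero; the test function is made up for illustration):

```python
from scipy.optimize import brentq

# f has a double root at x = 1, so the endpoint values have the SAME sign
# on [0, 2]: f(0) = 1 and f(2) = 1.
f = lambda x: (x - 1.0)**2

try:
    brentq(f, 0.0, 2.0)   # a Brent-type root finder rejects this bracket
except ValueError as err:
    print('bracket rejected:', err)
```

A sign-change bracket, e.g. [0, 2] for the function x - 1, would be accepted.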


> IF you are confident that the function truly has a
> multiple root at a point (a root of specifically even
> multiplicity), then use of fminbnd WITHOUT
> squaring the function makes some sense, but only
> in that fairly limited scenario.

There is an additional scenario, which I wouldn't consider all that limited.

Suppose you are asked to analyze an interval [x1,x2] to see if it contains a root and if so, to determine the root's location. If you had f(x1) and f(x2) of opposite sign and you knew the function to be continuous, then you know right away that the interval contains a root and fzero will find it for you.

Conversely, though, if the signs at the boundaries are the same, it's not clear what better option you have than to run fminbnd on f(x)^2.
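Sketched in Python/SciPy (with bounded minimize_scalar standing in for fminbnd; the test function is made up): minimizing f(x)^2 over the interval both locates the candidate root and tells you whether one exists, since the attained minimum is near zero exactly when f has a root there.

```python
from scipy.optimize import minimize_scalar

f = lambda x: (x - 1.0)**2        # same sign at both endpoints of [0, 2]

res = minimize_scalar(lambda x: f(x)**2, bounds=(0.0, 2.0), method='bounded')
print(res.x)      # close to 1, the root's location
print(f(res.x))   # close to 0, so [0, 2] does contain a root
```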
From: Ivan Werning on
Thanks to both of you for all the replies.

I see the issues regarding the potential non-changing of signs, the supplying of bounds, etc., if I didn't know that my function was well behaved. These are all good points and the comments here are useful. But in my current case, I am not that worried about robustness across problems in this regard. Let's say I am confident that f crosses zero once and changes sign within an interval of interest. Thus, I would call either function with that interval.

I was wondering about the potential speed in finding a "good" solution.

Thanks!

-Ivan
From: Matt J on
"Ivan Werning" <iwerningl(a)yahoo.com> wrote in message <hnrl9n$jp8$1(a)fred.mathworks.com>...

> I was wondering about the potential speed in finding a "good" solution.

That's probably going to take experimentation. The journal articles describing each algorithm are referenced in the docs for fminbnd and fzero, and you could read those to get an idea of which would be faster (I haven't).

However, just from eyeballing the descriptions there, I would say that fminbnd is going to converge faster the more linear your function is on the interval of interest.

In the extreme case where your function was exactly linear, you would be using quadratic interpolation on a quadratic function, which should reach the minimum in a single interpolation step.

Conversely, the algorithm for fzero uses, according to the doc, a combination of bisection, inverse quadratic interpolation, and secant method steps. The first two for sure wouldn't find the root of a linear function in a single step. A secant step would, but we don't know, without reading the articles, at what point the secant steps kick in...
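As a quick sanity check of the linear case (a sketch using SciPy's brentq, which implements the same Brent-style combination of bisection, inverse quadratic interpolation, and secant steps; MATLAB's counts may differ):

```python
from scipy.optimize import brentq

# Root of the linear function x - 1 with the bracket [0, 10]:
# the first secant (linear interpolation) step between the endpoints
# lands on x = 1 exactly, so very few iterations are needed.
root, info = brentq(lambda x: x - 1.0, 0.0, 10.0, full_output=True)
print(root, info.iterations, info.function_calls)
```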
From: Matt J on
"Matt J " <mattjacREMOVE(a)THISieee.spam> wrote in message <hnrok2$bel$1(a)fred.mathworks.com>...

> In the extreme case where your function was exactly linear, you would be using quadratic interpolation on a quadratic function, which should reach the minimum in a single interpolation step.
>
> Conversely, the algorithm for fzero uses, according to the doc, a combination of bisection, inverse quadratic interpolation, and secant method steps. The first two for sure wouldn't find the root of a linear function in a single step. A secant step would, but we don't know, without reading the articles, at what point the secant steps kick in...
=======================

I was curious enough to go ahead and test the above theory using the code at the bottom of this post. The results were


>> test('fzero'), test('fminbnd')

ans =

AverageTime: 4.5154e-004
FunctionEvaluations: 3
iterations: 1


ans =

AverageTime: 5.4850e-004
FunctionEvaluations: 6
iterations: 5

So, in contrast to my expectations, it looks like fminbnd was a bit slower here. However, the results are somewhat confusing: the amount by which fminbnd was slower was not at all in proportion to either the number of iterations or the number of function evaluations. It's not clear where the overhead is coming from...



%%%%%%%%%%%%%%TEST CODE%%%%%%%%%%%


function results=test(fname)
% Time either fzero (on a linear function) or fminbnd (on its square),
% averaging over Niter repeated solves.

Niter=3000;
x1=0;
x2=10;              % bracket/interval used by both solvers
options.TolX=1e-6;

switch fname

    case 'fzero'

        fun=@linfun;
        tic;
        for ii=1:Niter
            [x,fval,exitflag,output] = fzero(fun,[x1,x2],options);
        end

        results.AverageTime=toc/Niter;
        results.FunctionEvaluations=output.funcCount;
        results.iterations=output.iterations;

    case 'fminbnd'

        fun=@quadlinfun;
        tic;
        for ii=1:Niter
            [x,fval,exitflag,output] = fminbnd(fun,x1,x2,options);
        end

        results.AverageTime=toc/Niter;
        results.FunctionEvaluations=output.funcCount;
        results.iterations=output.iterations;
end

function f=linfun(x)
% Linear test function with a root at x = 1.
f=x-1;

function f=quadlinfun(x)
% Square of the linear function: minimum (double root) at x = 1.
f=(x-1).^2;
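(For readers without MATLAB: a rough SciPy analogue of the same experiment, with brentq standing in for fzero and bounded minimize_scalar for fminbnd. Absolute timings and evaluation counts will differ from the MATLAB numbers above.)

```python
import time
from scipy.optimize import brentq, minimize_scalar

def linfun(x):        # root at x = 1
    return x - 1.0

def quadlinfun(x):    # square of linfun: minimum (double root) at x = 1
    return (x - 1.0)**2

def bench(which, niter=3000):
    """Average the solve time over niter repetitions; report the
    function-evaluation count from the last solve."""
    t0 = time.perf_counter()
    for _ in range(niter):
        if which == 'brentq':
            x, info = brentq(linfun, 0.0, 10.0, xtol=1e-6, full_output=True)
            nfev = info.function_calls
        else:
            res = minimize_scalar(quadlinfun, bounds=(0.0, 10.0),
                                  method='bounded', options={'xatol': 1e-6})
            x, nfev = res.x, res.nfev
    return {'AverageTime': (time.perf_counter() - t0) / niter,
            'FunctionEvaluations': nfev}

print(bench('brentq'))
print(bench('fminbnd-analogue'))
```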
From: Ivan Werning on
I guess from your example both had the potential to get there in one iteration. However, fminbnd has to estimate a 2nd derivative, so it has to do more function evaluations. It might have done that with some error, so it took more iterations to converge.

Perhaps fzero has overhead in deciding what method to use.

Interesting analysis. Thanks.