From: Ceren on
Hi!
Thanks for the help; it was very useful! I have a related question: is it possible to save the gradient so that I can see it in the workspace afterwards, rather than just displaying it at each iteration? I defined it as a second output of outfun:

function [stop, Y] = outfun(x, optimValues, state)
    stop = false;
    switch state
        case 'iter'
            % Append this iteration's gradient as a new row of Y
            Y(optimValues.iteration+1, :) = optimValues.gradient'
        otherwise
    end
end
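
For reference, I am registering it like this (the objective and starting point below are just placeholders for my actual setup):

    % myObjective and x0 stand in for my real problem
    options = optimset('OutputFcn', @outfun);
    [x, fval] = fminunc(@myObjective, x0, options);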

But I still could not get Y into the workspace. I'd appreciate any help!
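
One thing I wondered: is the intended pattern to share a variable with the output function instead, e.g. by nesting it inside the calling function? A sketch of what I think that would look like (fminunc, myObjective, and x0 are again placeholders):

    function [x, Y] = runWithGradientLog(x0)
        % Y lives in this function's workspace and is shared with the
        % nested output function, so it survives after fminunc returns
        Y = [];
        options = optimset('OutputFcn', @logGrad);
        x = fminunc(@myObjective, x0, options);

        function stop = logGrad(xk, optimValues, state)
            stop = false;
            if strcmp(state, 'iter')
                % Store this iteration's gradient as a row of Y
                Y(optimValues.iteration+1, :) = optimValues.gradient';
            end
        end
    end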
Thanks

Marcelo Marazzi <mREMOVEmaALLraCAPITALSzzi(a)mathworks.com> wrote in message <g84pmc$rle$1(a)fred.mathworks.com>...
> There is a built-in plot function that plots the norm of the gradient.
>
> One way to run it is to open the optimtool and select First Order
> Optimality under Plot Functions. There is a way to run this from the
> command line as well.
>
> I assume you're using the medium-scale algorithm. This algorithm
> does not compute the Hessian; instead it maintains a so-called
> quasi-Newton approximation to the inverse of the Hessian. Since
> that matrix is only an approximation (often a crude one), and to
> the inverse rather than to the Hessian itself, it's not made
> available via the output function - it's typically not useful.
>
> The output function gives you access to each iterate x; you can
> compute the Hessian (or an approximation to it via finite differences)
> at x inside the output function.
>
> -Marcelo
>
> David Doria wrote:
> > Ok, now I would like to see the hessian that it calculates
> > at every step... I don't see this in the OptimValues list...
> > is there a way to get at it?
> >
> > Dave