From: Torben on
Hello guys out there,

I'm trying to find out how the gevfit function, or parts of it, works.

I know it is based on the maximum likelihood function, which means you take partial derivatives df/dparameter of the distribution f with respect to each of its parameters.
With this knowledge I tried to find out whether this part of the gevfit function computes these partial derivatives. If not, what does it do?

function nll = negloglike(parms, data)
% Negative log-likelihood for the GEV (log(sigma) parameterization).
k       = parms(1);
lnsigma = parms(2);
sigma   = exp(lnsigma);
mu      = parms(3);

n = numel(data);
z = (data - mu) ./ sigma;

if abs(k) > eps
    t = 1 + k.*z;
    if min(t) > 0
        lnu = log1p(k.*z);       % log(1 + k.*z)
        t = exp(-(1/k)*lnu);     % (1 + k.*z).^(-1/k)
        nll = n*lnsigma + sum(t) + (1+1/k)*sum(lnu);
    else
        % The support of the GEV requires 1 + k*z > 0.
        nll = Inf;
    end
else % limiting extreme value (Gumbel) dist'n as k -> 0
    nll = n*lnsigma + sum(exp(-z) + z);
end
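In case it helps to poke at it outside MATLAB, here is a rough Python/NumPy translation of that function (my own sketch; the structure mirrors the MATLAB code above, but the names are mine):

```python
import numpy as np

def negloglike(k, lnsigma, mu, data):
    """Negative log-likelihood for the GEV (log(sigma) parameterization),
    mirroring the MATLAB snippet above."""
    sigma = np.exp(lnsigma)
    n = data.size
    z = (data - mu) / sigma

    if abs(k) > np.finfo(float).eps:
        if np.min(1 + k * z) > 0:
            lnu = np.log1p(k * z)            # log(1 + k*z)
            t = np.exp(-lnu / k)             # (1 + k*z)^(-1/k)
            return n * lnsigma + np.sum(t) + (1 + 1 / k) * np.sum(lnu)
        # Outside the GEV support (1 + k*z must be positive everywhere).
        return np.inf
    # Limiting (Gumbel) case as k -> 0.
    return n * lnsigma + np.sum(np.exp(-z) + z)
```

Note that nothing in here differentiates anything; it just evaluates minus the log of the likelihood at the given parameter values.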

Thanks

sts
From: Peter Perkins on
Torben wrote:

> With this knowledge I tried to find out whether this part of the gevfit function computes these partial derivatives. If not, what does it do?
>
> function nll = negloglike(parms, data)
> % Negative log-likelihood for the GEV (log(sigma) parameterization).

Torben, it does just what it says it does: calculates the negative log-likelihood for the GEV. From code earlier in GEVFIT, you can see that it is the objective function in a call to FMINSEARCH:

% Maximize the log-likelihood with respect to k, lnsigma, and mu.
[parmhat,nll,err,output] = fminsearch(@negloglike,parmhat,options,x);
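
So negloglike is simply the objective handed to a derivative-free simplex search, which drives the negative log-likelihood down. The same pattern in Python would look roughly like this (my own sketch using SciPy, not MATLAB's actual code; note that scipy's genextreme shape parameter c corresponds to -k in the MATLAB convention):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Simulate a GEV sample with k = 0.2, sigma = 2, mu = 1
# (scipy's shape c is the negative of MATLAB's k).
rng = np.random.default_rng(0)
x = genextreme.rvs(-0.2, loc=1.0, scale=2.0, size=500, random_state=rng)

def negloglike(parms):
    k, lnsigma, mu = parms
    return -genextreme.logpdf(x, -k, loc=mu, scale=np.exp(lnsigma)).sum()

# Nelder-Mead is the same derivative-free simplex method FMINSEARCH uses.
res = minimize(negloglike, x0=[0.0, np.log(x.std()), x.mean()],
               method="Nelder-Mead")
k_hat, sigma_hat, mu_hat = res.x[0], np.exp(res.x[1]), res.x[2]
```

The starting values here are just crude guesses; the real GEVFIT computes its own starting point before calling FMINSEARCH.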
From: Torben on
Peter,

Thanks.
I haven't yet fully understood that file.

1) With log maximum likelihood, you take the partial derivative with respect to every parameter of the distribution function and then find the points where the partial derivatives are zero.

2) However, the function tries to find the minimum, doesn't it? Shouldn't it try to find the zeros of the derivatives?

Certainly everything in the file is right; I'm just not getting why.

Maybe you can be more precise in your explanation.

Torben
From: Peter Perkins on
Torben wrote:
> Peter,

> 1) Log maximum likelihood is that you do partial derivation for every parameter of the distribution function and then find the places where the partial derivation are zero.

Maximum likelihood estimation is when you find the parameter values (hopefully unique) that maximize the (hopefully bounded) likelihood. The log is a monotonic transformation that often makes the maximization more convenient. Finding the zeros of the gradient of the log-likelihood is one way to find that maximum, but it is not the method used in GEVFIT. GEVFIT maximizes the log-likelihood directly, by minimizing the negative log-likelihood with FMINSEARCH. There are no derivatives.
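
To see how the two views connect, here is a small sketch of my own (in Python, using the normal distribution because its gradient-zero solution has a closed form): direct derivative-free minimization of the negative log-likelihood lands on exactly the point where the partial derivatives vanish.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=1.5, size=1000)

def nll(parms):
    # Negative log-likelihood of N(mu, sigma^2), additive constants dropped.
    mu, lnsigma = parms
    sigma = np.exp(lnsigma)
    return x.size * lnsigma + np.sum((x - mu) ** 2) / (2 * sigma ** 2)

# Route 1: direct minimization, no derivatives (Nelder-Mead, as in FMINSEARCH).
res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Route 2: set the partial derivatives to zero; for the normal these have
# closed-form solutions.
mu_closed = x.mean()
sigma_closed = x.std()   # MLE of sigma uses 1/n, np.std's default
```

Both routes arrive at the same parameter values; GEVFIT just takes the first one, because for the GEV the gradient-zero equations have no closed-form solution.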