From: Vladimir Zdorovenin on
I am trying to estimate the conditional moments of an EGARCH model, but I am confused by the discrepancy between the simulated and analytical results I get for the mean and variance.

First, I estimate an EGARCH(2,1) model:

[GARCHCoeff,Errors,LLF,e,s,Summary] = garchfit(Spec, SampleR);

Then I use garchpred() to forecast mean and variance analytically for T periods ahead:

[SigmaForecast,MeanForecast,SigmaTotal] = garchpred(GARCHCoeff,SampleR, T);

Since I estimate the model over log-returns, I calculate the T-period mean and variance as:

m = sum(MeanForecast);
s2 = SigmaTotal(end)^2;

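(My reasoning for these two lines: the T-period log-return is the sum of the one-period returns, so its conditional mean is the sum of the per-period mean forecasts, and, as I read the garchpred documentation, SigmaTotal is the volatility of the cumulative return. A toy sanity check of that aggregation in NumPy, outside the toolbox, with made-up per-period forecasts and uncorrelated increments:)

```python
# Toy sanity check (NumPy, not the GARCH Toolbox): for a sum of
# uncorrelated one-period returns, the T-period mean is the sum of
# the per-period means and the T-period variance is the sum of the
# per-period variances. mu and sigma below are made-up forecasts.
import numpy as np

rng = np.random.default_rng(0)
T = 10
mu = np.linspace(0.001, 0.005, T)     # hypothetical per-period mean forecasts
sigma = np.linspace(0.01, 0.02, T)    # hypothetical per-period volatilities

paths = rng.normal(mu, sigma, size=(50_000, T))  # uncorrelated increments
r_T = paths.sum(axis=1)                          # 50,000 T-period returns

print(abs(r_T.mean() - mu.sum()))        # small: mean of sum = sum of means
print(abs(r_T.var() - (sigma**2).sum())) # small: var of sum = sum of vars
```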
Then I simulate 50,000 return paths over T periods to estimate the higher moments of the conditional T-period returns:

[eSim,sSim,RSim0] = garchsim(GARCHCoeff, 50000, T, [], [], [], e, s, SampleR);
RSim = sum(RSim0,2); % sum over the one-period returns to get T-day returns

Since I condition on the same data, I presume I should get approximately the same results for the mean and variance of the T-period returns, calculated as:

m_sim = mean(RSim);
s2_sim = var(RSim);

However, there is a substantial difference between the analytical and simulated results, and it is consistent across different data sets SampleR. Where am I going wrong? Is there anything I am missing here?
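(To rule out pure sampling noise: with 50,000 paths the Monte Carlo standard errors of m_sim and s2_sim should be far smaller than the gap I see. A quick NumPy sketch of those standard-error formulas, with RSim replaced by a made-up stand-in for the simulated T-period returns:)

```python
# Monte Carlo standard errors of the simulated moments (NumPy sketch).
# RSim here is a hypothetical stand-in for the 50,000 simulated
# T-period returns; only the formulas matter.
import numpy as np

rng = np.random.default_rng(1)
RSim = rng.normal(0.03, 0.05, size=50_000)  # made-up T-period returns

n = RSim.size
se_mean = RSim.std(ddof=1) / np.sqrt(n)             # SE of m_sim
se_var = RSim.var(ddof=1) * np.sqrt(2.0 / (n - 1))  # SE of s2_sim (normal approx.)

print(se_mean)  # on the order of 1e-4 for these made-up numbers
print(se_var)
```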