From: Anna Wozniak

I am conducting 3-parameter ML estimation on two separate data samples with the intent of comparing the parameters of the different samples.

A colleague suggested that rather than running two separate estimates (one on each sample), I should redo the likelihood to contain 3x2=6 parameters, using indicators for sample membership, and run a single estimate on the pooled sample.
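To make this concrete, here is a rough sketch (in Python/SciPy) of what I understand the pooled, indicator-based likelihood to be. The skew-normal family, the fake data, and every name below are placeholders I made up for illustration, not my actual model:

import numpy as np
from scipy import stats, optimize

# Fake stand-in data for the two samples.
rng = np.random.default_rng(0)
x0 = stats.skewnorm.rvs(2.0, loc=0.0, scale=1.0, size=200, random_state=rng)
x1 = stats.skewnorm.rvs(4.0, loc=0.5, scale=1.2, size=200, random_state=rng)
x = np.concatenate([x0, x1])                 # pooled sample
g = np.concatenate([np.zeros(len(x0), int),  # 0/1 sample-membership indicator
                    np.ones(len(x1), int)])

def negloglik6(theta, x, g):
    # theta = (a0, loc0, log_scale0, a1, loc1, log_scale1); the indicator
    # picks which parameter triple applies to each observation, and the
    # log-scale parameterization keeps the scale positive while optimizing.
    p = np.where(g[:, None] == 0, theta[:3], theta[3:])
    return -np.sum(stats.skewnorm.logpdf(x, p[:, 0], loc=p[:, 1],
                                         scale=np.exp(p[:, 2])))

res = optimize.minimize(negloglik6, np.tile([1.0, 0.0, 0.0], 2),
                        args=(x, g), method="Nelder-Mead",
                        options={"maxiter": 5000, "maxfev": 5000})
print(res.x.reshape(2, 3))  # one (shape, loc, log-scale) row per sample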

Doesn't this increase the overall computational complexity? If so, approximately by how much? More relevantly, about how much more time would this take?

Thank you.
From: Peter Perkins
On 4/13/2010 3:03 AM, Anna Wozniak wrote:
>
> I am conducting 3-parameter ML estimation on two separate data samples
> with the intent of comparing the parameters of the different samples.
>
> A colleague suggested that rather than running two separate estimates
> (one on each sample), I should redo the likelihood to contain 3x2=6
> parameters, using indicators for sample membership, and run a single
> estimate on the pooled sample.
>
> Doesn't this increase the overall computational complexity? If so,
> approximately by how much? More relevantly, about how much more time
> would this take?

If your combined model is really fully stratified, i.e. you are estimating all three parameters separately for each set of data, then you don't really gain anything by combining the data. But your goal is to compare parameters across the two data sets, and that's where the benefit comes in. One way to do that would be to start with a single set of three parameters estimated across both sets of data, i.e. a fully pooled model, and then fit a model that estimates one of the parameters differently for the two data sets. Then use a likelihood ratio test (or AIC, or whatever) between the two models to determine whether that parameter should be stratified. And so on.
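If it helps, here is a rough sketch of that workflow, reusing the placeholder skew-normal family and fake data from the sketch upthread (none of this is from your actual problem). The pooled model shares all three parameters; the alternative stratifies only the location, so the two nested models differ by one degree of freedom:

import numpy as np
from scipy import stats, optimize

# Same fake pooled data and 0/1 membership indicator as in the sketch upthread.
rng = np.random.default_rng(0)
x0 = stats.skewnorm.rvs(2.0, loc=0.0, scale=1.0, size=200, random_state=rng)
x1 = stats.skewnorm.rvs(4.0, loc=0.5, scale=1.2, size=200, random_state=rng)
x = np.concatenate([x0, x1])
g = np.concatenate([np.zeros(len(x0), int), np.ones(len(x1), int)])

def negloglik_pooled(theta, x):
    # theta = (a, loc, log_scale): one parameter triple shared by both samples
    a, loc, log_scale = theta
    return -np.sum(stats.skewnorm.logpdf(x, a, loc=loc, scale=np.exp(log_scale)))

def negloglik_strat_loc(theta, x, g):
    # theta = (a, loc0, loc1, log_scale): only the location is stratified
    a, loc0, loc1, log_scale = theta
    loc = np.where(g == 0, loc0, loc1)
    return -np.sum(stats.skewnorm.logpdf(x, a, loc=loc, scale=np.exp(log_scale)))

fit_pooled = optimize.minimize(negloglik_pooled, [1.0, np.mean(x), 0.0],
                               args=(x,), method="Nelder-Mead")
fit_strat = optimize.minimize(negloglik_strat_loc,
                              [1.0, np.mean(x), np.mean(x), 0.0],
                              args=(x, g), method="Nelder-Mead")

# Likelihood ratio test for the one extra parameter.
lrt = 2.0 * (fit_pooled.fun - fit_strat.fun)
pval = stats.chi2.sf(lrt, df=1)
print(f"LRT = {lrt:.2f}, p = {pval:.3g}")

Repeat with the shape or the scale stratified instead, or step up to the fully stratified six-parameter model; that's the "and so on" part.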