From: Ryan on
On Dec 2, 5:47 pm, stringplaye...(a)YAHOO.COM (Dale McLerran) wrote:
> Ryan,
>
> I looked at the code from the Guo paper to see how they
> handled the latent class part of their model.  It is really
> pretty straightforward.  For a latent class analysis without
> any predictor variables and with just three binary response
> variables X1, X2, and X3, syntax would be as follows:
>
> proc nlmixed data=mydata tech=quanew lis=2 method=gauss
>              maxiter=1000 gconv=.00000000001 fconv=.00000000001;
>   parms
>     /* Parameter which expresses probability of */
>     /* latent class 1 vs latent class 2         */
>     alpha1 = 0.5
>     /* If there are more latent classes, then need */
>     /* alpha1, alpha2, etc.  Number of alpha parms */
>     /* is one less than number of latent classes.  */
>
>     /* Parameters which express probability of Xi=1 | latent class 1 */
>     bpi11 = 1 bpi21 = 1 bpi31 = 0
>
>     /* Parameters which express probability of Xi=1 | latent class 2 */
>     bpi12 = 0.5 bpi22 = 0.5 bpi32 = -0.5
>
>     /* Need bpi13, bpi23, etc. if three latent classes */
>   ;
>
>   bounds -6 <= bpi11 - bpi52 <= 6;
>
>   **** latent class part;
>   pi11 = 1/(1+exp(-bpi11)); pi12 = 1/(1+exp(-bpi12));
>   pi21 = 1/(1+exp(-bpi21)); pi22 = 1/(1+exp(-bpi22));
>   pi31 = 1/(1+exp(-bpi31)); pi32 = 1/(1+exp(-bpi32));
>   prod11 = (pi11**x1)*(1-pi11)**(1-x1);
>      prod12 = (pi12**x1)*(1-pi12)**(1-x1);
>   prod21 = (pi21**x2)*(1-pi21)**(1-x2);
>      prod22 = (pi22**x2)*(1-pi22)**(1-x2);
>   prod31 = (pi31**x3)*(1-pi31)**(1-x3);
>      prod32 = (pi32**x3)*(1-pi32)**(1-x3);
>
>   /* model probability of each latent class */
>   eta1=exp(alpha1)/(1+exp(alpha1));
>   eta2=1/(1+exp(alpha1));
>
>   /* Construct likelihood and log likelihood */
>   l_latclass=eta1*prod11*prod21*prod31 +
>              eta2*prod12*prod22*prod32;
>   ll_latclass = log(l_latclass);
>
>   /* Maximize the latent class log likelihood */
>   model ll_latclass ~ general(ll_latclass);
> run;
>
> Note that the initial parameter values in the code above
> were specific to the problem presented in the Guo paper.  It
> is important that at least one of the parameters expressing
> the probability of each variable X1, X2, and X3 given that
> the vector of responses is from latent class 1 (bpi11,
> bpi21, and bpi31) be different from the corresponding
> parameter expressing the probability of a response variable
> given that the vector of responses is from latent class 2
> (bpi12, bpi22, and bpi32).
>
> If you are assuming two latent classes, then you might
> initialize the parameters through the following process:
>
>   1) Compute the probability of each binary response
>      p{i}, i=1, 2, 3
>
>   2) Compute the logit of each probability
>      logit(p{i}) = log( p{i} / (1 - p{i}) )
>
>   3) Assign bpi{i}1 = logit(p{i}) + 0.25
>             bpi{i}2 = logit(p{i}) - 0.25
>
> If the correlation of X{1} with X{j} is negative, then
> reverse the addition/subtraction operations in step 3
> for j=2,3,...,J.
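>
> For instance, with a dataset named mydata and 0/1 variables x1-x3
> (the names here are just placeholders), starting values along these
> lines could be computed with a short sketch like the following:
>
>   proc means data=mydata noprint;
>     var x1 x2 x3;
>     output out=pbar mean=p1 p2 p3;
>   run;
>
>   data starts;
>     set pbar;
>     array p{3} p1-p3;
>     do i = 1 to 3;
>       logit_p = log(p{i} / (1 - p{i}));  /* step 2 */
>       bpi_class1 = logit_p + 0.25;       /* candidate bpi{i}1 */
>       bpi_class2 = logit_p - 0.25;       /* candidate bpi{i}2 */
>       output;
>     end;
>     keep i logit_p bpi_class1 bpi_class2;
>   run;
>
>   proc print data=starts; run;
>
> The printed values can then be typed into the PARMS statement.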
>
> Now, you have random person effects.  I presume that the
> person effects would operate on the latent class
> probability model, and not on the probabilities of each
> observed response.  So we could add random person effects
> as follows:
>
> proc nlmixed data=mydata tech=quanew lis=2 method=gauss
>              maxiter=1000 gconv=.00000000001 fconv=.00000000001;
>   parms
>     /* Parameter which expresses probability of */
>     /* latent class 1 vs latent class 2         */
>     alpha1 = 0.5
>     /* If there are more latent classes, then need */
>     /* alpha1, alpha2, etc.  Number of alpha parms */
>     /* is one less than number of latent classes.  */
>     log_Valpha1 = -4
>
>     /* Parameters which express probability of Xi=1 | latent class 1 */
>     bpi11 = 1 bpi21 = 1 bpi31 = 0
>
>     /* Parameters which express probability of Xi=1 | latent class 2 */
>     bpi12 = 0.5 bpi22 = 0.5 bpi32 = -0.5
>
>     /* Need bpi13, bpi23, etc. if three latent classes */
>   ;
>
>   bounds -6 <= bpi11 - bpi52 <= 6;
>
>   **** latent class part;
>   pi11 = 1/(1+exp(-bpi11)); pi12 = 1/(1+exp(-bpi12));
>   pi21 = 1/(1+exp(-bpi21)); pi22 = 1/(1+exp(-bpi22));
>   pi31 = 1/(1+exp(-bpi31)); pi32 = 1/(1+exp(-bpi32));
>   prod11 = (pi11**x1)*(1-pi11)**(1-x1);
>      prod12 = (pi12**x1)*(1-pi12)**(1-x1);
>   prod21 = (pi21**x2)*(1-pi21)**(1-x2);
>      prod22 = (pi22**x2)*(1-pi22)**(1-x2);
>   prod31 = (pi31**x3)*(1-pi31)**(1-x3);
>      prod32 = (pi32**x3)*(1-pi32)**(1-x3);
>
>   /* model probability of each latent class.  */
>   /* Note that we add random effect u1 to alpha1.  U1 is */
>   /* person random effect on probability of latent class */
>   /* 1.  If there are more latent classes, then would    */
>   /* need terms u2, u3, etc.                             */
>   eta1=exp(alpha1 + u1)/(1+exp(alpha1 + u1));
>   eta2=1/(1+exp(alpha1 + u1));
>
>   /* Construct likelihood and log likelihood */
>   l_latclass=eta1*prod11*prod21*prod31 +
>              eta2*prod12*prod22*prod32;
>   ll_latclass = log(l_latclass);
>
>   /* Maximize the latent class log likelihood */
>   model ll_latclass ~ general(ll_latclass);
>
>   /* Model the variance of the latent class probability */
>   random u1 ~ normal([0], [exp(2*log_Valpha1)]) subject=person;
> run;
>
> Let me know how this goes.  It is an interesting problem.
>
> Dale
>
> ---------------------------------------
> Dale McLerran
> Fred Hutchinson Cancer Research Center
> mailto: dmclerra(a)NO_SPAMfhcrc.org
> Ph:  (206) 667-2926
> Fax: (206) 667-5977
> ---------------------------------------
>
> --- On Tue, 12/1/09, Ryan <ryan.andrew.bl...(a)GMAIL.COM> wrote:
>
>
>
>
>
> > Hi Oliver,
>
> > Thank you for providing even more possibilities. Yes, this
> > might be a
> > good starting point. I also found an article that discussed
> > a fairly
> > more complex LCA model (including latent predictors) using
> > NLMIXED
> > that might help me as well. In case anybody's interested,
> > it's located
> > here:
>
> >http://biostatistics.oxfordjournals.org/cgi/content/full/7/1/145
>
> > I have not reviewed this article very closely, but at first
> > glance I
> > notice that the NLMIXED code in the article uses a
> > dependent variable,
> > called "Dummy" which they state is a place holder.
>
> > If I figure this out, I will definitely post back the
> > solution.
>
> > Thanks again!
>
> > Ryan

Hi, Dale:

As always, thank you so much for responding and providing code with
such detailed explanations.

I think I have enough information now to take a stab at coding up the
actual model, which comprises several classes and response variables,
along with a random effect. It should be a simple extension of the
example you coded up.

I will post my attempt at coding up the actual model in a new thread
tomorrow. I'd be very grateful if you'd consider taking a look at it,
if you have the time.

Thanks again!

Ryan
From: Oliver Kuss on
On Dec 3, 3:02 am, Ryan <ryan.andrew.bl...(a)gmail.com> wrote:
> [quoted text clipped: Dale's NLMIXED latent class code and Ryan's
> reply, both shown in full earlier in this thread]

Ryan,
Sorry for wasting your time while waiting for Dale's brilliant
code ... ;-))

I checked Dale's code without the random effect against the first
example of PROC LCA (Dale, this was not to check up on you, just to
make sure that I understand what's going on ;-))) and it worked fine. I
didn't even need a BOUNDS statement. Moreover, D. Miglioretti's %LCR
macro also yielded the same results.

Oliver


From: Dale McLerran on
Ryan,

There are some additional elements that one might want from
a latent class analysis besides knowing the relative
frequencies of each latent class. One might want the
posterior probability that an observation with a certain
manifest variable (x1, x2, x3, etc.) combination is in
latent class 1 vs latent class 2. These posterior
probabilities can be obtained directly in NLMIXED using
ESTIMATE statements.

Just by way of demonstration, I extracted data from John
Uebersax's website (http://www.john-uebersax.com/stat/index.htm
and then click on "Latent Class Analysis FAQ") and ran the
fixed effect latent class model that I supplied yesterday.
I added some ESTIMATE statements to the end of the code.
These ESTIMATE statements provide the posterior probability
that an observation with specified manifest variable
combination is from latent class 1. Note that I have
only fit a model which has two latent classes, so the
probability of an observation with the same manifest
variable combination being in latent class 2 is obtained
directly by subtracting the probability of latent class 1
from 1.
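
In terms of the quantities computed in the code below, each of the
conditional ESTIMATE statements simply evaluates Bayes' rule,

  P(LC=1 | x1,x2,x3) = eta1*P(x1|1)*P(x2|1)*P(x3|1) /
     ( eta1*P(x1|1)*P(x2|1)*P(x3|1) + eta2*P(x1|2)*P(x2|2)*P(x3|2) ),

where P(xi=1|1) = pi{i}1, P(xi=0|1) = 1-pi{i}1, and similarly pi{i}2
for latent class 2.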

data mydata;
input y1 y2 y3 freq;
x1 = y1-1;
x2 = y2-1;
x3 = y3-1;
cards;
1 1 1 143
1 1 2 58
1 2 1 22
1 2 2 15
2 1 1 12
2 1 2 32
2 2 1 55
2 2 2 245
;



proc nlmixed data=mydata tech=quanew lis=2 method=gauss maxiter=1000 gconv=.00000000001 fconv=.00000000001;
parms
/* Parameter which expresses probability of */
/* latent class 1 vs latent class 2 */
alpha1 = 0.5
/* If there are more latent classes, then need */
/* alpha1, alpha2, etc. Number of alpha parms */
/* is one less than number of latent classes. */

/* Parameters which express probability of Xi=1 | latent class 1 */
bpi11 = 1 bpi21 = 1 bpi31 = 0

/* Parameters which express probability of Xi=1 | latent class 2 */
bpi12 = 0.5 bpi22 = 0.5 bpi32 = -0.5

/* Need bpi13, bpi23, etc. if three latent classes */
;

/* Note that I have commented out the BOUNDS statement which */
/* was specified in the Guo paper. This BOUNDS statement */
/* produced an error. Rather than working to fix the BOUNDS */
/* statement, I have simply commented it out for now. */
* bounds -6 <= bpi11 - bpi52 <= 6;

**** latent class part;
pi11 = 1/(1+exp(-bpi11));
pi12 = 1/(1+exp(-bpi12));
pi21 = 1/(1+exp(-bpi21));
pi22 = 1/(1+exp(-bpi22));
pi31 = 1/(1+exp(-bpi31));
pi32 = 1/(1+exp(-bpi32));
prod11 = (pi11**x1)*(1-pi11)**(1-x1);
prod12 = (pi12**x1)*(1-pi12)**(1-x1);
prod21 = (pi21**x2)*(1-pi21)**(1-x2);
prod22 = (pi22**x2)*(1-pi22)**(1-x2);
prod31 = (pi31**x3)*(1-pi31)**(1-x3);
prod32 = (pi32**x3)*(1-pi32)**(1-x3);

/* model probability of each latent class */
eta1=exp(alpha1)/(1+exp(alpha1));
eta2=1/(1+exp(alpha1));

/* Construct likelihood and log likelihood */
l_latclass=eta1*prod11*prod21*prod31 +
eta2*prod12*prod22*prod32;
ll_latclass = freq*log(l_latclass);

/* Maximize the latent class log likelihood */
model ll_latclass ~ general(ll_latclass);
estimate "P(LC=1)" eta1;
estimate "P(LC=1|x1=0,x2=0,x3=0)"
(eta1*(1-pi11)*(1-pi21)*(1-pi31)) /
(eta1*(1-pi11)*(1-pi21)*(1-pi31) + eta2*(1-pi12)*(1-pi22)*(1-pi32));
estimate "P(LC=1|x1=0,x2=0,x3=1)"
(eta1*(1-pi11)*(1-pi21)*(pi31)) /
(eta1*(1-pi11)*(1-pi21)*(pi31) + eta2*(1-pi12)*(1-pi22)*(pi32));
estimate "P(LC=1|x1=0,x2=1,x3=0)"
(eta1*(1-pi11)*(pi21)*(1-pi31)) /
(eta1*(1-pi11)*(pi21)*(1-pi31) + eta2*(1-pi12)*(pi22)*(1-pi32));
estimate "P(LC=1|x1=0,x2=1,x3=1)"
(eta1*(1-pi11)*(pi21)*(pi31)) /
(eta1*(1-pi11)*(pi21)*(pi31) + eta2*(1-pi12)*(pi22)*(pi32));
estimate "P(LC=1|x1=1,x2=0,x3=0)"
(eta1*(pi11)*(1-pi21)*(1-pi31)) /
(eta1*(pi11)*(1-pi21)*(1-pi31) + eta2*(pi12)*(1-pi22)*(1-pi32));
estimate "P(LC=1|x1=1,x2=0,x3=1)"
(eta1*(pi11)*(1-pi21)*(pi31)) /
(eta1*(pi11)*(1-pi21)*(pi31) + eta2*(pi12)*(1-pi22)*(pi32));
estimate "P(LC=1|x1=1,x2=1,x3=0)"
(eta1*(pi11)*(pi21)*(1-pi31)) /
(eta1*(pi11)*(pi21)*(1-pi31) + eta2*(pi12)*(pi22)*(1-pi32));
estimate "P(LC=1|x1=1,x2=1,x3=1)"
(eta1*(pi11)*(pi21)*(pi31)) /
(eta1*(pi11)*(pi21)*(pi31) + eta2*(pi12)*(pi22)*(pi32));
run;

---------------------------------------
Dale McLerran
Fred Hutchinson Cancer Research Center
mailto: dmclerra(a)NO_SPAMfhcrc.org
Ph: (206) 667-2926
Fax: (206) 667-5977
---------------------------------------
From: Ryan on
On Dec 3, 3:44 pm, stringplaye...(a)YAHOO.COM (Dale McLerran) wrote:
> [quoted text clipped: Dale's ESTIMATE-statement example, shown in
> full in his post above]

Hi Dale,

Thank you for these estimate statements. I am certainly interested in
obtaining the probability of each observation falling into each
latent class. I believe I follow the logic of the estimate statements.
I'll try to extend this to my actual model posted in a new thread
today.

This is very helpful!

Ryan
From: Dale McLerran on
--- On Thu, 12/3/09, Oliver Kuss <Oliver.Kuss(a)MEDIZIN.UNI-HALLE.DE> wrote:

> Ryan,
> Sorry for wasting your time while waiting for Dave's
> brilliant code ... ;-))
>
> I checked Dale's code without the random effect (Dale, this was not
> for controlling you, just to make sure that I understand what's going
> on ;-))) with the first example of PROC LCA and it worked fine. I
> didn't even need a BOUNDS-Statement. Moreover, the %LCR-macro of D.
> Miglioretti also yielded the same results.
>
> Oliver
>

Oliver,

I'm glad that you did! I posted without testing, so I could
not guarantee that I got everything correct.

Just out of curiosity, how does the NLMIXED version do with
respect to computational efficiency? I have not yet gotten
access to the site at PSU to enable me to use PROC LCA.
Thus, if you wouldn't mind reporting on efficiency, that
would be of interest.

Also, I would note that computational efficiency can be gained
for problems in which the manifest variables can be
aggregated. I posted again today with code that operates
on aggregated data found on John Uebersax's website.
Basically, for aggregated data, all that is necessary is
to multiply the log likelihood by the frequency of
occurrence of the manifest vars combination.
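
For example, if the raw data were one record per subject with 0/1
variables x1-x3, a frequency dataset in that form could be built with
something along these lines (dataset names here are just placeholders):

proc freq data=raw noprint;
  /* one output record per observed response pattern, with its count */
  tables x1*x2*x3 / out=mydata(rename=(count=freq));
run;

The NLMIXED step then weights each pattern's log likelihood by freq,
as in the ll_latclass = freq*log(l_latclass) line posted earlier.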

Dale

---------------------------------------
Dale McLerran
Fred Hutchinson Cancer Research Center
mailto: dmclerra(a)NO_SPAMfhcrc.org
Ph: (206) 667-2926
Fax: (206) 667-5977
---------------------------------------