From: Robert Israel on 7 Jul 2010 11:07

Ray Vickson <RGVickson(a)shaw.ca> writes:

> On Jul 6, 7:02 pm, Robert Israel <isr...(a)math.MyUniversitysInitials.ca> wrote:
> > Robert Israel <isr...(a)math.MyUniversitysInitials.ca> writes:
> > > WizardOfOzzz <eric_pen...(a)hotmail.com> writes:
> > > > Hey all,
> > > > I have a problem where I have two 1D distributions represented with
> > > > mean/variance.  I need to calculate a third distribution that is the
> > > > minimum of a random choice from both.
> > > If X and Y are independent random variables with CDF F_X and F_Y
> > > respectively, Z = min(X,Y) has CDF F_Z(z) = 1 - (1 - F_X(z))(1 - F_Y(z)).
> > > If X and Y have densities f_X and f_Y, then Z has density
> > > f_Z(z) = (F_Z)'(z) = f_X(z) (1 - F_Y(z)) + f_Y(z) (1 - F_X(z)).
> > > > I also need to represent the resulting distribution using only
> > > > mean/variance, even if that isn't the best approximation.
> > > > Any ideas?  I figure there should be a closed form solution from the
> > > > original means/variances to calculate the minimum mean/variance.
> > > No, the mean and variance of Z do not depend only on the means and
> > > variances of X and Y.
> > > But by "represented with mean/variance" I suppose you're assuming
> > > distributions from a specific two-parameter family, e.g. normal
> > > distributions.  In that case the mean m_z and variance v_z of Z are
> > > rather complicated functions of the means and variances m_x, v_x, m_y,
> > > v_y of X and Y, involving some non-elementary integrals.
> > For simplicity, suppose m_x = 0 and v_x = 1 (to which the general case
> > can be reduced by linear transformation).  Then a series expansion around
> > m_y = 0, v_y = 1 is
> >
> > m_z = -1/sqrt(pi) - (v_y - 1)/(4 sqrt(pi)) + m_y/2 + (v_y - 1)^2/(32 sqrt(pi))
> >       - m_y^2/(4 sqrt(pi)) + m_y^2 (v_y - 1)/(16 sqrt(pi))
> >       - (v_y - 1)^3/(128 sqrt(pi)) - 3 m_y^2 (v_y - 1)^2/(128 sqrt(pi))
> >       + 5 (v_y - 1)^4/(2048 sqrt(pi)) + m_y^4/(96 sqrt(pi)) + ...
> >
> > v_z = 1 - 1/pi + (pi - 1)(v_y - 1)/(2 pi) + (pi - 2) m_y/(2 sqrt(pi))
> >       + m_y (v_y - 1)/(2 sqrt(pi)) + m_y (v_y - 1)^2/(8 sqrt(pi))
> >       - m_y^4/(24 pi) + m_y^3 (v_y - 1)/(24 sqrt(pi))
> >       - 3 m_y (v_y - 1)^3/(64 sqrt(pi)) + ...
> >
> > and this should provide a fairly good approximation if m_y is close to 0
> > and v_y is close to 1.
>
> How did you get these expressions?

Maple 14.
with(Statistics): assume(v_y > 0):
X := RandomVariable(Normal(0, 1));
Y := RandomVariable(Normal(mu_y, sqrt(v_y)));
Z := piecewise(X < Y, X, Y);
mu_z := Mean(Z);
mtaylor(mu_z, [mu_y = 0, v_y = 1]);
v_z := Mean(Z^2) - mu_z^2;
mtaylor(v_z, [mu_y = 0, v_y = 1]);

Actually, you can also do a one-variable series expansion in powers of mu_y:

map(simplify, series(mu_z, mu_y));

    - sqrt(2) sqrt(1 + v_y)/(2 sqrt(Pi)) + mu_y/2
    - sqrt(2) mu_y^2/(4 sqrt(Pi) sqrt(1 + v_y))
    + sqrt(2) mu_y^4/(48 (1 + v_y)^(3/2) sqrt(Pi)) + O(mu_y^6)

map(simplify, series(v_z, mu_y));

    (Pi v_y - v_y + Pi - 1)/(2 Pi)
    - sqrt(2) (v_y - 1) mu_y/(2 sqrt(1 + v_y) sqrt(Pi))
    + (Pi - 2) mu_y^2/(4 Pi)
    + sqrt(2) (v_y - 1) mu_y^3/(12 (1 + v_y)^(3/2) sqrt(Pi))
    - mu_y^4/(12 (1 + v_y) Pi)
    - sqrt(2) (v_y - 1) mu_y^5/(80 (1 + v_y)^(5/2) sqrt(Pi)) + O(mu_y^6)
-- 
Robert Israel                          israel(a)math.MyUniversitysInitials.ca
Department of Mathematics              http://www.math.ubc.ca/~israel
University of British Columbia         Vancouver, BC, Canada
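As an aside (not from the thread): for independent normals the "non-elementary integrals" mentioned above reduce to the standard normal CDF, so m_z and v_z can also be written in closed form in terms of Phi and phi using the standard identities for the moments of the minimum of two independent normals. The sketch below is a hedged illustration; the procedure name MinMoments and the test values are invented here, and the result should agree with the series expansions near m_y = 0, v_y = 1.

Phi := z -> (1 + erf(z/sqrt(2)))/2:        # standard normal CDF
phi := z -> exp(-z^2/2)/sqrt(2*Pi):        # standard normal density

MinMoments := proc(m_x, v_x, m_y, v_y)
  local theta, a, EZ, EZ2;
  theta := sqrt(v_x + v_y);                # standard deviation of Y - X
  a := (m_y - m_x)/theta;
  EZ  := m_x*Phi(a) + m_y*Phi(-a) - theta*phi(a);
  EZ2 := (m_x^2 + v_x)*Phi(a) + (m_y^2 + v_y)*Phi(-a)
         - (m_x + m_y)*theta*phi(a);
  [EZ, EZ2 - EZ^2];                        # [mean, variance] of min(X,Y)
end proc:

# Illustrative check near the expansion point: exact value vs. truncated series.
evalf(MinMoments(0, 1, 0.1, 1.05));
evalf(-1/sqrt(Pi) - 0.05/(4*sqrt(Pi)) + 0.1/2
      + 0.05^2/(32*sqrt(Pi)) - 0.1^2/(4*sqrt(Pi)));    # truncated m_z series

The two values of the mean should agree to several decimal places, since m_y = 0.1 and v_y = 1.05 are close to the expansion point.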
From: Ray Vickson on 7 Jul 2010 14:59

On Jul 7, 8:07 am, Robert Israel <isr...(a)math.MyUniversitysInitials.ca> wrote:
> [the series expansions for m_z and v_z and the Maple 14 code, quoted in
> full above]
OK: for normally-distributed r.v.s we can do it. However, the OP did not
specify any particular distributions (although he/she may have meant normal
r.v.s without saying so).

R.G. Vickson
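For the normal case, a quick Monte Carlo sanity check of those expansions is easy to sketch in Maple. This is a hedged illustration, not part of the thread; the sample size N and the parameter values mY, vY are arbitrary choices (named to avoid clashing with the symbolic mu_y, v_y used above).

with(Statistics):
N := 10^5:  mY := 0.2:  vY := 1.1:          # illustrative sample size and parameters
Xs := Sample(Normal(0, 1), N):
Ys := Sample(Normal(mY, sqrt(vY)), N):
Zs := Vector(N, i -> min(Xs[i], Ys[i])):    # samples of Z = min(X, Y)
evalf([Mean(Zs), Variance(Zs)]);            # compare with the series above at mu_y = 0.2, v_y = 1.1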
From: Robert Israel on 7 Jul 2010 15:18

Ray Vickson <RGVickson(a)shaw.ca> writes:

| OK: for normally-distributed r.v.s we can do it. However, the OP did not
| specify any particular distributions (although he/she may have meant
| normal r.v.s without saying so).

Yes, I did mention "normal distributions", but maybe I didn't make it clear
enough that this was what I was calculating it for.
-- 
Robert Israel                          israel(a)math.MyUniversitysInitials.ca
Department of Mathematics              http://www.math.ubc.ca/~israel
University of British Columbia         Vancouver, BC, Canada
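Since the OP never pinned the distributions down, it is worth noting that the density formula from the start of the thread, f_Z(z) = f_X(z)(1 - F_Y(z)) + f_Y(z)(1 - F_X(z)), gives the mean and variance of Z = min(X,Y) for any pair of independent distributions by integration. The sketch below is a hedged illustration with an arbitrarily chosen non-normal pair (X exponential with mean 1, Y uniform on [0,2]); it is a standalone example and reuses the names X and Y.

with(Statistics):
X := RandomVariable(Exponential(1)):        # illustrative non-normal choices
Y := RandomVariable(Uniform(0, 2)):
f_Z := unapply(PDF(X, z)*(1 - CDF(Y, z)) + PDF(Y, z)*(1 - CDF(X, z)), z):
mZ := int(z*f_Z(z), z = -infinity .. infinity):       # mean of min(X,Y)
vZ := int(z^2*f_Z(z), z = -infinity .. infinity) - mZ^2:  # variance of min(X,Y)
evalf([mZ, vZ]);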