From: Samuel Edwards on 28 Jul 2010 16:52

"Matt J " <mattjacREMOVE(a)THISieee.spam> wrote in message <i2q461$76k$1(a)fred.mathworks.com>...
> "Samuel Edwards" <DJeter1234(a)AOL.com> wrote in message <i2pu5h$t78$1(a)fred.mathworks.com>...
> >
> > It's a numerical assessment of a 2-D integral defined over 2 regions, where the bounds of the second region are zeros of a function that 1) has a discontinuous derivative and 2) has a standard normal cumulative density and probability density term involving the coefficients.
> ===============
>
> OK, that does sound pretty hard, but what about my other question? Why write the function in terms of expressions 1/variance? This will make the function insensitive to incremental changes in large variance values. Why not define alternative parameters
>
> p1,2,3 = 1/variance1,2,3
>
> and express the function in terms of p1,2,3?

Sorry if that was unclear: the objective function m-file takes 1/variance as its input, but I define an anonymous function that takes the s.d. as input and feed that to the solver, so fmincon should be taking its steps/gradients/Hessians in terms of s.d.

There may be a small amount of noise in the function because the integral and its bounds are only approximations, although if so it isn't visible when I plot the area around a "minimum." I think poor scaling is the more likely culprit, but because I need this program to run automatically, I can't think of a better way to rescale than to minimize sum(([1;1;1;1;1].*previous best guess - actual).^2). It needs to run automatically because the actual data has a somewhat low n, and I'm trying to see what the optimization recovers from other data sets that could have been generated by the "true" coefficients.
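For what it's worth, here is a minimal sketch of the wrapper I mean, assuming a hypothetical objective objfun(p) that expects precisions p = 1./variances; the anonymous function converts standard deviations to precisions, so fmincon works entirely in s.d. space:

wrappedObj = @(sd) objfun(1 ./ sd.^2);   % convert s.d. -> 1/variance inside the wrapper

sd0 = [1; 1; 1];                         % hypothetical starting standard deviations
lb  = 1e-6 * ones(size(sd0));            % keep the standard deviations strictly positive
ub  = [];                                % no upper bound

opts = optimset('Display', 'iter', 'TolFun', 1e-8, 'TolX', 1e-8);
[sdHat, fval] = fmincon(wrappedObj, sd0, [], [], [], [], lb, ub, [], opts);

The starting point, bounds, and tolerances above are placeholders, not the values I actually use.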