From: Jorge Lagos on 19 Apr 2010 11:54

Hello all!

In my research I have been using the fmincon function with the SQP algorithm to successfully optimize a nonlinear function with 3 independent variables and only bound constraints, and a reviewer of my work has asked me to explain how exactly the SQP algorithm deals with nonlinear objective functions. I have read the related MATLAB documentation, but the explanations and references therein (besides being well beyond my ECE background) elaborate on the details of the internal optimization phases, whereas I believe my reviewer is only concerned with how the nonlinearity of my objective function is addressed.

I would be grateful if someone could shed some light on this particular aspect of the SQP algorithm, or provide any references where I can search for the answer. Thanks in advance for any help.

Regards,

Jorge.

P.S. Wikipedia states: "If the problem is unconstrained, then the [SQP] method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions [...]". Does any of the above apply to the case of having only BOUND constraints (as in my problem)?
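For concreteness, a call of the kind described above might look something like the following sketch (the objective function, starting point, and bounds here are placeholders, not the actual problem from the post; Optimization Toolbox is assumed):

fun = @(x) (x(1)-1)^2 + x(2)^4 + sin(x(3));   % placeholder smooth nonlinear objective of 3 variables
x0  = [0.5; 0.5; 0.5];                        % placeholder starting point
lb  = [0; 0; 0];                              % lower bounds
ub  = [2; 2; 2];                              % upper bounds
opts = optimset('Algorithm','sqp');           % select fmincon's SQP algorithm
[xopt, fval] = fmincon(fun, x0, [], [], [], [], lb, ub, [], opts);

Only the lb and ub arguments are supplied as constraints, matching the bound-constrained setup described in the question.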
From: Alan Weiss on 19 Apr 2010 14:33

The simplest explanation is probably this: the SQP algorithm approximates the objective function as a quadratic function and, assuming the quadratic turns out to be positive definite, attempts to go to the minimizing point. In the presence of bound constraints, the algorithm attempts to figure out which bounds are "active" and uses appropriate Lagrange multipliers to turn the problem into a linear problem (the problem of finding the minimum of a quadratic is linear, since you just have to take the gradient, a linear function, and set it to zero).

There are many more details, such as how the algorithm makes the quadratic approximation (it approximates both the local gradient and a Hessian). But for your purposes, perhaps the first paragraph suffices. For more details, see http://www.mathworks.com/access/helpdesk/help/toolbox/optim/ug/brnoxzl.html#bsgppl4

Alan Weiss
MATLAB mathematical toolbox documentation
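A toy illustration of the quadratic-model idea described above (this is not fmincon's actual internals; the gradient and Hessian values are made up for the sketch):

% Local quadratic model at the current point:
%   q(p) = f + g'*p + (1/2)*p'*H*p
% Its gradient is g + H*p, so setting it to zero is the linear system H*p = -g.
g = [2; -1; 0.5];             % example local gradient (made-up numbers)
H = [4 0 0; 0 2 0; 0 0 1];    % example positive definite Hessian approximation
p = -(H \ g);                 % step that zeros the gradient of the quadratic model
% With only bound constraints, variables sitting at active bounds are held
% fixed and this linear solve is carried out over the remaining free variables.

This is also why the Wikipedia remark quoted in the question carries over to the bound-constrained case: away from any active bounds, the step above is just a Newton step on the objective.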