There is a problem in the Optimization package's NLPSolve routine (nonlinear programming) which affects multivariate problems given in so-called Operator form, that is, when the objective and constraints are provided as procedures rather than as expressions. Below is a workaround for this.

Usually, an objective (or constraint) gets processed by Optimization using automatic differentiation tools (from codegen or CodeGeneration) to produce an associated gradient (or Jacobian) procedure. But those tools are not all-powerful, and when they fail the NLPSolve routine has evident difficulty computing the derivatives numerically on its own.

Below is an example of a workaround, in which one supplies one's own gradient routine so as to sidestep any problems with NLPSolve's internal numerical derivative computation. Doing so requires switching to Optimization's so-called Matrix form of input. The supplied gradient routine uses fdiff to compute the derivatives numerically.

If constraints were also present which could not be successfully processed by the automatic differentiation tools, then they too could be handled, by supplying a similar procedure for the 'constraintjacobian' option.
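For illustration, such a constraint Jacobian procedure might look roughly like the following sketch, which parallels the gradient procedure given further below. (The constraint g and the name nlcjacobian are merely made up here, and a full Matrix-form call would also need the constraint-evaluating procedure and constraint counts, which I don't show.)

# a hypothetical operator-form constraint, say g(a,b) <= 0
g := proc(a,b) a^2 + b^2 - 20; end proc:

# fill row 1 of the Matrix J with the numerical partial derivatives
# of the single constraint g at the point X, using fdiff
nlcjacobian := proc( X::Vector, J::Matrix )
J[1,1] := fdiff( g, [1], [X[1],X[2]] );
J[1,2] := fdiff( g, [2], [X[1],X[2]] );
NULL;
end proc: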

In the example below, I have deliberately added some irrelevant subs & BesselJ stuff. The only purpose of that is to forcibly make the procedure f too hard for the automatic differentiation tools to handle, so that it can illustrate the general problem.

> f:=proc(a,b) subs(A=a,B=b,BesselJ(0,A+B)); 12-(a-3)^2-(b-3)^2; end proc:
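One can probe the automatic differentiation tools directly on such a procedure, to see whether they manage to produce a usable gradient procedure from it. For this particular f, deliberately constructed as described above, don't expect anything usable to come back:

> codegen[GRADIENT]( f );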

> Optimization:-Maximize(f, initialpoint=[2,2]);
Warning, no iterations performed as initial point satisfies first-order conditions
                              [10., [2., 2.]]

That warning message above, about first-order conditions, is both untrue and indicative of this particular problem. (It seems reasonable to suppose that something is incorrectly computing the numerical derivatives as all-zero, leading to an erroneous conclusion about whether the objective is changing at that point.)
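One can also check directly that the gradient is not in fact zero at that initial point. The true partial derivative of f with respect to its first argument at (2,2) is -2*(2-3) = 2, and fdiff (which is used below in the workaround) agrees with that:

> fdiff( f, [1], [2,2] );  # approximately 2., certainly not zero
> fdiff( f, [2], [2,2] );  # likewise approximately 2., by symmetry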

So let's produce a routine to handle the numerical derivatives, using fdiff. This requires using Optimization's "Matrix form" for the arguments, and so a corresponding new form of the objective is also given.

objf := proc(V::Vector)
# Matrix-form objective: evaluate the original f at the entries of V
f(V[1],V[2]);
end proc:

objfgradient := proc(X::Vector,G::Vector)
# fill G in-place with the numerical partial derivatives of f at the point X
G[1] := fdiff( f, [1], [X[1],X[2]] );
G[2] := fdiff( f, [2], [X[1],X[2]] );
NULL;
end proc:

Optimization:-NLPSolve( 2, objf, 'objectivegradient'=objfgradient,
'initialpoint'=Vector([2, 2]), 'method'='sqp', 'maximize' );
                 [12., [3.00000000000000, 3.00000000000000]]

It really shouldn't be too hard to automate this, by programmatically examining the original objective procedure to count its parameters, etc.
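For instance, something along the following lines might serve (a rough sketch, not carefully tested; the wrapper name OperatorNLPSolve is just made up here, and it assumes that the objective's parameters are all plain scalars):

OperatorNLPSolve := proc( F::procedure )
local n, objf, objfgrad;
# count the declared parameters of the operator-form objective F
n := nops( [ op(1, eval(F)) ] );
# Matrix-form objective: evaluate F at the entries of the Vector V
objf := proc( V::Vector ) local i;
F( seq( V[i], i=1..n ) );
end proc;
# Matrix-form gradient: fill G numerically using fdiff
objfgrad := proc( X::Vector, G::Vector ) local i;
for i from 1 to n do
G[i] := fdiff( F, [i], convert(X,list) );
end do;
NULL;
end proc;
# pass along any further options (initialpoint, method, maximize, etc)
Optimization:-NLPSolve( n, objf, ':-objectivegradient'=objfgrad, _rest );
end proc:

which could then be called in essentially the same way as before, e.g.

> OperatorNLPSolve( f, 'initialpoint'=Vector([2,2]), 'method'='sqp', 'maximize' );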

acer

