Question: Help with Optimization Workaround for Operator Form

Hello all,

In reference to Acer's workaround for the operator form of the optimization package, I'm having difficulty applying the method to an optimization problem with 7 variables. I'm hoping you can help me see what I'm doing wrong. This is what I've got coded, where EIG is a fairly long, complex procedure that outputs a float value:

> objf := proc (V::Vector)
      EIG(V[1], V[2], V[3], V[4], V[5], V[6], V[7])
  end proc;
> objfgradient := proc (X::Vector, G::Vector)
      G[1] := fdiff(EIG, [1], [X[1], X[7]]);
      G[2] := fdiff(EIG, [2], [X[1], X[7]]);
      G[3] := fdiff(EIG, [3], [X[1], X[7]]);
      G[4] := fdiff(EIG, [4], [X[1], X[7]]);
      G[5] := fdiff(EIG, [5], [X[1], X[7]]);
      G[6] := fdiff(EIG, [6], [X[1], X[7]]);
      G[7] := fdiff(EIG, [7], [X[1], X[7]]);
  end proc;

> Optimization:-NLPSolve(7, objf, objectivegradient = objfgradient, initialpoint = Vector([0.5e-3, 0.5e-3, 0.1e-2, 0.5e-3, .3, .7, 0.8e-3]));
NLPSolve: calling NLP solver
SolveUnconstrained: using method=pcg
SolveUnconstrained: number of problem variables 7
PrintSettings: optimality tolerance set to 0.3256082241e-11
PrintSettings: iteration limit set to 50
SolveUnconstrained: trying evalhf mode
SolveUnconstrained: trying evalf mode
Error, (in Optimization:-NLPSolve) could not store fdiff(EIGEXP, [1], [0.500000000000000012e-3, 0.800000000000000040e-3]) in a floating-point rtable

It seems like I'm doing something wrong in the 'objfgradient' procedure, specifically with how I'm calling fdiff. I admit I don't understand very well how it works, so I appreciate your help and patience!
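From reading the fdiff help page, my understanding is that the third argument should be the full list of coordinates of the evaluation point (one entry per argument of EIG), not just two of them. So I suspect the gradient procedure ought to look something like the sketch below, but I'm not sure whether this is right:

```maple
objfgradient := proc (X::Vector, G::Vector)
    local i;
    # My guess: fdiff needs all seven coordinates of the point,
    # since EIG takes seven arguments. [i] selects which argument
    # the partial derivative is taken with respect to.
    for i from 1 to 7 do
        G[i] := fdiff(EIG, [i], [X[1], X[2], X[3], X[4], X[5], X[6], X[7]]);
    end do;
    # G is updated in place; no return value is needed.
end proc:
```

If that's the correct usage, it would also explain the error message, since my original calls only passed X[1] and X[7] and fdiff presumably returned unevaluated instead of a float.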
