Thank you Robert and Will,
I somehow missed that update possibility... I got a new activation code now and will test how it works.
Thanks again for pointing me there, Will (and also for the more pragmatic approach, Robert! :-)
quantum
You should have a look at Maple's Optimization package, specifically the "nonlinearsimplex" method (= Nelder-Mead), which handles multivariate functions without computing derivatives.
See also the corresponding online help: http://www.maplesoft.com/support/help/view.aspx?path=NLP
Convergence might be better than with a genetic algorithm (but I'm no expert...). For more elaborate problems you might want to have a look at the Global Optimization Toolbox for Maple, which also provides derivative-free solvers.
Hope this helps to get you started.
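For instance, a call along these lines (just a minimal sketch; the objective here is an arbitrary two-variable test function, not from your problem):

# Derivative-free Nelder-Mead minimization of a made-up test function
Optimization:-NLPSolve((x - 1)^2 + (y - 2)^2 + sin(x*y),
    initialpoint = {x = 0, y = 0},
    method = nonlinearsimplex);

Note that, as far as I understand the help page, the nonlinearsimplex method applies to unconstrained problems, so no bounds or constraints are given here.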
So here are some findings from my battle with Maple.
If a wrapper for the external function is needed, then Maple runs into trouble because the compiler cannot find Maple's own include files. This is due to a wrong default option in 'define_external' (Maple 12, Win and Linux): see the following example and note the doubled quotation marks in the Maple output.
> define_external('COMPILE_OPTIONS'):-INC_PATH;
""C:\Program Files\Maple 12\extern\include\""
Under Linux things then seem to be reasonably fine, but of course one has to be careful with the input and output datatypes. For those who are interested in Windows: when compiling with Intel's Visual Fortran compiler it seems one has to use the "STDCALL, REFERENCE" (/iface:stdref) calling convention. Otherwise Maple raises an unhandled exception (at least for me...).
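A possible workaround (a sketch only, assuming, as the ?COMPILE_OPTIONS help page suggests, that these exports can be reassigned): overwrite the include path with a correctly quoted version before compiling the wrapper. Backslashes must be doubled inside a Maple string:

# reset the include path without the spurious extra quotation marks
copts := define_external('COMPILE_OPTIONS'):
copts:-INC_PATH := "C:\\Program Files\\Maple 12\\extern\\include\\":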
This is actually not exactly what I want...
I don't want to force the use of this Digits setting; I just want it as a default (which can still be changed if needed), set once at the beginning when the package is loaded.
I therefore don't want to write it explicitly in every procedure of the package (there are around 80 of them...).
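If it helps, my understanding of why a ModuleLoad procedure alone won't do it: Digits is an environment variable, so an assignment made inside any procedure (including ModuleLoad) is automatically undone when that procedure returns. A sketch of two places where a package-wide default does stick (the package name is hypothetical):

# 1) the user's initialization file (.mapleinit on Linux, maple.ini on
#    Windows), which is read at top level on startup:
#        Digits := 20:
#
# 2) a top-level statement executed right after loading the package:
with(MyPackage):    # hypothetical package name
Digits := 20:       # default precision; can still be changed later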
I think the break command does what you want. You could insert some if-statement inside the loop which checks some criterion for breaking the loop. Something like this
for i from 1 to 10 do
    print(i);
    if i = 5 then
        break;
    end if;
end do;
Thank you! Once more it's a bit embarrassing to see your elegant suggestion :-)
I'll give it a try! Even if it should turn out that this method is not suitable for larger problems, your construction for the generation of the constraints was insightful for me.
But if I understand you correctly, I cannot expect competitive performance from the general nonlinear solver when it comes to convergence and efficiency (because it is not aware of all those fancy algorithms that the SDP solvers have), right? Thanks again!
Yes, this is clear, at least in principle. But apart from the manual separation of real and imaginary part, it is not so obvious how I should specify the constraint of positive semidefiniteness... All in all this seems to require considerable manual work... which might render the whole enterprise rather inefficient.
To be more specific, my goal is to solve (programmatically) an optimization problem. The problem variables are parameters that are passed on to a lower-level procedure in order to parametrize e.g. a physical density matrix from which in turn some scalar value (the actual objective) is computed (via some other lower-level procedure). Of course, the number of required parameters depends on the size of the density matrix.
Now, the Minimize command seems to infer the number of problem variables from the definition of the objective procedure. At least one cannot specify the number of problem variables explicitly, and it is also not sufficient to provide an initial point from which the number of problem variables would be clear.
So if I want a procedure that performs the minimization automatically, it has to generate an objective function (procedure) with the correct number of parameters/arguments.
I hope this makes it clearer where my problem lies!?
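One trick for this kind of problem (a sketch only, untested; all names here are made up) is to build the objective procedure from a string and parse it, so that it has exactly n declared parameters and Minimize can infer the dimension:

make_objective := proc(n::posint, fname::name)
    # fname: global name of the lower-level routine, which is assumed
    # to take a single list of n parameters
    local params, body, i;
    params := cat(seq(cat("x", i, ","), i = 1..n-1), cat("x", n));
    body := cat("proc(", params, ") ", fname, "([", params, "]) end proc");
    parse(body);   # returns a procedure with n declared parameters
end proc:
# e.g.: obj := make_objective(4, 'my_lowlevel_objective'):
#       Optimization:-Minimize(obj, initialpoint = [0, 0, 0, 0]);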
You can use the timelimit command.
If you do some printout during the execution of your procedure (or assign it to some global variable) then you can see/use the results you got so far. Example from the help page:
f := proc()
    global i;
    for i to 200000 do
        2^i
    end do
end proc:
timelimit(0.5, f());
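To actually recover the partial results when the time runs out, one can catch the error that timelimit raises (its message is "time expired"), e.g. for the procedure above:

try
    timelimit(0.5, f());
catch "time expired":
    # the global variable i still holds the last loop index reached
    printf("stopped at i = %d\n", i);
end try;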
I guess what you want is that your procedure looks at some function h1 in the interval [a1,b1] with a fixed step size 0.1 and returns the maximum value that was found. Then your code could look like:
mymax := proc(h1, a1, b1)
    local t1, maxi;
    maxi := h1(a1);   # start with the first sample (0 would be wrong
                      # if the maximum is negative)
    for t1 from a1 to b1 by 0.1 do
        if h1(t1) >= maxi then
            maxi := h1(t1);
        end if;
    end do;
    maxi;
end proc;
Note that the assignment to maxi has to be done using ':=' instead of '='. Also note the for...do construction (type ?do at the Maple prompt to get the corresponding help page with more examples).
Using the modified version above works like this:
mymax(x -> -x^2+1, -2, 2);
1.
I hope this was what you wanted!?
Thank you for your comments. I know that my question was/is rather general and therefore hard to answer. In principle I am aware of the following rules of thumb:
- use appropriate datatypes in the matrices, e.g. float[8] or complex[8]
- use programming layer commands of the LinearAlgebra package, i.e. LA_Main:-...
- use cache/remember tables when suitable
- use inlining (typically not possible)
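As a small illustration of the first two points (sizes and entries here are arbitrary): a hardware-datatype Matrix combined with a direct call to a programming-level LinearAlgebra routine, which skips the argument processing of the top-level command:

A := Matrix(200, 200, (i, j) -> evalf(sin(i*j)), datatype = float[8]):
B := LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(A, A,
         inplace = false, outputoptions = [datatype = float[8]]):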
Below I'll show some sample procedures to give some idea...
#################################################################################
#################################################################################
Parametrize_SU_Euler := proc(N::posint, params)
#
# returns a NxN SU(N) unitary matrix as described by the given list of
# parameters (or the keyword "random").
# The procedure follows [Tilma, Sudarshan, J. Phys. A 35 (2002) 10467]
# (see Eq. (19))
#
options hfloat;
local lambda, used_lambdas, temp, i, j, m, l, k, A, alpha, X, param_list, param_ranges, U;
if params::list then
#
# if a list of parameters [alpha[1], ..., alpha[N^2-1]]
# is given they are checked to be within the valid ranges.
#
param_ranges := evalf(Feynman_parameters("SU", "Euler angles", N));
if not nops(param_ranges) = nops(params) then
error(`\: incorrect number of parameters! Expected a list of`, nops(param_ranges),` parameters.`);
end if;
alpha := params;
elif params = "random" then
#
# if no explicit list of numerical parameters is given but
# the key "random" then the necessary parameters are generated
# randomly
#
X := Statistics[RandomVariable](Uniform(0,1000));
alpha := convert(Statistics[Sample](X, N^2-1), list);
else
error(`\: Either a list of numerical parameters is expected as second argument or the keyword "random".`)
end if;
#
# first, a list of the generalized Gell-Mann matrices is needed as
# a hermitian basis of the space of NxN matrices.
#
lambda := Hermitian_basis(N, "float");
#
# define auxiliary function j(m) as in the reference
#
#j := m -> piecewise(m=N, 0, sum(2*(m+l), l=0..N-m-1));
#
# actually the same but easier is the following
j := m -> (N+m-1)*(N-m);
#
# create a list of the lambda matrices that are actually used
# (in the order of later use, including multiple occurrences)
#
used_lambdas := Vector(N^2-1, datatype=integer):
i := 1;
for m from N to 2 by -1 do
for k from 2 to m do
used_lambdas[i] := 3;
used_lambdas[i+1] := (k-1)^2 + 1;
i := i+2;
end do:
end do:
for m from 2 to N do
used_lambdas[i] := m^2-1;
i := i+1;
end do:
temp := LinearAlgebra:-LA_Main:-MatrixScalarMultiply(
lambda[used_lambdas[1]], I*alpha[1],
inplace=false, outputoptions=[datatype=complex[8]]);
U := LinearAlgebra:-LA_Main:-MatrixFunction(
temp, exp(dummy), dummy, outputoptions=[datatype=complex[8]]);
for k from 2 to op(1,used_lambdas) do
temp := LinearAlgebra:-LA_Main:-MatrixScalarMultiply(
lambda[used_lambdas[k]], I*alpha[k],
inplace=false, outputoptions=[]);
U := LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(
U, LinearAlgebra:-LA_Main:-MatrixFunction(
temp, exp(dummy), dummy, outputoptions=[]),
inplace=false, outputoptions=[]);
end do:
U;   # return the accumulated product (a loop itself returns NULL)
end proc:
#################################################################################
#################################################################################
#################################################################################
#################################################################################
Partial_trace := proc(rho::'Matrix'(square), d::list(posint), trace_list::list(posint))
#
# returns the partial trace of a square (density) matrix with respect
# to one or more subsystems. 'd' is the list of the dimensions of each
# subspace, 'trace_list' is the list of the subsystem indices which are
# traced out.
# A matrix is returned
#
local rho_dim, i, j, k, x, d_in, d_out, in_list, out_list, U, reordered_rho,
new_rho;
#
# check if dimension of the given matrix is compatible with the specified subspace
# dimensions
#
rho_dim := mul(d[x], x=1..nops(d));
if not LinearAlgebra[RowDimension](rho) = rho_dim then
error(`\: The given matrix does not match the specified subspace dimensions.`);
end if;
if max(op(trace_list)) > nops(d) then
error(`\: One (or more) of the specified target subspaces is invalid.`);
end if;
#
# reorder the subsystems so that the surviving ones (in ascending order)
# come first and the one to be traced out are last
#
out_list, in_list := selectremove(has, [seq(i, i=1..nops(d))], trace_list);
d_in := mul(d[i], i=in_list); # dimension of the remaining subspace
d_out := mul(d[i], i=out_list); # dimension of the subspace to be traced out
if not [op(in_list), op(out_list)] = [seq(i,i=1..nops(d))] then
if rho[1,1]::complex[8] then
U := Matrix(rho_dim, rho_dim, Permutation_matrix([op(in_list), op(out_list)], d), datatype=complex[8]);
new_rho := Matrix(d_in, d_in, datatype=complex[8]);
else
U := Feynman_permutation_matrix([op(in_list), op(out_list)], d);
new_rho := Matrix(d_in, d_in);
end if;
reordered_rho := LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(
    LinearAlgebra:-LA_Main:-MatrixMatrixMultiply(U, rho,
        inplace=false, outputoptions=[]),
    LinearAlgebra:-LA_Main:-HermitianTranspose(U,
        inplace=false, outputoptions=[]),
    inplace=false, outputoptions=[]);
else
reordered_rho := rho;
if rho::'Matrix'(complex[8]) then
new_rho := Matrix(d_in, d_in, datatype=complex[8]);
else
new_rho := Matrix(d_in, d_in);
end if;
end if;
for i from 1 to d_in do
for j from 1 to d_in do
new_rho[i,j] := add(reordered_rho[(i-1)*d_out+k, (j-1)*d_out+k], k=1..d_out);
end do;
end do;
return(new_rho);
end proc:
#################################################################################
#################################################################################
Problems become apparent e.g. when I combine several of such procedures, often involving eigenvalues of matrices, in one higher-level command. When such a procedure is used e.g. in connection with the Optimization package or the Global Optimization Toolbox, then evalhf is typically not possible (as is the case for most commands involving matrices). In such scenarios some procedures are called very often, and then the bottlenecks really kick in...
Thanks for your reply!
Obviously, your suggestion works. But since I want to use the local optimization inside a program, I would prefer the operator form as the input form. If I understand the help page correctly, my syntax above should then be equivalent to yours...!? What is wrong with my syntax?