Sergey Moiseev

## 413 Reputation

16 years, 133 days
Sergey N. Moiseev received the M.S., Ph.D., and Dr.Sc. degrees in radio physics from Voronezh State University, Voronezh, Russia, in 1986, 1993, and 2003, respectively. From 1984 to 2003 his research topics included the theory and methods of signal processing, nonlinear optimization, decision-making theory, time series prediction, statistical radio physics, and sporadic ionospheric channel models. He is currently a principal scientist at JSC Kodofon, Voronezh, Russia. His current research interests span a wide range of topics in communications.

## isolve...

@hirnyk For an integer solution, your proposal to use the isolve command is optimal.


@hirnyk Thanks, Markiyan. The discussion you and others had about the old package version on the exponenta.ru site helps to improve the package.

## Standard optimization notation...

@Preben Alsholm I quote from the Search command's Help page: "If f is a procedure or name of procedure the problem variables are the names of Required Positional Parameters (see  ?parameter_classes ) which can be returned by op(1, eval(f)) command. If you want to use another type of procedure parameters or you use "The End of Parameters Marker, \$", you should use option variables to explicitly give the list of the problem variable names."

Search(sin, [x=1..2]);
is correct according to the Help page, but your first example
Search(sin, [y=1..2]);
is not, because y is not the name of a Required Positional Parameter of the sin procedure.

I agree that "the function x->x^2 is exactly the same as the function y -> y^2", but only when there are no constraints. If there are constraints, for example x>0, then the notation (x->x^2, x>0) naturally reads as a function and a constraint, whereas (y->y^2, x>0) reads as a function of the variable y together with an inequality in a different variable x.

One of the standard notations for constrained optimization problems in scientific papers is the following:
minimize f : x ∈ R^n -> R
subject to
g(x) >= 0, h(x) = 0.

So Search can reproduce this notation almost exactly.
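To illustrate, here is a minimal sketch in that notation (assuming the DirectSearch package is installed; the objective f and the bound are made up for the example):

```
with(DirectSearch):
# f : x in R -> R, subject to g(x) = x >= 0
f := x -> (x - 1)^2;
# the problem variable x is the required positional parameter of f,
# matching the Help-page rule quoted above
Search(f, [x >= 0]);
```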

## It works correctly...

Thanks, acer.

1. As for the initial point: of course, the smartness of the package algorithms can be improved, but that would require more and more additional code.

2. As for constraints given as expressions when the objective function is given as a procedure: it is not an error or a bug. This case was included on purpose. Why? Because it works correctly. For example:

with(DirectSearch):
f := (x, y) -> abs(x) + abs(y);
wr := usewarning = false:

# the problem variables are the required positional parameters of f
s := op(1, eval(f));
is(x = s[1]), is(y = s[2]);

Search(f, [x >= 5], wr);
Search(f, [y >= 5], wr);
Search(f, [x >= 5, y >= 5], wr);

Of course, it is always more reliable to include the option variables with the explicit variable names when the objective function and the constraints have different types.
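For example, a minimal sketch of that safer form (the procedure f is hypothetical, reusing the example above):

```
with(DirectSearch):
f := (x, y) -> abs(x) + abs(y):
# objective is a procedure, constraints are expressions:
# give the variable names explicitly to avoid any ambiguity
Search(f, [x >= 5, y >= 5], variables = [x, y]);
```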

## Thank you...

Thank you, it is a great site. It is a pity that the exponenta.ru site has no link to it.

## @Ninetrees You can get info about _EnvLe...

@Ninetrees You can get information about _EnvLegendreCut via the command ?LegendreP. Also, the Legendre series is a particular case of the Jacobi series (see the help page ?JacobiSeries).

## Negative values for zeros...

@hirnyk You get negative values for the zeros because pointrange=[a=0..2000] only sets the random starting points of the search to the interval [0..2000]; the solutions themselves may lie outside it. See plot(f(0,a), a=-2000..2000). To get solutions only in the interval [0..2000], one should add the constraints:

zeros:=GlobalSearch(f(0,a), {a>=0, a<=2000}, pointrange=[a=0..2000]);


## Procedure must returns float...

@vie_charlie Your procedure PEffSQ must always return a float value for any parameters egx and egy, but when Egy>Egx it returns NULL. You should specify the return value for the case Egy>Egx. If you cannot return a float value when Egy>Egx, then return an infeasible complex value, for example I.

So your corrected procedure will be:

PEffSQ := proc(Egx, Egy)
  if Egy <= Egx then
    100*evalf(subs(subs(Eg1 = Egx, Eg2 = Egy, EV), SX(Egx, Egy), PCellMax/PInc));
  else
    I;  # infeasible marker: a complex value signals an infeasible point
  end if;
end proc:

Another very simple way is to include the inequality condition Eg1 >= Eg2 in the constraints explicitly.
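A minimal sketch of that alternative (the variable names and bounds are taken from the maximization command in the "Use DirectSearch package" post; adjust them to your actual problem):

```
with(DirectSearch):
# add the feasibility condition as an explicit constraint
# instead of returning I from the procedure
Search(PEffSQ, {eg1 >= eg2, 1.5 <= eg1, eg1 <= 3, 1 <= eg2, eg2 <= 2},
       variables = [eg1, eg2], maximize);
```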


## Use DirectSearch package...

For maximization you can use command

Search(PEffSQ, {2 >= eg2, 3 >= eg1, 1 <= eg2, 1.5 <= eg1}, initialpoint={eg1 = 2.3, eg2 = 1.4}, variables = [eg1, eg2], maximize);

from DirectSearch package:

http://www.maplesoft.com/applications/view.aspx?SID=87637

