A zero of the derivative is not necessarily an extremum of the original function [consider the case of f(x)=x^3]. However, if the function has an extremum at a point where it is differentiable, the derivative there has to be zero. Furthermore, a zero derivative by itself doesn’t tell us whether the point is a maximum, a minimum, or neither. Possible extremum points should also be searched for on the boundaries of the domain. However, we don’t know the exact form of the function, or, as Alex pointed out, ‘if you also want to find the value of t for which this value is attained, it seems you need to analyze numerical solutions to the system of differential equations’.
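A minimal Maple illustration of the f(x)=x^3 case:
> f := x -> x^3:
> solve(diff(f(x), x) = 0, x);   # the derivative 3*x^2 vanishes at x = 0
> plot(f(x), x = -1 .. 1);       # yet f is strictly increasing, so x = 0 is not an extremum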
I guess the code I provided didn’t help you much, since your system already has specific values assigned to its parameters. Why don’t you give us the parameter values and the initial conditions, so we can have a look at the original system?
Very convenient and straightforward. Thank you very much.
pantole
Thanks for the correction. The built-in [Optimization] package is indeed more straightforward than the GOT for this problem; I hadn't tested it here. What I had in mind was a previous post about an integral, which I did briefly test using the Optimization package.
Now I also have to reconsider my previous entry...
pantole
If it is a matter of evaluating the integral numerically, then you may test it with the GlobalOptimization package (using the branchandbound option), as DJKeenan suggested:
> restart:
> integral := proc(k)
    if type(k, numeric) then
      int(2.000000000*10^8*(.51-1.428285686*10^(-0.1600000000e-1*Pi)*sqrt(1-k^2)*cos(0.2473600e-2*Pi^2/lambda)+(10^(-0.1600000000e-1*Pi))^2*(1-k^2))/((1-.7141428429*10^(-0.1600000000e-1*Pi)*sqrt(1-k^2))^2*(1+2.856571372*10^(-0.1600000000e-1*Pi)*sqrt(1-k^2)*sin(0.1236800e-2*Pi^2/lambda)^2/(1-.7141428429*10^(-0.1600000000e-1*Pi)*sqrt(1-k^2))^2)), lambda = 6.30*10^(-7) .. 6.35*10^(-7))
    else
      'integral'(k)
    end if;
  end proc:
> with(GlobalOptimization): infolevel[GlobalOptimization]:=3:
> GlobalSolve(integral(k),k=0..1, maximize = true, timelimit=57600, method=branchandbound);
The answer returned after 25 minutes (on a 1 GHz processor) is:
GlobalSolve: calling NLP solver
SolveGeneral: calling global optimization solver
SolveGeneral: number of problem variables 1
SolveGeneral: number of nonlinear inequality constraints 0
SolveGeneral: number of nonlinear equality constraints 0
SolveGeneral: method branchandbound
SolveGeneral: merit function evaluation limit 1000
SolveGeneral: non-improving merit function evaluation limit 200
SolveGeneral: constraint penalty multiplier 100.0
SolveGeneral: target merit function value 0.10e11
SolveGeneral: local search target objective function value 0.10e11
SolveGeneral: local search feasibility tolerance 0.10e-5
SolveGeneral: local search optimality tolerance 0.10e-5
SolveGeneral: time limit in seconds 57600
SolveGeneral: trying evalhf mode
SolveGeneral: trying evalf mode
LGORUN: total number of function evaluations 295
LGORUN: runtime in external solver 1583.
LGORUN: maximum constraint infeasibility 0.
[0.829292352751139994, [k = 0.]]
That is, GlobalSolve returns the lower bound k = 0. If theoretical considerations must be taken into account, then you should refer to Robert's answer and compare P(0) with P(1) [P being a function of k], or something like that.
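For instance, reusing the integral procedure above, the two endpoints can be compared directly:
> evalf(integral(0));   # value of the integral at the lower bound k = 0
> evalf(integral(1));   # value at the upper bound k = 1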
I understand that few people have the Global Optimization Toolbox, and posting an example that requires it is rather inconvenient. However, I am posting the code because I believe that using the add-on package [Optimization] is not as straightforward as the above.
I hope that Maplesoft will consider integrating the GOT into forthcoming versions of Maple.
pantole
Inspired by Doug's answer and using the Introductory Programming Guide of Maple 10 (it comes with the CD): on pages 210-211 there is a procedure for the Fibonacci numbers.
It is copied below (without the remember option):
> restart: F := (n::nonnegint, x) ->
> if n < 2 then n
> else simplify(x*F(n-1, x) + F(n-2, x))
> end if:
> F(10, x);
which returns x^9+8*x^7+21*x^5+20*x^3+5*x.
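For larger n the plain recursion recomputes the same values many times; restoring the Guide's remember option avoids this. A minimal sketch:
> F := proc(n::nonnegint, x)
>   option remember;   # cache each F(n, x) so it is computed only once
>   if n < 2 then n
>   else simplify(x*F(n-1, x) + F(n-2, x))
>   end if;
> end proc:
> F(10, x);   # same polynomial as above, with far fewer recursive calls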
I guess that the best way to show the advantages of transforming a non-convex problem into a convex one is to give an example in Maple. Since most of us lack a grounding in the principles of convex programming, examples can be most instructive.
Can all non-convex problems be transformed into convex ones? If so, I guess global optimization collapses as a field (theory: in convex programming, every local minimum is a global minimum). Is this so?
Can we make our lives easier by finding algorithms for such transformations? The link proposed above mentions: “Nevertheless, there remains a significant impediment to the more widespread adoption of convex programming: the high level of expertise required to use it” – M. Grant, S. Boyd, and Y. Ye.
I would be glad to discover the advantages of convex programming, or ways to optimize a procedure by the best method available. I am not an expert in optimization, but I am a good listener and a fairly diligent reader.
A lot of researchers deal with optimization problems (fitting, parameter estimation, etc.) on an almost daily basis, and many of these problems are nonlinear. In such cases you can’t be sure whether the problem you are dealing with is non-convex (meaning there is more than one local optimum) unless you plot the function and inspect its behaviour over the whole domain. But what happens if you have to find 3 or more parameters? Can you plot the function? I guess not.
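As a one-variable illustration of the danger (the function below is made up for this purpose): a non-convex objective with two local minima, where a local solver's answer depends on the starting point.
> g := x -> (x^2 - 1)^2 + 0.3*x:   # non-convex: two local minima, near x = 1 and x = -1
> Optimization[Minimize](g(x), x = -2 .. 2, initialpoint = {x = 1});    # started near x = 1
> Optimization[Minimize](g(x), x = -2 .. 2, initialpoint = {x = -1});   # started near x = -1: typically finds the deeper (global) minimum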
In such cases global optimization is a MUST. You have to be sure that you have reached the best answer or, as Paulina says, "the best answer until the termination criteria are met".
Another difficult task arises when you don’t have a function but rather a procedure (e.g. you want to find the upper limit of an integral such that the integral equals a given value, or you have a matrix/vector of experimental data and a set of ODEs describing its behaviour, and you want to find the parameters by minimizing the error between the experimental and calculated values). Such algorithms are difficult to find, or difficult to implement, which amounts to the same thing: you can’t deal with these problems.
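A hypothetical sketch of the second case (the decay model, the data, and the names tdata, ydata, sse are all invented for illustration): the objective is a procedure that solves the ODE numerically and returns the sum of squared errors, using the same unevaluated-return trick as above.
> tdata := [0., 0.5, 1.0]:  ydata := [1.0, 0.61, 0.37]:     # made-up measurements
> sse := proc(p)
>   local sol, i;
>   if not type(p, numeric) then return 'sse'(p) end if;    # return unevaluated for symbolic p
>   sol := dsolve({diff(y(t), t) = -p*y(t), y(0) = 1}, y(t), numeric):
>   add((rhs(sol(tdata[i])[2]) - ydata[i])^2, i = 1 .. nops(tdata));
> end proc:
> Optimization[NLPSolve](sse(p), p = 0 .. 2);               # minimize the error over the parameter p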
So, there is a need not only for global optimization, but also for examples, procedures, and other material supporting its implementation.
Do you need something like the following?
> restart:
> p[1] := 1;
> p[2] := 4;
> EBO := 'EBO':  EBO := array(1..10, 1..2):
> for n to 2 do
>   for s to 10 do
>     EBO[s, n] := sum((x-s)*p[n]^x*evalf(exp(-p[n]))/factorial(x), x = s+1 .. infinity)
>   od od:
> print(EBO);
The output is an 'elegant' table, but is it what you need? Note that I replaced p[n] with n in the 7th line (the assignment to EBO), because an array declared with bounds 1..10, 1..2 cannot be indexed by p[n] = 4 (I think).
pantole
If you use the print command, Maple returns a table (I don't know whether it is the one you expect).
restart:
p[1] := 1;
p[2] := 4;
for n to 2 do
  for s to 10 do
    EBO[s, p[n]] := sum((x-s)*p[n]^x*evalf(exp(-p[n]))/factorial(x), x = s+1 .. infinity)
  od od:
print(EBO);
Output: table([(8, 4) = 0.33626988e-1, (10, 4) = 0.4131312e-2, (5, 4) = .410304195, (3, 1) = 0.23336927e-1, (4, 4) = .7814672593, (7, 1) = 0.11498e-4, (6, 1) = 0.94738e-4, (5, 1) = 0.688923e-3, (7, 4) = 0.84760603e-1, (1, 4) = 3.018315639, (6, 4) = .195434582, (10, 1) = 0.1200000000e-7, (2, 1) = .1036383241, (3, 4) = 1.347997139, (9, 4) = 0.12263553e-1, (8, 1) = 0.1250e-5, (1, 1) = .3678794412, (2, 4) = 2.109893833, (4, 1) = 0.4348769e-2, (9, 1) = 0.1230000000e-6])
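If a rectangular layout is preferred, the table entries can be collected into a Matrix (a sketch, assuming p[1] = 1 and p[2] = 4 as above):
> Matrix(10, 2, (s, n) -> EBO[s, p[n]]);   # rows: s = 1..10; columns: n = 1 (p = 1) and n = 2 (p = 4)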
I think that the question of finding “all the values of mu and T that satisfy the integral equaling” a certain value is ill-posed. The problem may well have many local solutions, but only ONE global optimum (in a certain range). Using the Global Optimization Toolbox, and with some coding, we end up with the following (mu = a, T = b):
> restart:
> integral := proc(a, b)
>   if type(a, numeric) and type(b, numeric) then
>     int(sqrt(x)/(exp((x-a)/b) + 1), x = 0 .. infinity)
>   else
>     'integral'(a, b)
>   end if;
> end proc;
> objf := proc(a, b, val)
>   if type(a, numeric) and type(b, numeric) then
>     (integral(a, b) - val)^2
>   else
>     'objf'(a, b, val)
>   end if;
> end proc;
> val := 9.270205270:   # note: 'value' is a protected name in Maple, hence 'val'
> with(GlobalOptimization): infolevel[GlobalOptimization] := 3:
> GlobalSolve(objf(a, b, val), a = 1 .. 1.5, b = 4 .. 6, timelimit = 57600, method = branchandbound);
The answer is [0., [a = 1.387412, b = 4.949096]], quite near the values given by JacquesC. It was obtained after approximately 5 minutes on a 3.4 GHz processor (Maple 10). Maple's built-in [Optimization] package gives local solutions [NLPSolve seems to give global solutions “in limited situations”, which I didn't check].
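As a quick sanity check of the reported optimum, one can reuse the integral procedure above:
> evalf(integral(1.387412, 4.949096));   # should come out close to 9.270205270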
Concerning the limits of the integral, you can use the identity [given in Numerical Recipes in FORTRAN, 2nd ed., 1992, by Press et al., p. 140]:
int(f(x), x = a .. infinity) = int(f(-ln(t))*1/t, t = 0 .. exp(-a))
One may end up with a simpler integral by using this transformation.
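A quick numerical check of the identity in Maple, with an arbitrary test integrand:
> f := x -> exp(-x^2):   # arbitrary test integrand
> a := 1:
> evalf(Int(f(x), x = a .. infinity));         # direct numerical evaluation
> evalf(Int(f(-ln(t))/t, t = 0 .. exp(-a)));   # after the substitution x = -ln(t); the two values agree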
pantole