Preben Alsholm

13728 Reputation

22 Badges

20 years, 249 days

MaplePrimes Activity


These are replies submitted by Preben Alsholm

If the result Y found above is close to being a solution, then the solution is not unique. The Y found is not symmetric with respect to x = L/2, but it is immediately seen that if y(x) is a solution of the bvp then so is y(L-x). Therefore, unless y(L-x) = y(x) for all x in 0..L, the problem has more than one solution.

Instead of using fsolve it seems to be a good idea to minimize y(L)^2 + y'(L)^2 using LSSolve from the Optimization package. It is fast and gives great results:

with(Optimization);
res:=LSSolve([p1,p2]);
dsolpar(parameters=convert(res[2],list));
Y(L),Y1(L);
plots:-odeplot(dsolpar,[x,y(x)],0..L);
#Using as initialpoint for LSSolve the result found by fsolve:
res:=LSSolve([p1,p2],initialpoint=[0.7898528584778143356e-2, -0.1372579321779395876e-3]);
dsolpar(parameters=convert(res[2],list));
Y(L),Y1(L);
plots:-odeplot(dsolpar,[x,y(x)],0..L);
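
The least-squares idea can be illustrated outside Maple as well. Below is a small pure-Python sketch of the same strategy (my own illustrative bvp, not the poster's problem): treat the unknown initial slope as a shooting parameter s and drive the boundary residual y(1)^2 to zero, here for the Bratu-type problem y'' = -exp(y), y(0) = y(1) = 0.

```python
from math import exp

# Illustrative shooting example (assumed problem, not from the post):
# y'' = -exp(y), y(0) = 0, y(1) = 0; shooting parameter s = y'(0).

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta for a first-order system."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h,   [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def f(t, y):  # y = [y, y']
    return [y[1], -exp(y[0])]

def y_end(s):
    """Value of y(1) for initial slope s."""
    return rk4(f, [0.0, s], 0.0, 1.0, 200)[0]

def residual(s):
    return y_end(s)**2  # the quantity LSSolve would minimize

# y(1) < 0 for s = 0 and y(1) > 0 for s = 1, so bisect on its sign
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if (y_end(mid) > 0) == (y_end(lo) > 0):
        lo = mid
    else:
        hi = mid
s_star = (lo + hi) / 2
print(residual(s_star))  # essentially zero
```

Here the residual is driven to zero by bisecting on the sign of y(1); LSSolve plays the corresponding role for the two-parameter residual in the Maple code above.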

You could use :-line. After the command with(plottools); the name line refers to plottools:-line, so the prefix :- is needed to get at the global name line.
Thus you would write
DEplot(deq1,x(t),t=-.5..4,x=-0.2..2,IC,linecolor=black,thickness=3,arrows=:-line);

@JJames _Z1 (printed with a trailing tilde, meaning that assumptions have been made on _Z1) stands for an arbitrary integer. So if 'res' above contains _Z1 you can replace it with any integer, e.g. 1. 'res' actually also contains a name _C3 (without a trailing tilde). A name like that stands for an arbitrary complex constant.

eval(res,{_Z1=1,_C3=I/2});

As in solving the eigenvalue problem for a matrix A (find the numbers lambda such that Av = lambda*v has a nontrivial solution), there may be ways of finding lambda before actually finding the corresponding v's. In that case we solve the characteristic equation det(A - lambda*Id) = 0.
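
The matrix analogue can be made concrete with a tiny pure-Python sketch (my own example matrix, nothing from the thread): for a 2x2 matrix the characteristic equation det(A - lambda*Id) = 0 is just a quadratic in lambda, so the eigenvalues are found without computing any eigenvector.

```python
from math import sqrt

A = [[2.0, 1.0],
     [1.0, 2.0]]  # assumed example matrix

# det(A - lambda*Id) = lambda^2 - tr(A)*lambda + det(A) for a 2x2 matrix
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = sqrt(tr*tr - 4*det)  # real here, since A is symmetric
lams = [(tr - disc)/2, (tr + disc)/2]
print(lams)  # [1.0, 3.0]
```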

Above we found the possible values for lambda (actually assuming that they were positive). There were infinitely many, and I chose to use a simpler name n instead of _Z1 or _Z2 or what have you (Maple changes the number if you do it again without a restart).

lambda = (a^2+Pi^2*n^2)^(3/2)/a.

After having found the possible values of lambda you can find all the corresponding eigenfunctions:

dsolve(eval({ode1,ode2,bcs},lambda=(a^2+n^2*Pi^2)^(3/2)/a)) assuming n::integer;

However, Maple can (in principle) solve the whole problem in one command as I noticed in the corrected version of my first answer to your question:

dsolve({ode1,ode2,bcs},{alpha(z),H(z),lambda});

The output is rather unwieldy, but hopefully correct.

With a little more work it can be seen that the corresponding sequence of negative values
lambda = -(a^2+n^2*Pi^2)^(3/2)/a
are also eigenvalues (though only for n <> 0).

@JohnJames There is no ready-made procedure, so yes, you will have to do it manually. The individual manual steps could, however, be done in Maple, but you will most likely find it much easier to do them on paper.

To illustrate that last point I can give you a few steps, which will probably convince you:

res1:=dsolve(odes[1]) assuming _c[1]<0; #if that is the relevant assumption
res2:=dsolve(odes[2]);
eval(IBC,f=unapply(rhs(op(1,res)),x1,x2));
#We want a nontrivial solution, so
eval(%,{_F1(x1)=1,_F2(x2)=1});
diff(res1,x1);
eval(%,x1=0);
eval(rhs(%%),{x1=L1,_C1=0});
solve(%=0,{_c[1]},AllSolutions);

etc.

In other words, you will have to know what to do yourself at each step. Doing it in Maple just adds to the difficulty in this case.

Relying on the order and form of the output from convertsys, you could define vn by using op:

vn:=op([2,1,2,1],Deq);

As for eval(%,dat[2]); I don't know what your intention is (in view of the next line with remove).
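
For readers unfamiliar with op, its index-path form walks into the nested structure one operand at a time. A small pure-Python analogue on nested lists (illustrative only; Maple's internal representation is of course different, and the sample structure below is made up):

```python
def op_path(path, expr):
    """Follow a 1-based operand path into a nested list, like op([...], expr)."""
    for i in path:
        expr = expr[i - 1]  # Maple operand positions are 1-based
    return expr

# hypothetical nested structure standing in for the output of convertsys
deq = ["proclist", [["lhs", ["a", "vn"]], "rhs"], "numvars"]
print(op_path([2, 1, 2, 2], deq))  # vn
```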

@herclau Yes, I have experienced several times that copying from a worksheet to MaplePrimes fails to include everything. This, unfortunately, makes it necessary to read the whole code after copying, something that I clearly failed to do with Dsolve2.

@herclau
I don't get any email notifications either anymore!


Whereas LinearAlgebra is a module, DEtools is not, which explains the difference in their treatment below.

I have made a few changes in your procedure besides making it take different input (and output).

Dsolve2:=proc (eqs::{list,set}(equation),ics::{list,set}(equation),vars::{set,list}(function(name)))
local t,dat,A,b,v,vi,ic,Xt,X,x,sys;
uses LinearAlgebra;
t:=op(op~(vars));
dat:=DEtools[convertsys](eqs,ics,vars,t,x);
A,b:=GenerateMatrix(rhs~(dat[1]),lhs~(dat[2]));
Xt:=applyop~(apply,1,dat[2],t);
X:=lhs~();
v:=diff~(X,t)=A.X-b;
vi:=eval(X,t=dat[3])=;
sys:=convert(Equate(lhs(v),rhs(v)),set) union convert(Equate(lhs(vi),rhs(vi)),set);
userinfo(2,'procname',print(v,vi,));
eval(dsolve(sys,_rest),Xt);
end proc:

infolevel[Dsolve2]:=2:
Dsolve2({deq},{u(0)=2,D(u)(0)=-1},{u(t)});
Dsolve2({deq},{u(0)=2,D(u)(0)=-1},{u(t)},numeric,output=Array([0,0.25,0.5,0.75,1]));

#A comment: Your procedure as well as my modified version obviously only handles linear systems.
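
The reduction that Dsolve2 relies on can be sketched in a few lines of pure Python (an illustration under my own assumptions, not a port of the procedure): write the linear ODE u'' = -u with u(0) = 2, u'(0) = -1 as the first-order system X' = A.X with X = [u, u'], and evaluate X(t) = exp(A*t).X(0) using a truncated Taylor series for the matrix exponential.

```python
from math import cos, sin

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small matrices)."""
    n = len(A)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # Id
    T = [row[:] for row in S]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

t = 1.0
At = [[0.0, t], [-t, 0.0]]  # A*t for A = [[0, 1], [-1, 0]] (u'' = -u)
X0 = [2.0, -1.0]            # u(0) = 2, u'(0) = -1
E = expm(At)
Xt = [sum(E[i][j]*X0[j] for j in range(2)) for i in range(2)]
# exact solution: u(t) = 2*cos(t) - sin(t)
print(abs(Xt[0] - (2*cos(t) - sin(t))) < 1e-9)  # True
```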

There is no infinite series, so there is no question of convergence:
The idea in Gantmacher is to replace f by a polynomial r which has the "same values on the spectrum" as f. By the same values is meant not only r(L) = f(L) for every eigenvalue L, but also equality of the derivatives of r and f at L up to order one less than the multiplicity of L in the minimal polynomial of the matrix.
With your new matrix:
G := Matrix(3, 3, {(1, 1) = 218/29, (1, 2) = 96/29, (1, 3) = 0, (2, 1) = 120/29, (2, 2) = 130/29, (2, 3) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = 5})

Eigenvalues(G);
MinimalPolynomial(G,lambda); #in this case (as for J) the same as the characteristic poly
factor(%);
#The 3 multiplicities for 2, 5, and 10 are 1:
#m[L]=1, for L=2,5,10. Thus m[L]-1 = 0 for all three (see eqs below).
r:=unapply(add(a[k]*lambda^k,k=0..2),lambda);
f:=x->ln(1+x);
#The set of equations in this case. In the inner seq k runs from 0 to m[L]-1 (in this case 0):
eqs:=op~({seq(seq([(D@@k)(r)(L)=(D@@k)(f)(L)],k=0..),L=[2,5,10])});
solve(%);
R:=eval(r(lambda),%);
M:=eval(R,lambda=G);
#Check:
MatrixExponential(M)-1;
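
The same construction is easy to mimic in pure Python (an illustrative sketch, not Maple): since the three eigenvalues 2, 5, 10 of G are simple, r is just the Lagrange interpolant of f(x) = ln(1+x) at those points, M = r(G), and exp(M) should reproduce Id + G.

```python
from math import log

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lin_comb(a, A, b, B):  # a*A + b*B entrywise
    n = len(A)
    return [[a*A[i][j] + b*B[i][j] for j in range(n)] for i in range(n)]

G = [[218/29, 96/29, 0.0],
     [120/29, 130/29, 0.0],
     [0.0,    0.0,    5.0]]
eigs = [2.0, 5.0, 10.0]  # the simple eigenvalues found above
I3 = identity(3)

# M = r(G) with r the Lagrange interpolant of f(x) = ln(1+x) at the eigenvalues:
# M = sum over L of f(L) * prod over mu <> L of (G - mu*Id)/(L - mu)
M = [[0.0]*3 for _ in range(3)]
for L in eigs:
    P = identity(3)
    for mu in eigs:
        if mu != L:
            P = mat_mul(P, lin_comb(1/(L - mu), G, -mu/(L - mu), I3))
    M = lin_comb(1.0, M, log(1.0 + L), P)

# Check: exp(M) should equal Id + G (exp via truncated Taylor series)
E, T = identity(3), identity(3)
for k in range(1, 40):
    T = [[v/k for v in row] for row in mat_mul(T, M)]
    E = lin_comb(1.0, E, 1.0, T)

err = max(abs(E[i][j] - (I3[i][j] + G[i][j])) for i in range(3) for j in range(3))
print(err < 1e-8)  # True
```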
