Preben Alsholm

13728 Reputation

22 Badges

20 years, 253 days

MaplePrimes Activity


These are replies submitted by Preben Alsholm

I think this example is simpler and seems to get rather unexpected results from indets: The ones labelled 'Surprise'.
My understanding is that in those 'Surprise' cases indets ought to return {diff(u(x),x),D(u)(x),Diff(u(x),x)}, i.e. all three types of derivatives. But it doesn't.

restart;
expr:=diff(u(x),x)+D(u)(x)+Diff(u(x),x);
convert(expr,diff); #OK
convert(expr,D); #OK
convert(expr,Diff); #OK
indets(expr,specfunc(D(u))); #OK
indets(expr,specfunc(diff)); #OK
indets(expr,specfunc(Diff)); #OK
indets(expr, specfunc(D(u)) &under (convert, D)); # Surprise
indets(expr, specfunc(diff) &under (convert, diff)); # Surprise
indets(expr, specfunc(Diff) &under (convert, Diff)); #Surprise
type(diff(u(x),x),specfunc(D(u)) &under (convert,D)); #OK
hastype(expr,specfunc(diff) &under (convert,D)); #OK
#######acer's remedy given in his answer below works fine:
indets(expr, specfunc(D(u)) &under (convert, compose, D, `global`)); #OK
indets(expr, specfunc(diff) &under (convert, compose, diff, `global`)); # OK
indets(expr, specfunc(Diff) &under (convert, compose, Diff, `global`)); # OK
############
The example I first looked at may still be interesting:

expr2:=diff(u(x),x)+D(u)(x);
indets(expr2, specfunc(D(u)) &under (convert, D)); # Surprise: The empty set
indets(expr2, specfunc(diff) &under (convert, diff)); # Surprise
indets(expr2, specfunc(Diff) &under (convert, Diff)); #OK




Isn't the problem that, when diff(u(x,t),x,t) is temporarily converted to D (the &under part), the expression becomes convert(expr, D), so the two versions of the mixed second derivative of u are treated as the same and represented as 2*D[1, 2](u)(x, t)?
If that is actually the case, then reversing the temporary conversion is impossible: it would result in just one of the versions, in this case the diff version.

Since the same unexpected results come up in my examples below if the expressions expr and expr2 are changed to lists, as in
expr:=[diff(u(x),x),D(u)(x),Diff(u(x),x)];
my explanation above doesn't tell the whole story.
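A minimal illustration of the merging described above (a sketch; the 2*D[1,2](u)(x,t) result is the one mentioned in this reply):

```maple
restart;
# Two representations of the same mixed second derivative:
expr3 := diff(u(x,t),x,t) + D[1,2](u)(x,t);
convert(expr3, D);   # the two forms merge: 2*D[1,2](u)(x,t)
```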

@Al86 My first version of "Another approach" was flawed: Using U:= x->1+add(c[i]*x^i,i=1..n) means that U(0) = 1. However, x = 0 corresponds to t = infinity.
I have replaced the flawed version with another. The new one doesn't make any change of variable.
I have tried yet another approach using y(t) = 1+h*y1(t)+h^2*y2(t)+...
Inserting this into the integral equation and equating powers of h gives y1(t), y2(t), etc.
Using just y(t)=1+h*y1(t) gives results that are quite similar to the collocation approach above.
That is encouraging.
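The first-order step can be sketched like this (assuming the integral equation y(t) = 1 - h*Int(K(s)*y(t-s)^4, s = 0..t) discussed in this thread, with the JacobiTheta3 kernel left as a symbolic K):

```maple
restart;
# First-order ansatz y(t) = 1 + h*y1(t), inserted into the integral equation
Y := s -> 1 + h*y1(s):
R := 1 - h*Int(K(s)*Y(t-s)^4, s = 0 .. t) - Y(t):
series(R, h, 2);
# The O(h) coefficient is -Int(K(s), s = 0 .. t) - y1(t);
# setting it to zero gives y1(t) = -Int(K(s), s = 0 .. t).
```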

It may be worth pointing out that now (at least from Maple 18 on) dsolve can solve systems given in matrix form.
Some simple examples:
restart;
M:=Matrix([[1,1,0],[0,1,3],[0,0,2]]);
b:=<7,9,13>;
X:=<seq(x[i](t),i=1..3)>;
sys:=diff~(X,t)=M.X+b;
dsolve(sys); #General solution
dsolve({sys,x[1](0)=0,x[2](0)=3,x[3](0)=0}); #Initial values given
dsolve({sys,eval(X,t=0)=<0,3,0>}); #An alternative formulation of the ivp
##### B a matrix:
B:=<b|<1,2,3>|<-1,8,-9>>;
X:=Matrix(3,(i,j)->x[i,j](t));
sys:=diff~(X,t)=M.X+B;
dsolve(sys);
dsolve({sys,eval(X,t=0)=<<0,3,0>|<4,-5,7>|<0,0,0>>});
## A final note: You can get the output in vector or matrix form in all the cases above by doing the following right after the dsolve command:
X=subs(%,X);

@Axel Vogt Why is the given integral equation not
y(t) = 1-h*Int(JacobiTheta3(0,exp(-Pi^2*s))*y(t-s)^4,s = 0 .. t) ?
That seems to me to be implied by his recursion.
The discussion in the previous question posed by Al86 left me somewhat confused, so maybe I'm missing something.


@tomleslie I apologize for editing my comment many times within about one hour yesterday.
If you read the comment as it ended up yesterday after my last edit, I think the reason for the weird behavior is the following:
When using U(x):=uuu; #where uuu is some expression, e.g. x/(1+s^2),
the entry U(x)=uuu is written into the remember table of U.
When using
inttrans:-invlaplace(U(x),s,t) assuming x>0;
the first thing that happens (before evaluation of U(x)) is that x is replaced by another (local) name (think of it as x~).
After that, evaluation of U(x~) takes place. First the remember table is searched, and the index x~ is not found. Since no (general) definition of U has been given, U(x~) just evaluates to itself, i.e. to U(x~).
After that invlaplace gets to work, resulting in U(x~)*Dirac(t).
Just before exiting `assuming`, the local name x~ is replaced by the global x, resulting in the output U(x)*Dirac(t).
Notice that if you evaluate that by doing just
%;
you get uuu*Dirac(t).
###
So is this a bug? In my view it is not. It is due to the deliberate design of `assuming`.
As I also pointed out (and you mention) the mess could easily have been avoided by using U or Ux (or whatever name) instead of U(x).
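A minimal sketch of that remedy, using a plain name Ux instead of the remember-table assignment U(x):

```maple
restart;
Ux := x/(s^2+1):
# With a plain name there is no remember table involved, so the name
# replacement done by `assuming` causes no trouble:
inttrans:-invlaplace(Ux, s, t) assuming x > 0;   # x*sin(t)
```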

I tried the following using the worksheet interface and Maple input (as always):

restart;
U:=-(s^2/(s^2+1)+1/(s^2+1)-1)*exp(sqrt(s)*x/sqrt(phi))/(s^2+1)+exp(-sqrt(s)*x/sqrt(phi))/(s^2+1):
inttrans:-invlaplace(U,s,t) assuming phi>0,x>0;
                       
#Whereas the following, where U(x) is a remember table assignment to the function U:
restart;
U(x):=-(s^2/(s^2+1)+1/(s^2+1)-1)*exp(sqrt(s)*x/sqrt(phi))/(s^2+1)+exp(-sqrt(s)*x/sqrt(phi))/(s^2+1):
inttrans:-invlaplace(U(x),s,t) assuming phi>0,x>0;
                         U(x) Dirac(t)
# I should point out that the latter answer makes no sense since U(x) depends on s.
#Even using simplify on U(x) (as tomleslie does) doesn't change the behavior when done like this:
inttrans:-invlaplace(simplify(U(x)),s,t) assuming phi>0,x>0;
###################
Simpler example:
restart;
U(x):=x/(s^2+1);
inttrans:-invlaplace(U(x),s,t) assuming x>0 ;
                        U(x) Dirac(t)
%;
                        x*Dirac(t)/(s^2+1)
inttrans:-invlaplace(U(x),s,t);
                        x*sin(t)
##The first part of following is OK
restart;
assume(x>0);
U(x):=x/(s^2+1):
inttrans:-invlaplace(U(x),s,t);
                            x~sin(t)
inttrans:-invlaplace(U(x),s,t) assuming x>0; #Again nonsense:
                           x~Dirac(t)
                           ----------
                              2      
                             s  + 1  

# When using assuming (as opposed to assume) I think the x in U(x) is temporarily given another name x~ (or some such thing), and U(x~) is not found in the remember table for U. Thus U(x~) is just an unknown quantity apparently not dependent on s, so the output will first be U(x~)*Dirac(t); but as it exits `assuming`, x~ is replaced by x.

#The comment above has been edited along the way.


@Markiyan Hirnyk I fail to see the relevance to the present situation in the link you give.

@Markiyan Hirnyk You may look up the Picard–Lindelöf Theorem in the excellent book by Philip Hartman, Ordinary Differential Equations, 1964, or any decent book past the most elementary level. Alternatively, take a look at
https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem

@Markiyan Hirnyk I shall make an attempt at an explanation.

The original problem is
ode1:=diff(y(t),t) = f(t,y(t),z(t)); # (1)
eq:=g(t,y(t),z(t)) = 0; # (2)
with y(0)=y0.
For mu>0 consider with h(t) = g(t,y(t),z(t)) the ode
ode3:=-mu*diff(h(t),t) = h(t); # (3)

Suppose that (1) and (3) with y(0)=y0 and z(0) = z0 have a solution on the interval [0,T), where T > 0. This is certainly the case under mild smoothness assumptions. In some cases we may take T = infinity, but not necessarily (see the example below).
We then have from ode3 (which is linear in h) that h(t) = h(0)*exp(-t/mu) for all t in the interval [0,T).
In the case where we may take T = infinity we see that h(t) -> 0 exponentially. For small values of mu, h(t) is soon almost zero, so that the constraint equation eq is (almost) satisfied. Even if T < infinity, we still have that h(t) decreases in absolute value. For a proper choice of mu it may become acceptably small fast enough.

Simple example of system where T cannot be taken to be infinity:
restart;
ode:=diff(y(t),t)=y(t)^2+z(t);
eq:=y(t)-z(t)=0;
dsolve({subs(z=y,ode),y(0)=1}); #Maximal T below is going to be near ln(2)
res:=dsolve({ode,eq,y(0)=1},numeric,method=rkf45_dae); #Just using dae-method in Maple
plots:-odeplot(res,[t,y(t)],0..0.69);
#Taking a somewhat small mu:
mu:=.01:
h:=lhs(eq);
ode2:=-mu*diff(h,t)=h;
res2:=dsolve({ode,ode2,y(0)=1,z(0)=0.8},numeric);
res2(.1); #Constraint (y=z) roughly satisfied at t = .1
plots:-odeplot(res2,[t,h],0..0.7);
h0:=eval(h,res2(0));
plots:-odeplot(res2,[t,h-h0*exp(-t/mu)],0..0.7);


@Markiyan Hirnyk I agree that there is no point (or that it is difficult to see the point) in using a range like t=0..10^300.

Thanks for testing this in Mathematica. I wonder how NDSolve manages to escape the problem of x(t) becoming negative.
(Actually the same holds for Maple's dsolve with method=dverk78 or taylorseries: see below.)
It would be cheating if NDSolve actually ignored Abs, so that the equation became diff(x(t),t)=-x(t). For that equation there is no problem, since the equilibrium solution x(t) = 0 is asymptotically stable. The problem for the original equation diff(x(t),t)=-abs(x(t)) is that the equilibrium solution x(t) = 0 is unstable: more precisely, stable from the right (x>0), but unstable from the left (x<0) in such a way that if x(t1) < 0 for some t1, then x(t) -> -infinity as t -> infinity.
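That instability is easy to check symbolically: for x < 0 we have -abs(x) = x, so the equation reduces to x' = x there (a sketch; the name x1 for the initial value is mine):

```maple
restart;
# For x < 0 the equation x' = -abs(x) reduces to x' = x:
dsolve({diff(x(t),t) = x(t), x(0) = x1});
# x(t) = x1*exp(t): a negative value grows in magnitude exponentially
```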

######
Briefly I tried the various methods in Maple (with default settings otherwise):

M:=[rkf45,ck45,rosenbrock,dverk78,lsode,gear,taylorseries]:
T:=100;
for m in M do
  try
    res:=dsolve({ode,x(0)=1},numeric,method=m);
    print(m,subs(res(T),x(t)));
  catch:
    print(lasterror)
  end try
end do;
#We see that for dverk78 and taylorseries x(t) remains positive when T=100.
The result for dverk78 is pretty good: Very close to evalf(exp(-100)).
Actually we can go higher for those two methods.


@maple fan Since the numerical integrator (with the default method=rkf45 in Maple) starts producing negative values at roughly t=34.4, the reason dsolve doesn't want to continue beyond t=739.5991 is simply that the value of x(t) is huge (and negative):
restart;
ode:=diff(x(t),t)=-abs(x(t));

res:=dsolve({ode,x(0)=1},numeric);
res(1000);
res(739.59941); #  x(t) = -1.35673646011395*10^304
## Using events for finding the "zero" of x(t):
resE:=dsolve({ode,x(0)=1,T(0)=0},numeric,discrete_variables=[T(t)::float],events=[[x(t)=0,T(t)=t]]);
resE(1000);
resE(770.28981);
###
Does Mathematica get into negative values just as Maple does? In other words, is the value found for t = 10^308 (or just before) huge and negative, or is it incredibly small and positive? That, I think, is a more interesting question than when either system decides to call it quits.


In your pdf file you mention at the beginning of Example 3 that Maple has problems with that equation.
However, as mentioned by acer in the link you give to MaplePrimes in your post above:
http://www.mapleprimes.com/questions/203096-Solving-Nonlinear-ODE#answer211682

dsolve handles the equation if the method is specified as (e.g.) rkf45_dae and a consistent initial condition is given for D(y1)(0):

eq1:=diff(y1(t),t)^2+diff(y1(t),t)*(y1(t)+1)+y1(t)=cos(diff(y1(t),t));
subs(diff(y1(t),t)=dydt0,y1(t)=0,eq1);
z0:=fsolve(%,dydt0);
solx:=dsolve({eq1,y1(0)=0,D(y1)(0)=z0},numeric,method=rkf45_dae);

I'm wondering if you are tacitly assuming that your system is autonomous, or that at least the algebraic equations are autonomous, i.e. don't have any explicit t-dependence.
I'm thinking about the initial phase where the switch is 0 so y is constant which supposedly gives z time to find a value consistent with the y-value. This appears to me to be problematic if g(t,y,z) actually depends explicitly on t.
Your examples are all autonomous.

@Markiyan Hirnyk The word 'bifurcate' has a meaning in ordinary nontechnical language. It means "to cause to divide into two branches or parts", see e.g. http://www.merriam-webster.com/dictionary/bifurcate
What happens for the simple system considered for t=Pi/2 where y=1 and z=0 is that locally the solution y(t)=sin(t), z(t)=cos(t) can be continued in two ways on any interval [Pi/2,Pi/2+epsilon], epsilon > 0. Either as the same analytic expressions or as y(t)=1, z(t)=0.
In the ordinary language use of the word 'bifurcate' it is indeed a bifurcation: A dividing into two branches.

Above I used the word 'locally'. As my previously given example illustrated, there are actually infinitely many solutions passing through (y,z) = (1,0) (and also with the restriction that y(0)=0, z(0)=1).
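Assuming the simple system referred to is y'(t) = z(t) together with y(t)^2 + z(t)^2 = 1 (an assumption on my part, but it fits the solutions mentioned), both continuations can be checked directly:

```maple
restart;
sys := {diff(y(t),t) = z(t), y(t)^2 + z(t)^2 = 1}:
# Branch 1: the analytic continuation y = sin, z = cos
simplify(eval((lhs - rhs)~(sys), {y = sin, z = cos}));            # {0}
# Branch 2: the constant continuation, valid for t >= Pi/2
simplify(eval((lhs - rhs)~(sys), {y = (t -> 1), z = (t -> 0)}));  # {0}
```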
