Robert Israel

6577 Reputation

21 Badges

18 years, 208 days
University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

Technically, unbounded functions are not Riemann integrable (improper Riemann integrals are not actually Riemann integrals). For examples of functions that are not Riemann integrable because they are not almost everywhere continuous, you can use functions defined by series. For example, try f(x) = sum(exp(-4*(x-sin(n))^2*n^4), n=1..infinity). This is Lebesgue integrable, with int(f(x), x=-infinity..infinity) = Pi^(5/2)/12 (interchange the sum and integral; Maple won't do that by itself, though). Note that f(x) >= 1 on a dense subset of [-1,1]; if f were almost everywhere continuous, it would be >= 1 almost everywhere on [-1,1] and the integral would be at least 2. Since the integral is less than 2, f must be discontinuous on a set of positive measure.
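A quick sanity check of that value, outside Maple: after interchanging sum and integral, each term is a Gaussian that integrates to sqrt(Pi)/(2*n^2) over the real line, and sum(1/n^2, n=1..infinity) = Pi^2/6 gives Pi^(5/2)/12. A pure-Python sketch of the arithmetic (not Maple code):

```python
import math

# Each term exp(-4*(x - sin(n))^2 * n^4) integrates over the real line to
# sqrt(Pi/(4*n^4)) = sqrt(Pi)/(2*n^2), independent of the shift sin(n).
# Summing over n >= 1 gives sqrt(Pi)/2 * zeta(2) = Pi^(5/2)/12.
partial = sum(math.sqrt(math.pi) / (2 * n**2) for n in range(1, 200000))
exact = math.pi**2.5 / 12

print(partial, exact)  # the partial sum converges to Pi^(5/2)/12, about 1.458
```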
Once you have the integral in the form int(V(mu+s*sigma)*f(s), s=-infinity..infinity), the rest can be justified by the Lebesgue Dominated Convergence Theorem as long as
(1) V is continuous at mu, so that limit(V(mu+s*sigma), sigma=0) = V(mu);
(2) there is some function g(s), integrable on the real line, such that abs(V(mu+s*sigma)*f(s)) <= g(s) for all s and all sufficiently small sigma.
For example, in this case with f(s) = sqrt(2)/(2*sqrt(Pi))*exp(-s^2/2) it suffices to have abs(V(t)) <= exp(k*t^2) for some constant k. [why are "Maple Math" tags not working for my inequalities?]
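To see the limit in action, here is a minimal numerical sketch (pure Python, not Maple; the choice V(t) = t^2, which satisfies the growth bound above, is mine): the integral int(V(mu+s*sigma)*f(s), s) equals mu^2 + sigma^2 exactly, and tends to V(mu) as sigma -> 0.

```python
import math

def f(s):  # the standard normal density f(s) above
    return math.exp(-s**2 / 2) / math.sqrt(2 * math.pi)

def V(t):  # sample choice of V, satisfying abs(V(t)) <= exp(k*t^2)
    return t**2

def integral(mu, sigma, n=4000, L=10.0):
    # midpoint rule for int(V(mu + s*sigma)*f(s), s = -L..L);
    # the tail beyond |s| = 10 is negligible for this integrand
    h = 2 * L / n
    return sum(V(mu + (-L + (i + 0.5) * h) * sigma) *
               f(-L + (i + 0.5) * h) for i in range(n)) * h

mu = 1.5
vals = [integral(mu, sigma) for sigma in (1.0, 0.1, 0.01)]
# exact value is mu^2 + sigma^2, which tends to V(mu) = mu^2 as sigma -> 0
print(vals, V(mu))
```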
The first answer may look sensible, but it's wrong. If u=sqrt(1+v^2-2*v*mu), F(u,v) should become F(sqrt(1+v^2-2*v*mu), v), not F(mu,v). That's supposedly a feature, not a bug, because it's mentioned on the help page: you have to include the option "known = F" to have that substitution done. I suppose the idea is that if the original F(u,v) just stands for "an arbitrary function of u and v", then after the change of variables you still have an arbitrary function of mu and v. It may make some sense in the context of PDE's, I think, but less so in the context of integrals. Now look at the F in your second answer. Somehow it's acquired an extra v. Apparently at some stage there were two different variables named v, one that had the assumption v > 0 placed on it and one that didn't, so both v's were placed in the F. Apparently simplify is given an expression containing both v's, one of which is a variable of integration while the other is a factor of the integrand. Since that factor is not the variable of integration, simplify moves it outside of the integral. This effect is not seen when the calculation is done in two separate stages, even if you use "assuming v > 0" in the first stage as well, because at the end of the first stage assuming removes the temporary assumption v > 0, giving you just one v again. Of course the assumption v > 0 should be irrelevant in this case, since v is a dummy variable of integration over the interval 0..1.
Actually there's a fair amount of classical theory that impinges on this example. First of all, why should the singularities be poles rather than branch points or essential singularities? Well, this is a Riccati differential equation, and Riccati differential equations are of Painlevé type, i.e. their only movable singularities are poles. In fact, Fuchs showed in 1884 that these are the only equations of the form diff(y(t),t) = F(y(t),t), with F rational in y and locally analytic in t, whose only movable singularities are poles. Now a Riccati differential equation
 > Rde:= diff(y(t),t) = a(t)+b(t)*y(t)+c(t)*y(t)^2;
Rde := diff(y(t),t) = a(t)+b(t)*y(t)+c(t)*y(t)^2

can be transformed into a second order linear ODE. Maple uses _a for the dependent variable in this equation; I'll change it to u.
 
> Lde, transf:= eval(convert(Rde,linearODE), _a = u):
  Lde; transf; 
diff(u(t),`$`(t,2)) = (b(t)*c(t)+diff(c(t),t))/c(t)*diff(u(t),t)-a(t)*c(t)*u(t)
{y(t) = -diff(u(t),t)/c(t)/u(t)}

Apart from zeros or singularities of c(t) (which would give fixed rather than movable singularities), the poles of y(t) correspond to zeros of u(t). Thus to find where y(1,w), as a function of the initial value w = y(0,w), has its pole, we can solve the initial value problem consisting of the differential equation for u with u(1) = 0 and u'(1) = 1; the only pole will be at w = -`u'`(0)/(c(0)*u(0)). For example, in the case we were considering:
 > case1:= { a(t) = 1, b(t) = 0, c(t) = 1 }:
  dsolve({eval(Lde, case1), u(1) = 0, D(u)(1) = 1});
u(t) = 1/(cos(1)^2+sin(1)^2)*cos(1)*sin(t)-1/(cos(1)^2+sin(1)^2)*sin(1)*cos(t)
 > pole := eval(-D(u)(0)/u(0),u=unapply(rhs(%),t));
pole := 1/sin(1)*cos(1)

In a less simple case, this could be done using dsolve(...,numeric):
 > dsolve({eval(Lde, case1), u(1) = 0, D(u)(1) = 1}, 
          numeric)(0);
[t = 0., u(t) = -.841471136025837896, diff(u(t),t) = .540302304342763717]
 > eval(-diff(u(t),t)/u(t), %);
.6420924987
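The numerical step can also be reproduced outside Maple. A pure-Python RK4 sketch for case1 (a = 1, b = 0, c = 1, so the linear equation is u'' = -u), integrating backwards from u(1) = 0, u'(1) = 1 and recovering cot(1) ≈ 0.642 as above:

```python
import math

# Integrate u'' = -u from t = 1 back to t = 0 with u(1) = 0, u'(1) = 1;
# the pole of y(1, w) is then at w = -u'(0)/(c(0)*u(0)) with c = 1.
def rk4_step(u, v, h):
    # one RK4 step for the first-order system u' = v, v' = -u
    k1u, k1v = v, -u
    k2u, k2v = v + h/2*k1v, -(u + h/2*k1u)
    k3u, k3v = v + h/2*k2v, -(u + h/2*k2u)
    k4u, k4v = v + h*k3v, -(u + h*k3u)
    return (u + h/6*(k1u + 2*k2u + 2*k3u + k4u),
            v + h/6*(k1v + 2*k2v + 2*k3v + k4v))

u, v = 0.0, 1.0   # u(1) = 0, u'(1) = 1
n = 1000
h = -1.0 / n      # negative step: march from t = 1 down to t = 0
for _ in range(n):
    u, v = rk4_step(u, v, h)

pole = -v / u
print(pole, math.cos(1) / math.sin(1))  # both approx 0.6420926
```

Here the exact solution is u(t) = sin(t-1), so pole = -cos(1)/sin(-1)... i.e. cot(1), matching the symbolic answer.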
I think you misunderstood. The "compact=false" only applies if you are constructing the Matrix using the ToeplitzMatrix command: it prevents ToeplitzMatrix from using a special data structure that saves memory but doesn't preserve the fact that the Matrix is symmetric. If you're using ImportMatrix, that is not a consideration because ImportMatrix doesn't know your Matrix is Toeplitz. On the other hand, if you want to get a Matrix on which the numerical LinearAlgebra routines will be efficient, one thing you can do in the ImportMatrix command is set datatype to float[8]. AFAIK ImportMatrix can't set the shape to symmetric, but if your Matrix is symmetric you can do that with the Matrix command. Thus:
> A := ImportMatrix("myfile", source=Matlab,  
     format=rectangular, datatype=float[8]);
  A := Matrix(A, datatype=float[8],shape=symmetric);
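Since the shape=symmetric declaration is taken on trust, it's worth confirming that the imported data really is symmetric first. As a language-neutral illustration (pure Python, with a made-up 3x3 Toeplitz example; not Maple code):

```python
# Before declaring a symmetric shape (as the Matrix command above does),
# verify that the data really is symmetric within roundoff tolerance.
def is_symmetric(A, tol=1e-12):
    n = len(A)
    return (all(len(row) == n for row in A) and
            all(abs(A[i][j] - A[j][i]) <= tol
                for i in range(n) for j in range(i)))

A = [[2.0, 1.0, 0.5],
     [1.0, 2.0, 1.0],
     [0.5, 1.0, 2.0]]  # a small symmetric Toeplitz matrix

print(is_symmetric(A))  # True
```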
Maple is many things to many people. As a programming language, it has many very nice features. It can be used for an introduction to programming. I have a book, "Introduction to Scientific Programming" by Joseph L. Zachary, that does just that: its first half uses Maple and its second half uses C. But if your main goal is C, I suspect you might be better off starting with C. If you start with Maple, you'll never want to leave it for C.
OK, looking at the paper, I see first that he takes the differential equation (10), diff(beta,s) = sqrt(N/EI)*sqrt(1-k^2*sin(beta)^2), with boundary condition beta(0) = 0 from (11), to get s/r = 1/sqrt(conjugate(N))*F(beta,k), where conjugate(N) = N*r^2/EI. Now (though dsolve for some reason gets a more complicated solution) this must come from treating it as a separable ODE:
> J:= Int(1/sqrt(1-k^2*sin(beta)^2),beta):
  J = simplify(value(J)) assuming beta > 0, beta < Pi/2;
Int(1/sqrt(1-k^2*sin(beta)^2), beta) = EllipticF(sin(beta), k)

So it seems the authors are using a convention different from both Maple and Mathematica if they call that F(beta,k). I checked with www.integrals.com for the Mathematica answer, and confirmed that Mathematica would give F(beta | k^2) for this integral. Anyway, in Maple's notation formula (12) should be

s/r = 1/sqrt(conjugate(N))*EllipticF(sin(beta),k)

Now from (9) and (10), sin(phi/2) = -k*sin(beta), so (3) says

diff(x,s) = cos(phi) = 1 - 2*sin(phi/2)^2 = 1 - 2*k^2*sin(beta)^2

I think the E(beta,k) in (14) should be EllipticE(sin(beta),k) in Maple's notation. To confirm that, we differentiate (and use the differential equation (10)):
> Q:= 1/sqrt(conjugate(N))*(2*EllipticE(sin(beta),k)
-EllipticF(sin(beta),k)):
  simplify(diff(Q,beta)*sqrt(conjugate(N))/r
    *sqrt(1-k^2*sin(beta)^2)) assuming beta>0, beta < Pi/2;
(1-2*k^2+2*k^2*cos(beta)^2)/r
> combine(subs(cos(beta)^2=1-sin(beta)^2,   
     sin(beta)=-sin(phi/2)/k, %), trig);
cos(phi)/r

So it looks like the actual equation to plot (in Maple's notation) is

2*EllipticE(sin(beta),k) - EllipticF(sin(beta),k) = tan(beta)*sqrt(1-k^2*sin(beta)^2)

and for the intervals you suggested we have
> with(plots): Digits:= 15: 
# since evalhf(Digits) gives 14 on my machine
> implicitplot(2*EllipticE(sin(beta),k)
  -EllipticF(sin(beta),k)=tan(beta)*sqrt(1-k^2*sin(beta)^2),
  k = 0 .. 1, beta = 3.5 .. 5.5, view = [0..1, 3.5 .. 5.5]); 
It does indeed show beta as nearly constant over most of this region, although taking a sharp turn downward as k approaches 1.
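That differentiation can also be checked numerically outside Maple. A pure-Python sketch (the sample values k = 0.6, beta = 0.8 are mine), comparing a central difference of 2*E - F, computed by Simpson quadrature in amplitude form, against the closed form (1 - 2*k^2*sin(beta)^2)/sqrt(1 - k^2*sin(beta)^2), which equals (1 - 2*k^2 + 2*k^2*cos(beta)^2)/sqrt(1 - k^2*sin(beta)^2) as above:

```python
import math

k, beta0 = 0.6, 0.8  # sample values chosen for the check

def integrand_F(t): return 1.0 / math.sqrt(1 - k**2 * math.sin(t)**2)
def integrand_E(t): return math.sqrt(1 - k**2 * math.sin(t)**2)

def quad(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = (f(a) + f(b)
         + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n//2 + 1))
         + 2 * sum(f(a + 2*i * h) for i in range(1, n//2)))
    return s * h / 3

def G(beta):  # 2*EllipticE(sin(beta),k) - EllipticF(sin(beta),k), amplitude form
    return 2 * quad(integrand_E, 0, beta) - quad(integrand_F, 0, beta)

h = 1e-5
numeric = (G(beta0 + h) - G(beta0 - h)) / (2 * h)  # central difference
s2 = math.sin(beta0)**2
formula = (1 - 2 * k**2 * s2) / math.sqrt(1 - k**2 * s2)
print(numeric, formula)
```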
Which file is that? Can you post a link to it? EllipticE(beta,k) and EllipticF(beta,k) are complex, not real, for |beta| > 1, so there won't be a curve where your equation is true, just a couple of points. However, I suspect there's confusion here between different conventions for the elliptic functions. Mathematica uses EllipticE[x,y] for what Maple would call EllipticE(sin(x),sqrt(y)). If your formula is using the Mathematica conventions, then what you want is something like this:
> Eq16:= 2*EllipticE(sin(beta),sqrt(k)) 
    -EllipticF(sin(beta),sqrt(k))      
      =tan(beta)*sqrt(1-k^2*sin(beta)^2);
 contourplot(lhs(Eq16)-rhs(Eq16),beta=3.5 .. 5.5,k=0..1,
    contours=[0]);
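The convention difference is easy to check numerically: with parameter m = k^2, Mathematica's amplitude-form integral matches Maple's algebraic-argument form evaluated at sin(x). A pure-Python sketch (the sample values x = 0.7, m = 0.4 are mine):

```python
import math

x, m = 0.7, 0.4   # sample amplitude and Mathematica-style parameter m
k = math.sqrt(m)  # Maple-style modulus

def quad(f, a, b, n=100000):
    # midpoint rule (avoids evaluating 1/sqrt(1-t^2) at the endpoints)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Mathematica's EllipticE[x, m]: amplitude form with parameter m = k^2
trig = quad(lambda t: math.sqrt(1 - m * math.sin(t)**2), 0, x)

# Maple's EllipticE(sin(x), k): algebraic form with modulus k
alg = quad(lambda t: math.sqrt(1 - k**2 * t**2) / math.sqrt(1 - t**2),
           0, math.sin(x))

print(trig, alg)  # agree: EllipticE[x, m] == EllipticE(sin(x), sqrt(m))
```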
Smarter, you say? Then I hope the result of coulditbe(mU > 1-u) is false, not true... Actually, it is; but if you've already tried coulditbe(mU > 1-u) before setting _EnvTry to hard, you need
> forget(coulditbe);
otherwise it will just remember the FAIL value. BTW, _EnvTry is documented in the help page ?assume:
The environment variable _EnvTry can be used to specify the intensity of the testing by the is and coulditbe routines. Currently _EnvTry can be set to normal (the default) or hard. If _EnvTry is set to hard, is and coulditbe calls can take exponential time.