That is exactly what I had in mind -- I just did not have the time to code it up before I went off on vacation.
You should turn that post into a 'blog' post, or even into something to put into the 'collaborative book', as this is a really nice use of many different parts of Maple (and mathematics) together to give a solution. This method ought to be better known.
[Also, it would be interesting to figure out why the coefficients of the Pade approximants drop off quickly to the 10^(-7) level and then rise again!]
The case with T=0 can be done directly:
assume(b>0, T>0, a>0, w>0);
ii := int(exp(-b*sqrt(w))/sqrt(a^2+w^2), w = T .. infinity):
simplify(eval(ii,T=0));
will return
1/4*1/Pi^2*2^(1/2)*MeijerG([[1/2], []],[[3/4, 1/2, 1/4, 0, 0], []],1/256*b^4*a^2)
The best simplification I can come up with is
i2 := value(student[intparts](ii,exp(-b*sqrt(w)))):
i3 := student[changevar](sqrt(w)=z,i2,z);
which gives
i3 := (-ln(T+(a^2+T^2)^(1/2))+ln(a))/exp(T^(1/2)*b)-int(-b*exp(-b*z)*arcsinh(1/a*z^2),z = T^(1/2) .. infinity)
Of course, one can shift this to be from 0..infinity via another change of variables, but that does not seem to help.
For non-linear ODEs, in general the problem is extremely complex, and I am not sure if anyone has really thought about it a lot. The case which is most complex is of course when the singularities are so-called "moving singularities", in that they depend on the initial conditions as well as the ODE itself.
I will assume you only have non-linear ODEs -- for linear ones, the singularities are fairly easy to find, even symbolically (they are the zeroes of the leading coefficient, and some of the zeroes of the trailing coefficient).
Empirically then, the one method which I can think of is to find a Taylor series expansion for your ODE (this may be done symbolically even for rather complicated ODEs). From the Taylor series, you can then find what looks like the closest singularity. There are various methods for the latter, but Pade approximants (probably in a Chebyshev basis) seem to be a good idea -- the numapprox package in Maple has the required functionality.
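Since the idea itself is language-agnostic, here is a minimal sketch of it in Python rather than Maple. Everything here is my own illustration: the test function f(x) = exp(x)/(1-2x) (whose nearest singularity is at x = 1/2), the [2/2] Pade order, and the use of Cramer's rule for the tiny linear system.

```python
import math

# Estimate the nearest singularity of a function from its Taylor coefficients
# via a Pade approximant.  Illustrative target: f(x) = exp(x)/(1 - 2x),
# whose closest singularity is at x = 1/2.

# Taylor coefficients of exp(x)/(1-2x): c_n = sum_{k<=n} 2^(n-k)/k!
c = [sum(2 ** (n - k) / math.factorial(k) for k in range(n + 1))
     for n in range(5)]

# The denominator 1 + q1*x + q2*x^2 of the [2/2] Pade approximant satisfies
#   c2*q1 + c1*q2 = -c3,   c3*q1 + c2*q2 = -c4   (solved by Cramer's rule).
det = c[2] * c[2] - c[1] * c[3]
q1 = (-c[3] * c[2] + c[4] * c[1]) / det
q2 = (-c[4] * c[2] + c[3] * c[3]) / det

# The poles of the approximant (roots of the denominator) estimate the
# singularities; the root closest to the origin is the nearest one.
disc = math.sqrt(q1 * q1 - 4 * q2)
pole = min((-q1 - disc) / (2 * q2), (-q1 + disc) / (2 * q2), key=abs)
```

Even at this low order, the pole estimate lands within about 10^(-3) of the true singularity at 1/2; Maple's numapprox package does the same job with much better-conditioned (Chebyshev) machinery.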
As far as finding particular singularities in a particular direction, the only thing I can think of is to do a conformal transformation so that the ones you are looking for will be closest to zero.
I don't know if this is the best solution, but it ought to work. One way is to rescale t so that 0..infinity becomes 0..1 using (say) an arctan or an x/(1-x^2) type of transformation. Then you can give your conditions as conditions at 0 and 1. PDEtools[dchange] can do the change of variables in ODEs.
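As a quick numeric sanity check of the rescaling idea, here is a sketch in Python. The particular map t = x/(1-x) and the test problem y'(t) = -y(t), y(0) = 1 are my own choices for illustration; the point is only that solving on 0..1 in the new variable reproduces the solution on 0..infinity.

```python
import math

# Map t in [0, infinity) to x in [0, 1) via t = x/(1-x), so dt/dx = 1/(1-x)^2.
# The rescaled ODE for y' = -y is then dy/dx = -y/(1-x)^2.

def solve_on_unit_interval(x_end=0.5, n=200000):
    """Forward Euler for dy/dx = -y/(1-x)^2 with y(0) = 1."""
    y, x, h = 1.0, 0.0, x_end / n
    for _ in range(n):
        y += h * (-y / (1.0 - x) ** 2)
        x += h
    return y

# x = 1/2 corresponds to t = 1, so the result should be close to exp(-1).
```

The same chain-rule bookkeeping is what the change-of-variables routine does for you symbolically, including whatever condition you impose at x = 1 (i.e., t = infinity).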
With Firefox, and a large enough window in which to do the editing, I have never witnessed this kind of weird behaviour. My first suspicions would be that your browser is odd (i.e. not standards compliant), or that your window is too small.
For this sort of problem, Maplesoft technical support will be in a much better position to help you.
The Fortran output is Fortran 77, as you noticed. It would make a lot of sense to have Fortran 90 and 95 options too. Actually, these days defaulting to Fortran 90 might be wiser than defaulting to 77.
In fact, I believe that the CodeGeneration package is written in such a way and with sufficient documentation that it might be possible for an enterprising 3rd party to do this themselves.
The problem appears to be that singular only checks for "surface singularities" and does not also look for points where the inner pieces are undefined. To be precise, the bug is in the case of expressions of the form (expr)^y where y is known to be less than 0: singular does not recurse on its inputs, but relies only on solve to find singularities, which is wrong.
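To see concretely what "recursing on its inputs" would mean, here is a toy sketch in Python. The miniature expression type (with linear bases only) is entirely my own invention for illustration; Maple's internals look nothing like this.

```python
# Toy illustration of the missing recursion: to find the singularities of an
# expression, descend into subexpressions rather than solving only at the top
# level.  Expressions are tuples: ('lin', a, b) means a*x + b, ('add', e1, e2)
# is a sum, and ('pow', base, n) is a power with integer exponent n.

def singular_points(e):
    """Collect points where some subexpression base^n (n < 0) blows up."""
    tag = e[0]
    if tag == 'lin':
        return set()
    if tag == 'add':
        return singular_points(e[1]) | singular_points(e[2])
    if tag == 'pow':
        base, n = e[1], e[2]
        pts = singular_points(base)      # the recursion 'singular' is missing
        if n < 0 and base[0] == 'lin' and base[1] != 0:
            a, b = base[1], base[2]
            pts.add(-b / a)              # the base a*x + b vanishes at -b/a
        return pts
    raise ValueError(tag)

# (1/(x - 1) + 2)^(-3): a top-level-only check has no linear base to solve,
# but recursing into the inner 1/(x - 1) still finds the singularity at x = 1.
expr = ('pow', ('add', ('pow', ('lin', 1, -1), -1), ('lin', 0, 2)), -3)
```

The top-level-only behaviour (return the empty set for expr) is exactly the bug: the inner point of undefinedness at x = 1 is silently missed.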
Using the simplest equations, I get
> assume(N>0,El>0,k>0);de := diff(beta(s),s) = sqrt(N/El)*sqrt(1-k^2*sin(beta(s))^2):
> dsolve(de);
1/2*(2*s*(N/El)^(1/2)*sin(beta(s))*%1^(1/2)
     + (2-2*cos(2*beta(s)))^(1/2)*(-%1/(-1+k^2))^(1/2)
       *EllipticF(cos(beta(s)), (k^2/(-1+k^2))^(1/2))
     + 2*_C1*(N/El)^(1/2)*sin(beta(s))*%1^(1/2))
  / ((N/El)^(1/2)*sin(beta(s))*%1^(1/2)) = 0

%1 := 4 - 2*k^2 + 2*k^2*cos(2*beta(s))
> dsolve({de,beta(0)=0},beta(s));
beta(s) = RootOf(
    2*s*(N/El)^(1/2)*sin(_Z)*(4-2*k^2+2*k^2*cos(2*_Z))^(1/2)
    + (2-2*cos(2*_Z))^(1/2)*(-(4-2*k^2+2*k^2*cos(2*_Z))/((k-1)*(k+1)))^(1/2)
      *EllipticF(cos(_Z), (k^2/((k-1)*(k+1)))^(1/2))
    - 2*sin(_Z)*(4-2*k^2+2*k^2*cos(2*_Z))^(1/2)*(-1/((k-1)*(k+1)))^(1/2)
      *EllipticK((k^2/((k-1)*(k+1)))^(1/2)))
which contain elliptic integrals, as required. Now, the 'form' of these integrals is not the best, but that is a known issue. See
this paper which explains how to deal with a similar problem leading to elliptic integrals and Maple weaknesses with handling it directly.
If you want to execute a 'next' on several levels of loops, your best bet is to use try-catch and coded errors. That way you can "escape" a nest of loops to whatever level you want, and the catch portion of the loop can simply be a 'next'.
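The same trick reads naturally in Python, where raising an exception plays the role of the coded error (the exception name and the loop bounds below are my own illustration):

```python
class ContinueOuter(Exception):
    """A coded 'error' used purely for flow control, as in Maple try-catch."""

results = []
for i in range(3):
    try:
        for j in range(3):
            for k in range(3):
                if j == 1 and k == 0:
                    raise ContinueOuter   # escape two loop levels at once
                results.append((i, j, k))
    except ContinueOuter:
        continue   # the 'catch' portion is simply a 'next' on the outer loop
```

Because the raise unwinds straight to the matching except, you choose how many loop levels to escape simply by where you place the try.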
I am not aware of any such systematic bugs that would cause blanket wrong answers all over. It must be due to the specifics of the problems you were trying. Without further details, I doubt anyone will be able to really help. So how about posting some code that we can experiment with?
When you run a lot of computations in a long interactive session, you will notice that Maple allocates more and more memory. Some of it is used for computations, which is fine. Some is used for storing results of previous computations, some is occupied by parts of the Maple library being loaded up, some by remember tables.
Maple fundamentally uses hash tables for everything internally. This means that, when Maple works well, Maple objects are distributed uniformly throughout all memory. When the allocated memory gets large enough, with objects distributed throughout, large computations cause massive paging: first in and out of the cache, a slowdown of a factor of 100, and then eventually to disk, another factor of 100 on top of that.
My guess is that you are witnessing this first factor of 100, where most computations cause cache stalls, which means that 15-minute computations can turn into 1500-minute computations (25 hours!). By that point there is enough memory allocated for the computation (hence the "steady state" you are witnessing), so it is all CPU time and cache stalls.
Actually, it is not even that simple. Things get even more complex once you throw in garbage collection, which is triggered mostly by memory *used* rather than by memory allocated, but takes time proportional to memory allocated (assuming you are doing mostly symbolic computations; things get even more complex when you have large numeric rtables).
Note that you don't need to reboot your PC. All you need to do is to quit Maple and relaunch it, that should be enough.
Of course, that all assumes that what is going on is due to the computations (in the kernel), which is consistent with what you describe. If your computations keep doing incremental output, then the issue could be with the GUI. To test this, you should try the same computations in Classic (or the TTY) and see if the same thing happens.
First, do not use 0.5 to mean 1/2. To Maple, 1/2 is exact, and 0.5 is some weird floating point approximation - and these mean completely different things.
Second, gamma means something to Maple, use another name instead (gam below).
Third, since you know your exponent is positive, you only really need the expansion of the inside, so you can do:
restart;
assume(v>0, g>0, sigma2>0, t>0, c>0, a>0, eps>0, xbar>0);
u := c+g*(-eps+I*v):
gam := 1/2*sqrt(a^2+2*u*sigma2):
CCP1 := (gam*exp(1/2*a*t)/(gam*cosh(gam*t)+1/2*a*sinh(gam*t)))^(2*a*xbar/sigma2):
ans := MultiSeries[asympt](evalc(op(1,CCP1)), v, 2): # op(1, ...) to ignore the exponent
ans2 := evalc(MultiSeries:-asympt(evalc(ans), v, 1)):
map(simplify, ans2);
exp(-sigma2^(1/2)*g^(1/2)*t*v^(1/2)/2)
  * (2*cos(%1)*cosh(a*t/2) + 2*cos(%1)*sinh(a*t/2)
     + O(-1/2*sin(%1)*(2*sinh(a*t/2)*eps*g*sigma2*t - 2*cosh(a*t/2)*c*sigma2*t
         - cosh(a*t/2)*a^2*t + 2*cosh(a*t/2)*eps*g*sigma2*t - 4*sinh(a*t/2)*a
         - 2*sinh(a*t/2)*c*sigma2*t - 4*cosh(a*t/2)*a - sinh(a*t/2)*a^2*t)
        /(v^(1/2)*sigma2^(1/2)*g^(1/2))))
- 2*I*exp(-sigma2^(1/2)*g^(1/2)*t*v^(1/2)/2)*sin(%1)*(cosh(a*t/2)+sinh(a*t/2))

%1 := sigma2^(1/2)*g^(1/2)*t*v^(1/2)/2
The leading term is split into 2 pieces in the above. Do
eval(%, O=0)
to see it more plainly.
Nevertheless, even when you use 1/2 and gam, the errors about sorting exponents are still there, even though it is easy to see which of the two exponents is smaller -- it seems that MultiSeries uses too weak a check for ordering.
[One puzzling thing: when Classic displays the above formula, it does not use %1, but when I cut and paste, it does. That is really strange. I am guessing that there are 2 separate pretty-printers at work there!]
Sounds like you need the (really, really, really) old command called 'writeto', which does just that. It is usually used as
writeto('myoutfile');
some_big_computation();
writeto(terminal);
'terminal' is a special keyword to redirect output back to the worksheet.
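A rough Python analogue of the same pattern (my analogy only, unrelated to Maple's implementation) is contextlib.redirect_stdout, which redirects output for the duration of a block and restores it afterwards:

```python
import io
from contextlib import redirect_stdout

# Capture all printed output while a computation runs, the way
# writeto('myoutfile') sends Maple output to a file.
buf = io.StringIO()
with redirect_stdout(buf):              # like writeto('myoutfile')
    print("some_big_computation output")
# leaving the block restores normal output, like writeto(terminal)
captured = buf.getvalue()
```

Redirecting to an open file object instead of a StringIO buffer gives the file-based behaviour of writeto directly.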
You really ought to try asympt first, assuming you have an expression. That will give you the exp part. Then you can eliminate that and probably use Pade approximation to get a good handle on the rational part.