The NAG quadrature solvers such as _d01amc have hard-coded weights that are only suitable for a double-precision approximation of each evaluation of the integrand. That's why evalf/Int won't let one use those methods with Digits greater than evalhf(Digits), or with the option `digits=d` for `d` that high.

On the surface it may seem as if this always presents an insurmountable wall for integrands with subexpressions which may evaluate outside the double-precision range, or for integrands which suffer from severe roundoff error.

Also, when these methods were first introduced the evaluations of the integrand (a callback from NAG to Maple) were only allowed in `evalhf` mode.

But it has been possible (since I forget when... Maple 9? 10?) to use a non-evalhf'able procedure as the integrand with the three NAG univariate quadrature methods. Such a procedure is free to manage Digits internally.

So two things can then happen for such a procedure: roundoff error in the individual integrand approximations can be reined in, and values outside the usual 1e-308 to 1e308 double-precision range can be handled.
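As a minimal illustration of that idea (the integrand and the Digits value here are arbitrary choices of mine, not part of the original problem): because Digits is an environment variable, assigning to it inside the procedure affects only that call, and the NAG solver still sees a double-precision result.

```
# exp(-exp(x))*exp(x) underflows double precision for moderate x,
# yet the integral over 0..infinity equals exp(-1) exactly.
f := proc(x)
  Digits := 50;        # environment variable: raised precision is local to this call
  exp(-exp(x))*exp(x); # evaluated in software floats, so no underflow to 0
end proc:
evalf(Int(f, 0..infinity, ':-method'=':-_d01amc', ':-digits'=15));
```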

Having said that, it's often a good idea to keep the working precision of the rest of the solver as high as it can go (option digits=15), while optionally allowing the tolerance (epsilon) to be relaxed.

This seems enough to get quick results for the various parameter pairs given. It won't solve every numerically hard problem, but it seems adequate here.

The following results appear to match what others found (even with alternate, slow methods). But they compute quickly.

restart:
expr := exp(-x)^j*(1-exp(-x))^(4-j)*beta^alpha
        /GAMMA(alpha)*x^(alpha-1)*exp(-beta*x):
G := proc(A, B, J, ee, {eps::float:=1e-14})
  local f;
  f := subs(__DUMMY = eval(ee, [alpha=A, beta=B, j=J]),
            proc(x) Digits := 50; __DUMMY; end proc);
  forget(evalf); forget(`evalf/int`);
  evalf(Int(f, 0..infinity, ':-digits'=15, ':-epsilon'=eps,
            ':-method'=':-_d01amc'));
end proc:
CodeTools:-Usage(
seq( 'G'( 900, 50, j, expr ), j=0..4 )
);
memory used=1.97KiB, alloc change=0 bytes, cpu time=0ns, real time=1000.00us, gc time=0ns
0.999999927237869, 1.81905311687842*10^(-8), 4.25712142901189*10^(-21),
2.19976264687971*10^(-27), 1.13667316831278*10^(-33)
CodeTools:-Usage(
seq( 'G'( 45.2895968537810, 6.34934081467530, j, expr ), j=0..4 )
);
memory used=2.22KiB, alloc change=0 bytes, cpu time=0ns, real time=0ns, gc time=0ns
0.994712351662338, 0.00131579418350684, 0.00000406276781217926,
2.42436962707272*10^(-8), 2.45623513060466*10^(-10)
CodeTools:-Usage(
seq( 'G'( 0.1, 50, j, expr, eps=1e-9 ), j=0..4 )
);
memory used=2.29KiB, alloc change=0 bytes, cpu time=0ns, real time=0ns, gc time=0ns
9.76825547933535*10^(-8), 0.00000158951838518320,
0.0000389398475682795, 0.00185661752266041, 0.992333435120739
## This will force 100 calls to G, taking 11.75 sec total on my i7
#CodeTools:-Usage(
# plot( 'G'( a, 50, 2, expr ), a=0.1..100, adaptive=false, numpoints=100 )
# );
## This will force 100 calls to G, taking 9.27 sec total on my i7
#CodeTools:-Usage(
# plot( 'G'( a, 6.35, 2, expr ), a=40..50, adaptive=false, numpoints=100 )
# );

The only fly in the ointment I saw was the high j=4 case, in the original example with alpha=0.1, beta=50. I found that case could be handled quickly, even at epsilon=1e-10, by a change of variables turning the range into t=0..1.
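A sketch of what I mean, assuming the substitution x = t/(1-t) (any monotone map of t=0..1 onto x=0..infinity would do; the name `H` and the defaults are my own). The Jacobian dx = dt/(1-t)^2 comes along, and the finite-range NAG method _d01ajc then applies:

```
H := proc(A, B, J, ee, {eps::float:=1e-10})
  local f;
  # substitute x = t/(1-t) and multiply by the Jacobian 1/(1-t)^2
  f := subs(__DUMMY = eval(ee, [alpha=A, beta=B, j=J, x=t/(1-t)])/(1-t)^2,
            proc(t) Digits := 50; __DUMMY; end proc);
  forget(evalf); forget(`evalf/int`);
  evalf(Int(f, 0..1, ':-digits'=15, ':-epsilon'=eps,
            ':-method'=':-_d01ajc'));
end proc:
```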

Lastly, I hard-coded Digits=50 into that procedure `G`. But it could alternatively be made an option of `G`, if it needed increasing.
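One way to do that (a sketch; the `dig` keyword name and its default are my own choices):

```
G := proc(A, B, J, ee, {eps::float:=1e-14, dig::posint:=50})
  local f;
  # inject both the prepared integrand and the requested working precision
  f := subs([__DUMMY = eval(ee, [alpha=A, beta=B, j=J]), __DIG = dig],
            proc(x) Digits := __DIG; __DUMMY; end proc);
  forget(evalf); forget(`evalf/int`);
  evalf(Int(f, 0..infinity, ':-digits'=15, ':-epsilon'=eps,
            ':-method'=':-_d01amc'));
end proc:
```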

acer