Axel Vogt

5936 Reputation

20 Badges

20 years, 259 days
Munich, Bavaria, Germany

MaplePrimes Activity


These are replies submitted by Axel Vogt

Without stepping into your physical language, I will see what can be done with your a in higher precision.

Then you have something like what I guessed; it can occur if you zoom in quite closely on a range.

This is a nice picture for ln( 1 - x) / x:  www.peterstone.name/Maplepgs/images/Jagged_graph.gif

Difficult to see / recognize, but for me such pictures (jumping between levels) typically occur if computations are made at the limit of double precision and are then scaled by some factor(s) afterwards.
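That "jumping between levels" can be reproduced outside Maple as well; here is a small sketch (in Python only because the effect is plain IEEE double arithmetic, nothing Maple-specific), evaluating ln(1 - x)/x naively for tiny x, where 1 - x is already rounded before the logarithm ever sees it:

```python
import math

# tiny arguments near the resolution limit of a double around 1.0
xs = [k * 1e-17 for k in range(1, 51)]

naive  = [math.log(1.0 - x) / x for x in xs]   # 1 - x rounds first -> jagged levels
stable = [math.log1p(-x) / x for x in xs]      # log1p avoids forming 1 - x

# the true value of ln(1 - x)/x is ~ -1 for these x;
# the naive variant jumps between plateaus, the stable one does not
print(max(abs(v + 1.0) for v in naive))    # large deviation: the 'jagged graph'
print(max(abs(v + 1.0) for v in stable))   # tiny deviation
```

This is the same mechanism as in the linked Jagged_graph.gif picture: the rounding of 1 - x quantizes the input, and any scaling afterwards just magnifies the steps.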

Maybe I will find some time this evening, after work.

Hm ... I said that since this (in your case) should give you the results
with a *relative* error smaller than 10^-(Digits-1), and in a quite
fast way.

And I am not used to Matlab at all, so I cannot judge the quality of
32 decimal places (beyond speed); I always thought that it usually works in
double precision.

Maple switches at this point and has a messy weakness in some cases.

There are several ways.

You can choose a special method; look up evalf ---> int (there you find the
stuff I used to enforce the compiled integration routines from the NAG libraries).

In your case you have something like pdf * power as the integrand (which may
need splitting in extreme cases, as I 'said').

This can or should be done with the double exponential method (see the help,
you will find it at the same location).

If you were not on Unix but on Windows (which I guess from your '15'
for Digits), then you could also use a solution I once posted to do all
that with ~ 100 Digits (it uses a Windows DLL for that, double exponential
method). Or call Pari (even with variable precision).

But I would try it in Maple first; certainly it is doable, and I would start
with Int( fct, x= ..., method = _Dexp); evalf[n](%); and do that using the
reduced form I posted to reduce the computational effort (let me know if I
should rephrase it to have the formula handy).
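For readers without the Maple/NAG setup: the double exponential (tanh-sinh) method behind _Dexp is only a change of variables plus the trapezoid rule, and can be sketched in a few lines. Python here purely for illustration; the node count and step size are ad-hoc choices of mine, not what Maple uses internally:

```python
import math

def tanh_sinh(f, n=40, h=0.1):
    """Double exponential (tanh-sinh) quadrature of f over (-1, 1).

    Substituting x = tanh((Pi/2)*sinh(t)) makes the transformed integrand
    decay double-exponentially in t, so the plain trapezoid rule converges
    very fast, even for endpoint singularities."""
    total = 0.0
    for k in range(-n, n + 1):
        u = 0.5 * math.pi * math.sinh(k * h)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(k * h) / math.cosh(u) ** 2
        if 1.0 - x * x == 0.0:
            continue  # node rounded to +-1; its weight is below machine epsilon
        total += w * f(x)
    return h * total

# endpoint-singular example: Int(1/sqrt(1-x^2), x = -1..1) = Pi
print(tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x)))
```

The endpoint singularity is absorbed by the double-exponentially decaying weights, which is exactly why this method suits the pdf * power integrands discussed above.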

What is the range of your parameter a?

And: why do you want it in higher precision? Seems I missed that ...

Axel
One can code Fibonacci numbers (in double precision) with some 'cheating', using
a remember table (there are only a small number of values):

  G := proc(a::Vector(datatype=integer[4]),b::Vector(datatype=float[8]),n,
    rememberTable::Vector(1600,datatype=float[8]), rememberFlag::integer[4])
  option autocompile; 
  local phi,t5,i, k,r;
  t5:=1.0/sqrt(5.0); phi:=(sqrt(5.0)+1.0)/2.0;
  
  if(rememberFlag = 1 or 1000 < n) then
    for k from 1 to 1476 do
      rememberTable[k]:=phi^k*t5 + 0.5;
      rememberTable[k]:= rememberTable[k] - frac(rememberTable[k]);
    end do;
  end if;

  for i from 1 to n do
    b[i]:=rememberTable[a[i]];
  end do;
  NULL;
  end proc: maplemint(%);
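The same remember-table idea can be checked outside the Compiler; a small Python sketch (Python only for easy testing; the names and the default size are mine). One caveat I noticed: the direct power phi^k appears to leave the IEEE double range around k = 1475, slightly before F(k) itself outgrows a double, so this sketch stops at 1474:

```python
import math

def build_fib_table(nmax=1474):
    """Precompute Fibonacci numbers as doubles via Binet's formula.

    The second Binet term is below 0.5 for n >= 1, so adding 0.5 and
    taking the floor (t - frac(t) in the Maple code above) recovers F(n).
    NOTE: phi**k overflows a double near k = 1475, a bit before F(k)
    does, hence the default nmax of 1474 here."""
    s5 = math.sqrt(5.0)
    phi = (1.0 + s5) / 2.0
    table = [0.0] * (nmax + 1)
    for k in range(1, nmax + 1):
        t = phi ** k / s5 + 0.5
        table[k] = math.floor(t)
    return table

table = build_fib_table()
# lookups are then just table[n], as in the compiled procedure G above
```

Near the top of the table the doubles are of course only approximations of the true Fibonacci numbers; they are exact only while F(n) fits the 53-bit mantissa.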

Then one gains 'speed' again (comparing only after compiling
and generating all the data arrays):

  N:= 10^7: # that is smaller than the maximal int ~ 2*10^9 in C
  A:= LinearAlgebra:-RandomVector(N,generator=1..1000, # start in 1, not in 100
        outputoptions=[datatype=integer[4]]):
  B:= Vector(N,datatype=float[8]):
  
  #forget(f, reinitialize = true); 
  gc(); t,tr:=time(),time[real]():
  f(A,B,N):
  min(time()-t,time[real]()-tr);
  
  gc(); t,tr:=time(),time[real]():
  f1(A,B,N):
  min(time()-t,time[real]()-tr);
  
  gc(); t,tr:=time(),time[real]():
  g(A,B,N):
  min(time()-t,time[real]()-tr);
  
  arr:= Array( 0 .. 11, 0., datatype = float[ 8 ] ): 
  # for local storing binary representations
  gc(); t,tr:=time(),time[real]():
  h(A,B,N, arr):
  min(time()-t,time[real]()-tr);
  
  remTab:= Vector(1600, 0., datatype = float[ 8 ] ):
  # remember table for all the Fibonacci as doubles
  gc(); t,tr:=time(),time[real]():
  G(A,B,N, remTab, 1):
  min(time()-t,time[real]()-tr); 


                                2.531
                                1.953
                                1.813
                                9.734
                                0.297

However, in the last case (with a remember table) this seems slow to me.
Could it be that this is caused by the way the generated DLL does its
communication with the 'array' defined in Maple (type checking or so)?

Or is my 'feeling' just wrong: accessing a table in G is only ~ 6 times
faster than 1 powering + 1 division + 2 additions + 1 function call (for
the fractional part), as done in g, and all the time is consumed
by the long loop?
That seems to depend on the machine quite heavily; mine is a bit oldish:
Win XP SP2 (= 32 bit), AMD Athlon 3500+ at 2.2 GHz (= single CPU) and
1 MB memory, where I use Maple 12 (classic interface as standard, of course).

Acer's code (i.e. Binet's formula) needs 3.219 sec.

I modified it a bit (to avoid one powering)

  phii:=phi^a[i];
  b[i]:=(phii + (2*irem(a[i],2) -1)/phii)/s5; 

which results in 2.531 sec ( n -> (-1)*(2*irem(n,2) -1) is certainly not
the fastest way to write (-1)^n as +-1).

Incidentally, I played with it last week. It is known that the 2nd term
in Binet's formula can be ignored: it becomes very small, and it is enough
to take floor() of the first term (with a little tweak if we are beyond
the range of compiled integers):

  b[i]:=phi^a[i]/s5 + 0.5;
  b[i]:= b[i] - frac(b[i]); # instead of floor

Then it is a little bit faster, needing 2.375 sec.


I also played with Maple's code (I think it is a recursion formula due
to Dijkstra), which showed a bug in the Compiler; the following works:

> h := proc(a::Vector(datatype=integer[4]),b::Vector, k, 
>   bin :: Array(0 .. 11, datatype = float[ 8 ] ))
> option autocompile; 
> local j, n,
>   i::integer, p::integer, f1::float, f2::float, s1::float, s2::float, s3::float;
> for j from 1 to k do
>   n:=a[j];
> ############# using Fibo from above ... or code it
>   if n = 0 then
>     return 0
>   end if;
>   p := n;
>   for i from 0 do
>     if( p=0) then break end if;
>     bin[i]:= irem(p,2);
>     p := iquo(p,2);
>   end do;
>   f1 := 0;
>   f2 := 1;
>   for i from i-2 by -1 to 0 do
>     s1 := f2^2;
>     s2 := 2*f2*f1;
>     f1 := s1+f1^2;
>     f2 := s1+s2;
>     if bin[i] = 1 then
>       s3 := f2+f1;
>       f1 := f2;
>       f2 := s3
>     end if
>   end do;
>   f2;
> #############
>   b[j]:= f2;
>   #  b[i]:=(phi^a[i]-(-phi)^(-a[i]))/s5;
> end do;
> NULL;
> end proc: maplemint(%);

The middle part is what one can extract (with modifications) from Maple.

Now take arr := Array( 0 .. 11, 0., datatype = float[ 8 ] ) as the working array
(1476 is the maximal integer n for which fibonacci(n) is an IEEE double); then the
corresponding call needs 10.7 sec on my machine:

  t,tr:=time(),time[real]():
  N:=10^7:
  A := LinearAlgebra:-RandomVector(N,generator=100..1000,
                  outputoptions=[datatype=integer[4]]):
  B:=Vector(N,datatype=float[8]):
  
  h(A,B,N, arr): # results are in B
  min(time()-t,time[real]()-tr);

However, one should note that Dijkstra's approach returns *exact* values,
while a compiled version will never do that for n beyond ~ 80 (in IEEE it
would be smaller, depending on the mantissa and FPU usage).

So this is a good example where original Maple code cannot beat a compiled
solution - but only w.r.t. CPU time, not w.r.t. exactness.
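For comparison, the doubling recursion itself, run on exact (arbitrary-precision) integers, stays exact for any n. A compact Python sketch of the same F(2k)/F(2k+1) idea (recursive rather than the bit-loop used in h above):

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by the doubling recursion:

        F(2k)   = F(k) * (2*F(k+1) - F(k))
        F(2k+1) = F(k)^2 + F(k+1)^2

    Exact for any n, since Python integers are arbitrary precision."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)
    c = a * (2 * b - a)      # F(2k)
    d = a * a + b * b        # F(2k+1)
    return (d, c + d) if n & 1 else (c, d)

print(fib_pair(100)[0])  # exact, although F(100) no longer fits a double exactly
```

Only O(log n) multiplications are needed, which is the same reason the compiled bit-loop version is fast.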
What you have been shown is, for my taste, not so much about Maple and its
evalf; it is more about numerics: presentation and errors.

sin(2*10^k*Pi + y) and sin(y) are the same if you can work modulo 2*Pi on
the input of the sine, i.e. in a symbolic setting.

But (2*10^k*Pi+y) and evalf( (2*10^k*Pi+y) ) are not the same, and if you
want modulo 2*Pi you need extra digits:

Say we work with Digits = 10. Then your y actually does not mean 0.2, but
0.2/1.0 = 0.2000000000, and you want sin(y) with 10 digits after the decimal point.

Now 2*10^k*Pi has 6 digits before the decimal point (taking length(round(.))
of it), and if you set Digits := Digits + 6 then you have enough to do what
you desire (in Maple I think 1 less would be enough).

You can do your own experiment by choosing y:=1e-5 and try it.

Mainly it works like this, with y := 1e-5:

For 10 Digits you have

xxxxxx.aaaa # = 628318.5307 = 2*10^k*Pi+y
xxxxxx.aaaa # = 628318.5307 = 2*10^k*Pi

So subtracting gives 0, but you expect your y.

Now switching to the suggested Digits = 16:

xxxxxx.aaaa279586 # = 2*10^k*Pi+y
xxxxxx.aaaa179586 # = 2*10^k*Pi

Now enough places survive (note also that the last decimal place
is not guaranteed to be 'correctly' rounded in lower precision, as
one can always increase Digits).

Using sin on that just obfuscates the fact (if the input is
small, then sin is more or less the identity (to first order), so it
only modifies the problem a bit).

Do the same for y := evalf[10](exp(1))*1e-5.

You get the 10 correct decimal places after the decimal point (but
the result is not correct to 10 significant places; do not get trapped
by the leading 4 zeros in the representation ...)

In general, if you subtract two terms of 'equal size', you have to
double the digits if you really want to be sure that their difference
has an error smaller than 1/10^Digits.
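The whole effect can also be seen with plain IEEE doubles (~16 decimal digits), mirroring the 10-vs-16 digit displays above; a small Python sketch (Python only because the arithmetic is generic, not Maple-specific):

```python
import math

big = 2 * 10**5 * math.pi   # ~ 628318.5307..., 6 digits before the point
y = 1e-5

# displayed with only 10 significant digits, big + y and big are identical,
# so their difference shows as 0:
print(f"{big + y:.10g}")   # 628318.5307
print(f"{big:.10g}")       # 628318.5307

# with 6 extra digits (Digits + 6) the places of y survive the subtraction:
print(f"{big + y:.16g}")
print(f"{big:.16g}")

# even in full double precision the difference recovers y only down to
# the absolute level of one ulp of big (~1e-10), not to full relative accuracy:
diff = (big + y) - big
print(abs(diff - y))
```

That last line is the doubled-digits rule in action: y has 10 good digits on its own, but after sitting next to a 6-digit integer part only about 11 of the 16 double digits are left for it.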

If you play with that at different precisions, then you likely have
to restart Maple to compare results, since the system remembers results
it has already computed with better precision.
Using my notation 'J' from above, the integration range for b=20, c=2500, f0=215
is ~ 0.95 ... 1.05 (for the change of variables).

After playing a *lot* the following should work for usual numerics:

  Change(J, tau = 1/x^(1/50), x): simplify(%); 
  J50:=select(has,%, Int);

for which the range now is ~ 0.1 ... 13 (and there it looks like part
of some probability density for a=0; for larger a one would split to
take care of peaks or such).

The integrand can be re-written (after staring at it for a longer time)

  'x^(-a)/(x^6* pow(x,7/50)-5*x^4+111*x^2-333/2+555/2/(2*x^2+1))';
  theIntegrand:=eval(%, pow=`^`);
  'op(1, J50)' = %; is(%);

                                 true

That integrand behaves well (but I have no symbolic solution).

I have no suggestion where to split *), but using the NAG routines via

  Int( theIntegrand, x=..., method = _d01ajc) 

should work, since it does its job adaptively (unless your 'a' makes it
quite painful), and it is fast (but use Digits := evalhf(Digits) ).


*) splitting points can be guessed from x^a*diff(theIntegrand,x), taking the
numerator. Then MultiSeries:-series(%, x=0,3); convert(%, polynom) is
an approximation, since the splitting points may be close to 0. This equation
in x^2 can be solved, and for a rough guess one can take a=1 (since for
a ~ 0 there seems to be not much of a problem), choosing the positive roots,
x ~ 0.37 and x ~ 1.11.
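For a rough stdlib-only cross-check outside Maple, any adaptive rule will do on this range. Below is a plain adaptive Simpson sketch in Python (_d01ajc is a far more sophisticated adaptive Gauss-Kronrod routine; this only illustrates "adaptive"), applied to theIntegrand for a = 0 on the range ~ 0.1 .. 13; the tolerance is an arbitrary choice of mine:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Adaptive Simpson quadrature: bisect until the local Richardson
    error estimate (left + right - whole)/15 is below the tolerance."""
    def rec(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = (m - a) / 6.0 * (fa + 4.0 * flm + fm)
        right = (b - m) / 6.0 * (fm + 4.0 * frm + fb)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (rec(a, m, fa, flm, fm, left, 0.5 * tol)
                + rec(m, b, fm, frm, fb, right, 0.5 * tol))
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return rec(a, b, fa, fm, fb, (b - a) / 6.0 * (fa + 4.0 * fm + fb), tol)

# theIntegrand from above, for a = 0 (so x^(-a) = 1):
def integrand(x):
    return 1.0 / (x**6 * x**(7.0 / 50.0) - 5.0 * x**4 + 111.0 * x**2
                  - 333.0 / 2.0 + 555.0 / 2.0 / (2.0 * x**2 + 1.0))

print(adaptive_simpson(integrand, 0.1, 13.0))
```

For a > 0 the x^(-a) factor sharpens the behavior near the left end, which is where the splitting points guessed in the footnote would come into play.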

A nice repair :-)

Maybe it was intended to depend on 'Order' (used for series development), and making it variable was forgotten?

Seems my writing is too cryptic ...

I guess your integration bounds are positive (as you said); otherwise you would run into
imaginary values.

Digits:=14;

S:=proc(t)
  local T;
  T:=t/f0: # f0 = 215
  (T^(-414/100) -5*T^(-2)+ 111*(1-T^2+T^4/2)/(1+T^2/2));
  simplify(%);
end proc:

assume(0 < f0);

'Int(t^a/S(t), t)';
Change(%, t = tau^(50)*f0, tau);
expand(%): combine(%, power): 
simplify(%)  assuming 0<tau;

J:=select(has,%,Int); # only that is of interest: no need for f0 = 215

op(1,J):
A:= numer(%);
B:= denom(%%);
                          (256 + 50 a)         100
                  A := tau             (2 + tau   )

              100         107          207          307          407
  B := 2 + tau    - 10 tau    + 217 tau    - 222 tau    + 111 tau


# now J = Int( A/B, tau ); 

fsolve(B=0, tau=-0.98);
                          -0.98082572991283

As far as I see, that is the only real root of B.

Note that partial fractions need all the roots of B and they are
all distinct:

'discrim(B, tau)'; evalf(%); # is not zero

Thus one has 1 real root and 203 conjugate pairs of complex roots.
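That real root is easy to cross-check numerically. A Python sketch (Python only for portability) that evaluates B and bisects on a sign change near -0.98; the bracket is taken from the fsolve output above:

```python
def B(t):
    # the denominator polynomial from above
    return (2.0 + t**100 - 10.0 * t**107 + 217.0 * t**207
            - 222.0 * t**307 + 111.0 * t**407)

def bisect(f, lo, hi, iters=80):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0.0:
            hi = mid              # sign change is in [lo, mid]
        else:
            lo, flo = mid, fmid   # sign change is in [mid, hi]
    return 0.5 * (lo + hi)

root = bisect(B, -0.99, -0.98)
print(root)   # compare with fsolve's -0.98082572991283
```

B(-0.99) is negative and B(-0.98) is positive, so the bracket is valid; bisection then converges to the same root fsolve reports.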
