Axel Vogt

5936 Reputation

20 Badges

20 years, 258 days
Munich, Bavaria, Germany

MaplePrimes Activity


These are replies submitted by Axel Vogt

Cannot answer your question in the sheet:
"nonoptimized, rounded and cached - previous version. Still best. Why?"

Maybe one of the seniors has an idea.

 

Peace and glory? It is too early to fall back on the Bible :-)

And instead of the adaptive NAG routine I used the Legendre workhorse (with
some risk). I think you can almost safely set KAsymptIntValueFunc to zero and
avoid its expensive Gamma function (it means taking cutoff = Pi - 10^(-10)).

The reduction is correct even for non-integer k.

For the 'asymptotics with Maple':

  MultiSeries:-series(K, phi=Pi, 1) assuming 0 < s, 2 < k:
  convert(%, polynom):
  simplify(%) assuming 0 < s, 2 < k:
  combine(%) assuming 0 < phi, phi<Pi;

which is almost your manual result.

For v <= 0.5 I suggest the recursion above in the last argument of LerchPhi.

I would use the following in your routine L, because it is faster than using
something like Int(f(x), x = ...) for numerical evaluation:

  local f;
  f:=unapply( eval(K, [x=pz,s=pv-1.0,k=pa]), phi);
  evalf( Int(f, 0.0 .. Pi, method = _d01ajc) );


For timings: on my PC yours takes ~ 3.7 sec (and with the suggestion ~ 2.5) while
Maple needs ~ 11.5, using Maple 15.01 in classic mode (same in standard mode).

Not sure why the difference to your timings (~ 3.5 and ~ 52.5 sec); maybe it is
the different PCs (mine runs Win XP SP3, 32 bit, 1 GB memory, CPU = AMD).

@icegood 

1. That is Gauss–Legendre quadrature with 16 points, a stable workhorse; the
constants are for that rule. See http://en.wikipedia.org/wiki/Gaussian_quadrature
or http://people.math.sfu.ca/~cbm/aands/toc.htm, page 916, as references
(sorry, I do not have a full link at hand).
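As a side note (this is an illustration in plain Python, not the code from the sheet): the nodes and weights of such a rule can be generated by Newton iteration on the Legendre polynomial, and then applied to any interval by a linear map.

```python
import math

def gauss_legendre(n):
    """Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1],
    found by Newton iteration on the roots of the Legendre polynomial P_n."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        # standard initial guess for the i-th root of P_n
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            # evaluate P_n(x) and P_{n-1}(x) via the three-term recurrence
            p0, p1 = 1.0, x
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)  # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

def quad_gl(f, a, b, n=16):
    """Approximate Int(f, a..b) with the n-point rule, mapped from [-1, 1]."""
    xs, ws = gauss_legendre(n)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(xs, ws))
```

The 16-point rule integrates polynomials up to degree 31 exactly, which is why it works so well on smooth integrands.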

2. Splitting is done because the integrand can be very peaked towards zero
but flattens like a power towards Pi. There are other methods which would care
for that, but they are more 'expensive' and I was too lazy to use them (the key
word is 'adaptive' - they have error control; Maple uses such stuff). I just
wanted to keep it short and fast, maybe not perfect.

3. The integrator is not as perfect as it seems: if x is very close to 1, say
1 - 1e-6, then the errors increase. Currently I do not have a solution that
convinces me *and* is short. It simply is not a final solution.

4. I have not written down all conditions in an explicit way. To apply Gauss's
theorem to write it as an integral one needs v = s+1 to be larger than one.

To understand the problem with a smaller s let us look at the integrand, it is

  K(x,s,phi) = -(1/2*cos(phi)+1/2)^s * sin(phi)/ (-2+x*cos(phi)+x)

Raising a term < 1 to the power s drives it to zero - but only if s is larger
than 1. And for your mentioned problem that is not the case.
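A quick numeric look at K (sketched here in plain Python, just to illustrate the peaking mentioned in point 2) shows how extreme the integrand becomes when x is close to 1:

```python
import math

def K(x, s, phi):
    # the integrand quoted above
    return -(0.5 * math.cos(phi) + 0.5)**s * math.sin(phi) \
           / (-2.0 + x * math.cos(phi) + x)

x = 1.0 - 1e-6            # x very close to 1
print(K(x, 2.0, 0.002))   # near the peak close to zero: large
print(K(x, 2.0, 1.0))     # away from it: moderate
```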

Thus I think the cutoff is correct (the code has no checks: one should check
that the input for the arccos is a) real and b) strictly between 0 and 1, but
I think that this is automatically satisfied).

5. But that is not a difficult problem, because there is a recursion for LerchPhi:

  LerchPhi(a,b,z-1): expand(%): %%=collect(%, LerchPhi): combine(%);

                                                           (-b)
      LerchPhi(a, b, z - 1) = a LerchPhi(a, b, z) + (z - 1)

BTW: This recursion is used by the above-mentioned series acceleration by Jentschura.
But for large z (say 200 or even more) it is expensive and I am not sure whether
it is numerically stable.
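The recursion is easy to check numerically against the defining series LerchPhi(a,b,z) = Sum(a^n/(n+z)^b, n = 0 .. infinity); a plain-Python sketch using partial sums (fine for |a| well below 1):

```python
def lerch_phi(a, b, z, nterms=2000):
    """Partial sum of LerchPhi(a, b, z) = sum_{n>=0} a^n / (n + z)^b."""
    return sum(a**n / (n + z)**b for n in range(nterms))

# check: LerchPhi(a, b, z-1) = a*LerchPhi(a, b, z) + (z-1)^(-b)
a, b, z = 0.4, 1.0, 3.0
lhs = lerch_phi(a, b, z - 1.0)
rhs = a * lerch_phi(a, b, z) + (z - 1.0)**(-b)
print(abs(lhs - rhs))  # agrees to machine precision
```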

6. Now for n = 1, 2, 3, ... I have to think about it. I do not have a Gauss theorem
at hand for 3F2 resp. 4F3 (which is the translation of the according LerchPhi).
Maybe it is better to write them as integrals involving exponentials,
see FunctionAdvisor(LerchPhi), and go from there.

Or try to get some inspiration by looking at Maple's code.

And I would not directly say that the developers did an incomplete job; that
very special case is however not covered by fast numerics, yes.

@acer 

Thx, that works and gives another speed-up of ~ 50%, so overall one gets a factor
of almost 15 (without further optimization, which may not provide that much
more improvement). I have not done an 'all-in-one' solution + compiling.

Though 'burning in' makes it quite unreadable/unmaintainable.

What I am missing (to avoid the expensive construction of the function) is some
preprocessor construct like '#define newexpr oldexpr', as is possible in C,
to be used for the parametric integrand.

In many cases that allows one to write code *as if one had a function*, but
since the expression is 'expanded', one actually avoids defining/constructing it
and the need for function calls.

Any idea? Such a macro would be handy and cool.


For one of the open points in the code: passing to peak .. cutoff/2 (instead of
cutoff/4) and then using the result up to there as a magnitude to define a new
eps for a new cutoff gives me the desired relative error according to the working
precision used (without much cost), but not in all cases.

Since icegood has not signaled interest I guess it is already fine for him.

The appended solution computes LerchPhi(x,1,v) about 10 times faster,
though I have not taken deeper care about its limitations and validity.

If one wants to improve speed even more and make it evalhf'able, then the
first thing I would do is make the integration routines local to the according
routine (called L).

Maple sends the task to evaluate it as a 2F1 (use showstat(`evalf/LerchPhi/real`))
and since the parameters are "exceptional" for argument reduction, that is a very
sound but expensive way.

I have not tried the acceleration method from above (I always hesitate to use
such methods: I do not understand them and have the feeling they work by
coincidence or not).

Lerch1_as_integra.mws

You may wish to upload something that can be read, either as text or as a worksheet.

As far as I can see you have non-ASCII signs (e´); try to avoid that.

Also note that sqrt(-1) in Maple is denoted by I, upper case (I guess you mean that).

As the error message indicates, you may perhaps need a different formulation for
your model.

Concerning LerchPhi performance:

Evaluating the mentioned integral using an evalhf'ed version of the double
exponential integration is ~ 3 times faster than calling evalf(LerchPhi(..)).

I have not tried a compiled version (assuming that there will be no astonishing
difference).
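For reference, the double exponential (tanh-sinh) idea can be sketched in a few lines; this is a plain-Python illustration of the scheme, not the evalhf'ed Maple version:

```python
import math

def tanh_sinh(f, h=0.05, level=80):
    """Double exponential (tanh-sinh) quadrature of f over (0, 1):
    substituting t = (1 + tanh((pi/2)*sinh(u))) / 2 makes the trapezoidal
    rule in u converge double exponentially, even when f has endpoint
    singularities."""
    total = 0.0
    for k in range(-level, level + 1):
        u = k * h
        s = 0.5 * math.pi * math.sinh(u)
        t = 1.0 / (1.0 + math.exp(-2.0 * s))   # = (1 + tanh(s)) / 2, never exactly 0
        w = 0.25 * math.pi * math.cosh(u) / math.cosh(s)**2  # dt/du
        total += w * f(t)
    return h * total

# endpoint singularity, handled without any special casing (exact value is 2):
print(tanh_sinh(lambda t: 1.0 / math.sqrt(t)))
```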

A different approach may be to work on the series representation directly,
using certain acceleration techniques for that slow series; that is what is
done in the link below (but I have not tested it) - maybe it fits your needs:

http://web.mst.edu/~jentschurau/downloads.html

@icegood 

There are 2 calls to LerchPhi; one is LerchPhi(exp(-2*lpu)*lpw/(lpw+1),1,lpv)
and the other is without the exp-term (or just setting u=0 in the above); the
exp-term is a kind of 'damping factor' between 0 and 1.

The expression lpw/(lpw+1) is larger than 1/2 and approaches 1 from the left.

So you want a fast LerchPhi(x,1,v) where 0 < x = exp(-2*lpu)*lpw/(lpw+1) < 1 and
x is possibly very close to 1 (it may approach 0 because of the damping by exp
and 0 < lpu, yes?), where I shortened notation to v := lpv.

LerchPhi(x,1,v) can be written as some 2F1 hypergeometric:

  'LerchPhi(x,1,v)': '%'=convert(%, hypergeom);
 
                              hypergeom([1, v], [1 + v], x)
          LerchPhi(x, 1, v) = -----------------------------
                                            v

Now if 1 < v (as your data suggest; more generally you want 0 < v) one can use
Euler's integral

  hypergeom([a,b],[c],z) =  GAMMA(c)/(GAMMA(c-b)*GAMMA(b))*
    Int(t^(b-1)*(1-t)^(c-b-1)/(1-z*t)^(a),t=0..1);
  0 < Re(b) and Re(b) < Re(c);

After inserting the parameters, evaluating, and doing some manipulation one gets
a formula for 1 < v:

  LerchPhi(x,1,v) = Int(t^(v-1)/(1-x*t),t = 0 .. 1);

                                        1
                                       /   (v - 1)
                                      |   t
                 LerchPhi(x, 1, v) =  |   -------- dt
                                      |   1 - x t
                                     /
                                       0

A cross check can be done by

  subs(x=exp(-2*lpu)*lpw/(lpw+1), v= lpv, % );
  eval(%, [lpu=100,lpv=100,lpw=1]); evalf(%);  # or lpv=200
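The integral formula is also easy to sanity-check numerically outside Maple; a plain-Python sketch comparing the defining series with a simple midpoint discretization of the integral (the values x = 0.5, v = 3 are picked just for illustration):

```python
def lerch_series(x, v, nterms=5000):
    """LerchPhi(x, 1, v) as a partial sum (fine while x is not too close to 1)."""
    return sum(x**n / (n + v) for n in range(nterms))

def lerch_integral(x, v, n=100000):
    """Midpoint rule for Int(t^(v-1)/(1 - x*t), t = 0..1), valid for 1 < v."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h)**(v - 1.0) / (1.0 - x * (k + 0.5) * h)
                   for k in range(n))

x, v = 0.5, 3.0
print(lerch_series(x, v), lerch_integral(x, v))  # both ~ the same value
```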


Instead of using complicated theory for 2F1 one may perhaps try to find an
approximation for that integral (as you [for now] do not need very accurate
values), and maybe numerical books have some recipes for that type of
moments of rationals.

If it is not too long one can code it and use Compiler:-Compile (or, in a more
intrinsic way: evalhf it).

But that is just an *idea* or suggestion so far (without performance test).


And it still may not be guaranteed that the complicated evaluation of
FMain is free of a lot of cancellation problems ...

@icegood 

Now I got it.

So besides the precision problem with evalhf you
would appreciate a fast numerical LerchPhi?

And if so: what are the (typical) ranges for
the 3 inputs?

More or less (but vaguely) I understand: 

NonEvalhfCacheWithRemember is a kind of test-evaluation for evalhf.

I do not have a feeling where those problems may come from in the code or how
to improve it, since I guess your FMain is just an example (otherwise you would
have treated the 2 occurrences of LerchPhi in a different way).

And for that example I would have tried with 'option hfloat'.

I guess it is part of your efforts to use evalhf for non-evalhf'able functions
in a seamless way?

Edited (about 1 hour later) for the following:

I think the 'correct' value for lpu=200,lpv=100,lpw=1 is
-.130870138625117831910143506853e-7

For that I used Digits=30, took the 'body' of your FMain to get the
final expression, converted to rational, and evaluated the input
with Digits=300 (and the usual LerchPhi without the construction).

So there is a cancellation problem in both evalf and evalhf if the
digits are too low (maybe pointing towards your guess on rationals and
differences).


I do not understand your sheet in detail, but before the final section
you say the error comes from LDBL_MIN, LDBL_MAX.
Note that evalhf is not even IEEE compliant. Using Digits := 18 and Maple 15:
DBL_MIN; evalhf(%);
                                               -307
                         1.00000010000000001 10    

DBL_MAX; evalhf(%);
                                               307
                          9.9999999000000002 10   

The first is 5060056838688399/9007199254740992*pow(2,-1019),
and the last 5010420849918223/9007199254740992*pow(2,+1024).

But DBL_EPSILON is correct (I have not checked other constants).
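For comparison, the actual IEEE-754 double values (here queried from Python's sys.float_info, just as an independent reference) are:

```python
import sys

print(sys.float_info.min)      # smallest normal double, 2^(-1022) ~ 2.2250738585072014e-308
print(sys.float_info.max)      # largest double, (2 - 2^(-52))*2^1023 ~ 1.7976931348623157e+308
print(sys.float_info.epsilon)  # 2^(-52) ~ 2.220446049250313e-16
```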

I think there have been posts by 'acer' about that.

1) It needs an 'endless' amount of time to load the post (though it does not look
perfect, despite the effort it certainly needed to produce it): might it be
possible to reduce it to the essentials and move the rest to the appended worksheet,
which would look much prettier?

2) Why can I not have it the intuitive way like in http://en.wikipedia.org/wiki/Directional_derivative#In_differential_geometry,
for example: provide vector and fct ---> result (ok ... but look at the MMA
page http://mathworld.wolfram.com/DirectionalDerivative.html, and that
is what Torre says (I guess): usage for cartesians should be simple).

After your keyword 'residues' I was looking at it this way: tan(t/2) = y leads
to the according integral over rational functions (applied to the original task,
which was over the 'full circle').
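For reference, tan(t/2) = y is the standard Weierstrass substitution:

```latex
\sin t = \frac{2y}{1+y^2}, \qquad
\cos t = \frac{1-y^2}{1+y^2}, \qquad
dt = \frac{2\,dy}{1+y^2},
```

so an integral of a rational function of sin t and cos t over the full circle becomes an integral of a rational function of y over the real line.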

However it also needs some discussion of cases to treat the possible poles on the
real axis (can be done) and finally leads to the unpleasant task of finding and
selecting the poles in one of the half planes (there will be 3 of them, no?),
which I did not do.

Though residues are simple to calculate with Maple - given the pole - it still
will be ugly to give the poles themselves.

And I guess the *overall* formula may not be much clearer.

So I did not try to go that route for the jump at z=1/2.



Question (a positive answer may be worth spinning off into a new thread):

Are you aware of some general residue formula where the Int(rational, 0 .. 2*Pi)
has an integrand that is rational in sin (or cos) only, and not of the conventional
form polynom(sin)/polynom(cos)?

You may want to tag it as a bug.
