## dharr

Dr. David Harrington

## 6245 Reputation

19 years, 345 days
University of Victoria
Professor or university staff
Victoria, British Columbia, Canada

## Social Networks and Content at Maplesoft.com

I am a professor of chemistry at the University of Victoria, BC, Canada, where my research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

## no abs...

@C_R Yes, I forgot the abs; it works OK with it. I only used it without abs for a single "period" where I knew the sign, when I was playing around with it. I agree that the integral to infinity without abs is not relevant to @Mariusz Iwaniuk's integral, but I'm not sure whether or not the answer is wrong.

## change of variables...

@Rouben Rostamian  PDEtools:-dchange gives the simpler result that you found, but comparing this with the IntegrationTools:-Change result shows that they are equivalent, so this is not a bug.

```
I1 := Int(sin(x^4)/(sqrt(x) + x^2), x = 0 .. infinity);
PDEtools:-dchange(x = u^(1/4), I1);
```

dchange.mw

## warnlevel...

warnlevel = 0 works for me in Maple 2023.2.1

warnlevel.mw

## drawing arcs...

@jalal To draw an arc representing the angle A in the triangle ABS, you can first find the cross product of the vectors AB and AS to use as a rotation axis. Then find the rotation matrix for rotation about this axis by an arbitrary angle t. Apply this to a shortened version of vector AB; its components (offset by A) give a parametric curve for the arc with parameter t, where t runs from zero to the angle A.

```
triangle(ABS, [A, B, S]);
angleSAB := FindAngle(ABS, A);
vAB := Vector(coordinates(B) - coordinates(A));
vAS := Vector(coordinates(S) - coordinates(A));
normalA := LinearAlgebra:-CrossProduct(vAB, vAS);
arcBAS := convert(0.2*((Student:-LinearAlgebra:-RotationMatrix(t, normalA)) . vAB), list);
plotarcBAS := spacecurve(coordinates(A) + arcBAS, t = 0 .. angleSAB, color = red, thickness = 2);
display(draw(ABS), plotarcBAS);
```

The code is at the end of the document.

rotation.mw

## strange...

@MaPal93 Not sure why that wasn't working. I added simplify and it worked.

[But now I can delete that line and it still works.]

## rationale...

@MaPal93 You did pretty well with the quadratic.

It's not easy to explain exactly how I came up with it, because I just played around with different things. I tried some things where P(x) was linear, playing with the interpolation point, and I tried some polynomials on a transformed axis w = 1 - exp(-x), which runs from 0 to 1 and preserves derivatives at the origin - it looks close to linear but was hard to improve. Most of these were below f(x). Then I tried some simple rational functions that preserve the slope at the origin. At some point I had something higher than f(x), then added back the exp() and simplified some more.

Once I had the general form (f0-finf)*exp(-a*x)/(1+b*x), I solved for b that gave the correct slope at the origin. That left only a as a parameter. I just used Explore on the plot and adjusted the slider for a. That looked good, so I played with the slider on the relative error plot - there is a range of a that works well, and 0.126 seemed roughly the best. That was close enough to 1/8, for which b was then 0.837, and 5/6 = 0.833 was close enough and (perhaps) nicer.
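As a minimal sketch of that last step (using the worksheet's values f0 = 1/sqrt(2), finf = 1/sqrt(5), Df0 = -1/4): the derivative of fapprox at the origin is -(f0 - finf)*(a + b), so once a is fixed, b follows from a single linear equation.

```
# Sketch: fapprox = (f0 - finf)*exp(-a*x)/(1 + b*x) + finf has
# derivative -(f0 - finf)*(a + b) at x = 0; set it equal to Df0.
f0, finf, Df0 := 1/sqrt(2), 1/sqrt(5), -1/4:
a := 1/8:
b := solve(-(f0 - finf)*(a + b) = Df0, b):
evalf(b);  # 0.8369..., close to 5/6
```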

## Bespoke approximation...

Simple and reasonably accurate

 > restart;
 > f := u -> RootOf(8*_Z^4 + 12*u*_Z^3 + (5*u^2 - 4)*_Z^2 - 4*u*_Z - u^2): f0 := 1/sqrt(2): finf := 1/sqrt(5): Df0 := -1/4:
 > fapprox:=(f0-finf)*exp(-x/8)/(1+(5/6)*x)+finf; # 5/6 could be 0.837 for D(fapprox)(0) = -0.25002
 > evalf(eval(diff(fapprox,x),x=0));

 > plot([f(x),fapprox],x=0..20,0..0.8,color=[red,blue]);

Maximum relative error from 0 to infinity is <2%

 > plot((fapprox-f(x))/f(x),x=0..20);


@MaPal93 I remain confused about what you actually want:

1. a simple function (a few parameters) that gives good agreement for the function at 0 and infinity and for the derivative at 0, but is not necessarily that accurate.

2. an accurate approximation over a limited range.

3. an accurate approximation over the full range.

My objective was (1), because you asked for a 1-line function and you seemed to want to interpret it easily. That was also what I did in that paper. But now you are comparing as though you want (2), which you did ask for at some point and @acer gave a nice answer to. For (3) you can just use the function itself.

Since the comparison you provided is for (2), a fair comparison requires the same number of parameters, which is 8 here in each case (though maybe 9 for the rational function; I'm not sure). (That's more than I was originally thinking of for (1).) Now plot them out to 200 and you will see that mine is OK at infinity (by design), but has the oscillation. The second one is a polynomial and will become large at infinity. The rational function can go to a constant value at infinity if the degrees of the numerator and denominator polynomials are the same, and @acer chose [4,4], presumably for this reason. As @acer pointed out, the rational functions are usually better than the polynomials anyway, and you see that here.

The numapprox routines are focussed on accuracy over the whole range and they do well for that. They should also do well for derivatives at zero since they are based on some series expansions around that point. I'm assuming they are based on evaluating the function at evenly spaced points across the chosen interval or some criterion that treats all the interval evenly. If you get to choose the interpolation points then you can do better than if they are evenly spaced. In Gaussian quadrature, you optimize this, so (from memory) an (n/2)-degree polynomial with optimally chosen points can do as well as an n-degree polynomial at evenly spaced points. This is not quadrature, so I don't think the Laguerre points are optimal, but the basic idea is that they should spread out with more near the origin (where the function is changing) and fewer at large values (where it isn't changing much). But that again is assuming you want to approximate out to infinity. And then why not (3)?

## DirectSearch...

@MaPal93 Nonlinear equations can be tricky. DirectSearch, an external package from the Maple Applications Centre, can solve this. You need some interpolation points close to the origin, spreading out with fewer at larger values where the function is featureless. One possibility is to use roots of Laguerre polynomials, which are spread out in this way, and are used in Gauss-Laguerre quadrature for functions on a semi-infinite range; see the end of the file. But that is just a guess.
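If it helps, candidate points of this kind are easy to generate (a sketch, assuming degree 8; any degree works the same way):

```
# Sketch: real roots of the degree-8 Laguerre polynomial, clustered
# near the origin and spreading out over the semi-infinite range.
pts := sort([fsolve(orthopoly[L](8, x), x)]);
```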

Approx_new_DS.mw

## misc...

@MaPal93 That's a nice fit (0.2%) over that range. Df0 looks good. If you want more than the probe accuracy, you can use numapprox:-infnorm. You have oscillations outside the cutoff. If you want it to work over a longer range, you will have to spread the interpolation points out. High-degree polynomials can oscillate - you can likely remove the oscillations with a lower degree polynomial and still get reasonable accuracy. But maybe it is fine as it is.
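For the error check, something along these lines should work (a sketch, assuming f and fapprox are defined as in the attached worksheet):

```
# Sketch: maximum relative error over the fitted range 0..20,
# assuming f and fapprox are already defined as in the worksheet.
maxrelerr := numapprox:-infnorm((fapprox - f(x))/f(x), x = 0 .. 20);
```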

Approx_new.mw

1. Should I consider looking into [15,16,17] to try and find a similarly accurate and simple approximation for my function? Would that be challenging yet worthy? Would you help?
Those refs are about fitting to experimental data and not relevant. I developed that approximation by playing around using series and asympt and some intuition. In fact a referee asked me to explain how I had "derived it", but I was unable to answer that question and so only the vague description is in the paper. For your case the series/asympt expressions are complicated, so I don't think they will help; another approach will be required. Notice that the approximation was not accurate (up to 4%); the errors only had to be small compared to the experimental errors. I'll take a look to see if anything is obvious.
2. You also mention "relative errors (with respect to Eq. (2))" and "systematic error in the parameters can be estimated by individually varying the parameters to find the minimum in the residual sum of squares". I think it could be interesting to quantify the errors for my approximation as well.
The relative error is just from a plot of (fapprox-fexact)/fexact. The other errors are related to the fit to experimental data.
3. Talking about interpolation instead, you mention "Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values" and "c1*x term will mess up the derivative at zero". Which replacement term would preserve the derivative at 0?
The simplest would be to replace the Df0 exponential decay constant with a parameter a, differentiate the whole thing and set its value at zero equal to Df0 as another equation to be solved.
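A minimal sketch of that suggestion, using a quadratic P(x) with hypothetical coefficients c1, c2 and a free decay parameter a:

```
# Sketch: make the decay constant a parameter; the derivative condition
# at 0 becomes one more equation alongside the interpolation conditions.
P := x -> 1 + c1*x + c2*x^2:
fapprox := (f0 - finf)*exp(-a*x)*P(x) + finf:
eqDf0 := eval(diff(fapprox, x), x = 0) = Df0;
# gives (f0 - finf)*(c1 - a) = Df0, to be solved with the other equations
```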

## nice...

@Preben Alsholm Vote up. Nice analysis.

## approximations...

@MaPal93 That is more or less what I meant, but you interpolated the polynomial, not the overall function fapprox. However, I should have given it a bit more thought before suggesting it. I suggested the exp form with P(x) = 1 because I wanted something that has the correct behaviour at infinity. I don't like the cutoff idea for that reason, and like to hack things that work well at both ends (for a real example see Eq. (13) in doi:10.1016/j.elecom.2019.106566). I also wanted a good derivative at 0. Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values. Other methods work hard to match values and derivatives at one point (like the numapprox routines with polynomials) but need a cutoff. Since a decaying exponential dominates a polynomial at infinity, I suggested multiplying the exp by a polynomial. I made the constant 1 to keep the value at 0 correct, but forgot that the c1*x term will mess up the derivative at zero.

So there are tradeoffs about what you want to work well, the simplicity of the function and the ease of implementation.

## fdiff and evalf/D...

@acer But evalf(D(f)(1e-9)); and fdiff(f, [1], [1e-9]); gave very different values; it seems more is going on here.