So, since limit(n^(1/n), n=infinity) is 1, do you suspect that the partial sums of the alternating series (-1)^n*n^(1/n) would necessarily have two limiting values, one along the even-indexed partial sums and one along the odd-indexed ones?
How do you suspect that this might relate, if at all, to the value returned from,
evalf(sum((-1)^n*n^(1/n),n=1..infinity));
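For what it's worth, here's a quick numeric check (my own sketch; the name S is mine, and add is used so that the partial sums are computed purely in floating-point):
S := x -> add((-1)^n*n^(1.0/n), n=1..x):
S(1000), S(1001);
The even- and odd-indexed partial sums do appear to settle toward two values that differ by about 1, consistent with n^(1/n) tending to 1.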
Maybe there already is a body of theory for this sort of thing.
What do you foresee a theory about this looking like? I ask because, once one has been motivated by some casual observations, the next step in mathematics is to develop a formalism and a notation. It's not easy to get a good notation, but a sure sign of success is that good notation leads to rich insight. I'm not sure that insight can come from observation alone.
acer
I must have missed something in this post. Perhaps you could help me understand what some of the insights are?
You started off with something like,
F1:=sum((-1)^n*(n^(1/n)-1),n=1..x):
F2:=sum((-1)^n*(n^(1/n)-2),n=1..x):
F3:=sum((-1)^n*(n^(1/n)-3),n=1..x):
And then you noticed that these agree at every second value, i.e. for x=2,4,6,... the values of F1, F2, and F3 coincide.
But that's just because, when x is any even positive integer, the subtracted constants cancel in pairs, so that all three of F1, F2, and F3 simplify immediately to sum((-1)^n*n^(1/n), n=1..x);
Now, Maple can show that directly, with,
simplify([F1,F2,F3]) assuming x::even;
You wrote, "Notice that the blue, red and green graphs meet at least one y-value. As x goes to infinity, that y-value is the value of the MRB Constant." But isn't that just the same as the limits of any subsequences of the function coloured green itself, as x goes to infinity? What do the red and blue curves add, for illustrating the behaviour of the green curve?
So it seems that you've named sum((-1)^n*n^(1/n), n=1..infinity). Is that right?
What do you think of the value that Maple gives when one takes evalf() of that expression? Do you think that the green curve has some subsequences that converge to both a positive number and a negative number? If so, how do you think that those relate to the evalf() result?
I couldn't follow what the summations mean when the upper limit of the index is irrational, such as sqrt(3). I had thought that the dummy index n was representing positive integers. What does this mean,
sum((-1)^n*n^(1/n), n = 1 .. 27*6^(1/2));
Thanks very much,
acer
No, not really. You've used matrix with a lowercase m. He wants to use LinearAlgebra, so it should be uppercase M, i.e. Matrix.
But then your example with square-bracket indexing doesn't work:
> A := Matrix(3,(i,j)->i+j):
> sum(sum(A[i,j],i=1..3),j=1..3);
Error, bad index into Matrix
The best solution could be to use add() instead of sum(). See their help-pages for the difference.
> A := Matrix(3,(i,j)->i+j):
> add(add(A[i,j],i=1..3),j=1..3);
36
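The underlying reason, as I understand it, is evaluation order: sum() evaluates its arguments up front, while i and j are still unassigned names, and indexing a Matrix by an unassigned name is itself an error, with no sum() needed at all,
> A[i,j];
Error, bad index into Matrix
whereas add() has special evaluation rules and substitutes the numeric index values before evaluating each entry.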
It should not be necessary to load LinearAlgebra, using with(), in order to create a Matrix, by the way.
Also of possible interest are,
> add( x, x in A );
36
and this strange one,
> sum(sum( ('y->(''x->A[x,y]'')'(i))(j) ,i=1..3) ,j=1..3);
36
acer
# Returns the n x n exchange ("per-identity") Matrix, which has 1s on
# the anti-diagonal. The Vector below is a pivot vector in NAG format,
# interchanging row k with row n+1-k for k = 1 .. floor(n/2);
# CreatePermutation converts it into the corresponding permutation Matrix.
PeridentityMatrix := proc(n::posint)
    LinearAlgebra:-CreatePermutation(Vector([
        seq(n - i, i = 0 .. floor(1/2*n) - 1),
        seq(floor(1/2*n) + 1 + i, i = 0 .. ceil(1/2*n) - 1)]));
end proc:
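For example, this should produce the 4x4 exchange Matrix, with 1s on the anti-diagonal:
> PeridentityMatrix(4);
[0  0  0  1]
[0  0  1  0]
[0  1  0  0]
[1  0  0  0]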
acer
fsolve should order the roots in a reproducible, non-session-dependent manner. This order can also be "fixed up" to agree with the ordering of indexed RootOf's.
But you might also want to make sure that your code is efficient. If you are trying to compute the floating-point approximations of several of the roots of a univariate polynomial, then one call to fsolve should suffice. On the other hand, evalf(RootOf(...,index=i)) could end up calling fsolve for each i. And that might be wasted, duplicate effort.
# Using your e1 and e2 as posted...
P:=convert(subs(E1=e1,E2=e2,E1*NZ^4-NZ^2*(2*E1^2)+E1^3-E1*E2^2=0),rational):
[seq(evalf(RootOf(P,NZ,index=i)),i=1..4)];
`RootOf/sort`([fsolve(P,NZ)]);
If you set stopat(fsolve) you can see that the first of these two approaches calls fsolve four times (each time computing all the roots) while the second approach calls fsolve just the once and then fixes up the ordering to agree with that of the indexed RootOf's.
acer
It could be that the degree of estimated roundoff error is dependent upon the value of vm. As vm gets below some critical value, that estimation of roundoff might tip over some internal limit. After all, the integrand itself, and not just the upper limit of integration, depends on vm.
I doubt that evalf/Int would attempt to change epsilon from its initial value in case of failure. More interesting is the question of whether it does, or should, change the working precision on the fly. I would say that it ought not to, aside from some fixed initial addition of guard digits; anything more would be un-Maple-like. The conditioning of a given problem affects how high Digits might have to be in order to attain a given accuracy, so automatic adjustment would mean that some examples could make evalf/Int run for arbitrarily long periods of time, which few people would like.
But evalf/Int allows both epsilon and the working precision to be supplied as options. So there is control. It does what it's told to do. As mentioned, one can raise the working precision, hold epsilon constant, and attain solutions, with no method specified.
I found this interesting:
> foo := proc(v)
>   if sqrt(sech(v) - sech(VM)) = 0 then return Float(undefined)
>   else return Re(cosh(v)/sqrt(sech(v) - sech(VM)))
>   end if;
> end proc:
> N := (vm,eps) -> evalf(subs({VM=vm, EPS=eps},
>   Int(eval(foo), 0..VM, epsilon=EPS, method=_d01ajc))):
> infolevel[`evalf`]:=100:
> N(.000222,1.0e-6);
evalf/int/control: integrating on 0 .. .222e-3 the integrand
proc(v)
if sqrt(sech(v) - sech(0.000222)) = 0 then return Float(undefined)
else return Re(cosh(v)/sqrt(sech(v) - sech(0.000222)))
end if
end proc
evalf/int/control: tolerance = .10e-5; method = _d01ajc
Control: Entering NAGInt
trying d01ajc (nag_1d_quad_gen)
d01ajc: epsabs=.100000000000000e-8; epsrel=.10e-5; max_num_subint=200
d01ajc: procedure for evaluation is:
proc (v) if sqrt(sech(v)-sech(.222e-3)) = 0 then return Float(undefined) else \
return Re(cosh(v)/sqrt(sech(v)-sech(.222e-3))) end if end proc
Error, (in evalf/int) NE_QUAD_MAX_SUBDIV:
The maximum number of subdivisions has
been reached: max_num_subint = 200
Here, the failure seems related to the maximum number of subdivisions allowed in the d01ajc integrator. But that doesn't appear to be a limit under the user's control as an option to evalf/Int.
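To illustrate the earlier point about raising the working precision while holding epsilon fixed, with no method forced, something along these lines can be tried (a sketch only; I haven't reproduced the returned digits here):
> Digits := 30:
> evalf(subs(VM=.000222, Int(eval(foo), 0..VM, epsilon=1.0e-6)));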
acer
One could add this: it is not at all uncommon for a function to have a singularity (or even something that merely appears as such) and be very easy to integrate symbolically yet not so straightforward to integrate numerically. Not every numerical integration method will handle such functions the same way. So in evalf/Int, it's easily conceivable that method _Dexp could handle such an integrand without accruing the same sort of roundoff error that method _d01ajc might.
Without forcing a given method, evalf/Int can do analysis of the integrand and select whichever methods it wants to use to obtain the desired accuracy that is specified by the epsilon option.
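For instance, reusing foo from my earlier reply, one might force the double-exponential rule, which is designed to cope with integrable endpoint singularities (a sketch; I haven't recorded the result here):
> evalf(subs(VM=.01, Int(eval(foo), 0..VM, epsilon=1.0e-9, method=_Dexp)));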
acer
The integrand appears to have a singularity. E.g.,
foo := proc(v) if sqrt(sech(v) - sech(VM)) = 0 then return Float(undefined) else return Re(cosh(v)/sqrt(sech(v) - sech(VM))) end if; end proc:
plot(subs({VM=1},eval(foo)),0.2..1);
So, if one attempts to integrate it numerically with method=_d01ajc, then many function evaluations may be done near the singularity. That could account for enough accumulated roundoff error to prevent attaining an accuracy better than the smallest epsilon found to succeed.
For the plot, after,
N:=(vm,eps)->evalf(subs({VM=vm,EPS=eps},Int(eval(foo),0..VM,epsilon=EPS,method = _d01ajc))):
did you perhaps intend either of,
plot('N'(v,1.0e-6),v=0.2..1);
or,
plot(v->N(v,1.0e-6),0.2..1);
so as to delay any attempt at immediate evaluation of the argument N(v,1.0e-6)?
acer
At a given working precision (Digits), the integrand might evaluate to a quantity with a possibly small nonzero imaginary component, for some value of v. That seems to be disallowed in the purely real integrator, NAG's d01ajc. So one could put a Re() call around the integrand, to help with the case of "unable to obtain a real result".
The other error message you obtained says that round-off error was apparently detected while trying to obtain the default tolerance. So one could try adjusting that, using evalf/Int's epsilon option.
Let's see how small one could make the tolerance, and have it succeed.
> foo := proc(v) if sqrt(sech(v) - sech(VM)) = 0 then return Float(undefined) else return Re(cosh(v)/sqrt(sech(v) - sech(VM))) end if; end proc:
> N:=(vm,eps)->evalf(subs({VM=vm,EPS=eps},Int(eval(foo),0..VM,epsilon=EPS,method = _d01ajc))):
> N(.01,1.0e-6);
2.221566392
> N(.01,1.0e-7);
2.221566471
> N(.01,1.0e-8);
2.221566424
> N(.01,1.0e-9);
Error, (in evalf/int) NE_QUAD_ROUNDOFF_EXTRAPL:
Round-off error is detected during extrapolation.
> Digits := 15:
> N(.01,1.0e-9);
Error, (in evalf/int) NE_QUAD_ROUNDOFF_EXTRAPL:
Round-off error is detected during extrapolation.
I didn't expect setting Digits to 14 or 15 to make much, if any, difference, as that's going to be a close match to a compiled double-precision external integrator using evalhf callbacks for function evaluations of the integrand.
> Digits := 25:
> N(.01,1.0e-9);
Error, (in evalf/int) expect Digits<=15 for method _d01ajc but received 28
This error message seems to indicate that it's not valid to use this method with a working precision greater than evalhf's (the 28 presumably being the requested 25 plus a few guard digits).
I notice that if one removes the method=_d01ajc restriction, then at Digits=25 a result is returned even with epsilon as small as 1.0e-9.
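That is, something like this sketch, where M (my name, not Maple's) is just N with the method option removed, so that evalf/Int may select its own scheme:
> Digits := 25:
> M := (vm,eps) -> evalf(subs({VM=vm, EPS=eps}, Int(eval(foo), 0..VM, epsilon=EPS))):
> M(.01, 1.0e-9);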
I do not understand the remark, "Curiously, these problems occur where elementary integration is feasible." Numerical computational issues can have nothing to do with whether a problem is symbolically solvable, in general and for quadrature in particular, since an entirely different methodology may be used. I don't see what is remarkable about such a difference.
acer
One way to experiment with whether the extra time may be due to garbage collection (gc) is to increase the limit that controls gc frequency.
For example, set,
> kernelopts(gcfreq=10^8):
and then see what effect that has on the repeated loop timings.
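For instance, after setting gcfreq as above, time a throwaway loop (the loop body here is just a stand-in for your own code):
> st := time():
> for i to 10^5 do tmp := [seq(j, j=1..100)] end do:
> time() - st;
If the elapsed time drops noticeably relative to the default gcfreq, that points at garbage collection.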
acer
The original example is covered by its own explanatory paragraph in the help-page ?sign.
There is also sometimes confusion between the functions sign() and signum().
Consider this below, where no quoting is necessary,
plot([signum(x)],x=-1..1);
and see ?signum.
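The distinction, in brief (my own illustration): sign() looks at the leading coefficient of a polynomial, while signum() is the pointwise mathematical sign function,
sign(x), sign(-3*x^2+1);   # 1, -1
signum(-3);                # -1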
acer
Hello again,
The Matrix ls_m is currently set up as 6x9. So, assuming that the extra 3 columns of ls_m actually contain pertinent and meaningful data, the system appears underdetermined. Presumably that is why you've supplied the option free='t', to name the free variables that would appear in any solution.
What might not be made clear enough in the help-pages ?LinearSolve and ?IterativeSolver is that the underlying sparse solvers are not designed for producing symbolically parametrized solutions of underdetermined systems. They are purely numerical, and can by nature only ever generate a purely floating-point solution with no free variables. (Such iterative methods generally use some form of Matrix-Vector multiplication, or a related way to "iteratively" get closer to an answer. There's no room in that for supplying any symbolic piece to the result.)
If ls_m were 6x6 (or 9x9) and had datatype=float[8], with B of corresponding size, then that should work with the sparse method, I believe. You might even be able to pad out the data so as to get a (single, purely numeric) sol which approximately satisfies the original system. But the current sparse solvers are purely numeric, and can't get you a parametrized solution, I don't think.
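A sketch of the sort of padding I have in mind, under the assumption that pinning the three extra unknowns to zero is acceptable for your application (whether the padded Matrix ends up nonsingular depends on your data):
ls_sq := Matrix(9, 9, datatype=float[8]):
ls_sq[1..6, 1..9] := Matrix(ls_m, datatype=float[8]):
# rows 7..9 pin the would-be free variables to zero
for k from 7 to 9 do ls_sq[k,k] := 1.0 end do:
B_sq := Vector(9, datatype=float[8]):
B_sq[1..6] := Vector(B, datatype=float[8]):
sol := LinearAlgebra:-LinearSolve(ls_sq, B_sq);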
As you've pointed out, though, this below does produce a solution to the problem (sized as stated), as long as ls_m and B don't contain floats,
sol := LinearSolve(ls_m, B, free='t');
So perhaps you could just use that, then, and conclude that the sparse solver is float-based and not appropriate here?
acer