## not kernel-bound...

The gauss procedure uses ilcm. Its parameter processing calls `type/listlist`. The setup of the data A, prior to calling gauss(), uses RandomTools. acer

## off by how many ulps?...

You showed a result from Maple of 19.99909999. Presumably that was done at the default value of Digits=10. But evalf[11](exp(Pi)-Pi) returns 19.999099979. So why isn't the first result, at the default value of Digits=10, instead 19.99909998? acer
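To probe where the rounding enters, here is a small sketch (not from the original post) that evaluates the same expression at a few settings of Digits, so the final digit of the Digits=10 result can be compared against higher-precision values:

```
# Evaluate exp(Pi) - Pi at increasing precision, to compare the last
# digit of the Digits = 10 result with more accurate evaluations.
for d from 10 to 14 do
    printf("Digits = %d : %a\n", d, evalf[d](exp(Pi) - Pi));
end do;
```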

## provide a range...

Notice that the answer obtained from A.Vogt's method was 0.0197... times I. That is, it was a purely imaginary number; the result was not the real number 0.0197... Notice also that, if you plug in the imaginary number, remembering to make it I*0.0197.., then that does produce a value very close to 0.5. Ie,

```
evalf(abs(f(nTst, 0.01970538187*I)));
```

So, the answer I*0.0197.. was correct, given that you did not specify a real-valued range for the result from fsolve. You can supply a purely real-valued range to fsolve, eg,

```
f := (n,v) -> Zeta(0, n+1, 1 - 2*Pi*v*I)/Zeta(n+1):
nTst := 5:
fsolve( abs(f(nTst, v)) = 0.5, v = 0 .. 0.2 );
```

That gives the result 0.08039... that you mentioned. acer
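As a quick sanity check (a sketch, restating f and nTst from the answer so it stands alone), one could verify that the real root returned by fsolve does give a modulus close to 0.5:

```
# Restate the definitions from the answer, solve on the real range,
# and plug the root back in to confirm |f| is close to 0.5.
f := (n,v) -> Zeta(0, n+1, 1 - 2*Pi*v*I)/Zeta(n+1):
nTst := 5:
r := fsolve( abs(f(nTst, v)) = 0.5, v = 0 .. 0.2 ):
evalf(abs(f(nTst, r)));  # should be very close to 0.5
```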

## actually...

So, since limit(n^(1/n), n = infinity) is 1, do you suspect that partial sums of the alternating series (-1)^n*n^(1/n) would necessarily have two limiting values (in terms of the alternating partial sums)? How do you suspect that this might relate, if at all, to the value returned from

```
evalf(sum((-1)^n*n^(1/n), n = 1 .. infinity));
```

Maybe there already is a body of theory for this sort of thing. What do you foresee a theory about this looking like? I ask because, once one has been motivated by some casual observations, the next step in mathematics is to develop a formalism and a notation. It's not easy to get a good notation, but a sure sign of success is that good notation leads to rich insight. I'm not sure that insight can come from observation alone. acer
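For concreteness, here is a small sketch (not from the original exchange) of the two subsequences of partial sums being discussed. Since n^(1/n) -> 1, consecutive partial sums differ by roughly 1, so the even- and odd-indexed partial sums approach two different limiting values:

```
# Partial sums S(m) = sum((-1)^n * n^(1/n), n = 1 .. m).
# The even- and odd-indexed partial sums settle toward two distinct
# limits, separated by about 1 (since n^(1/n) -> 1 as n -> infinity).
S := m -> evalf(add((-1)^n * n^(1/n), n = 1 .. m)):
seq(S(2*k),   k = 50 .. 52);   # even-indexed partial sums
seq(S(2*k+1), k = 50 .. 52);   # odd-indexed partial sums
```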

## curious...

I must have missed something in this post. Perhaps you could help me understand what some of the insights are? You started off with something like,

```
F1 := sum((-1)^n*(n^(1/n)-1), n = 1 .. x):
F2 := sum((-1)^n*(n^(1/n)-2), n = 1 .. x):
F3 := sum((-1)^n*(n^(1/n)-3), n = 1 .. x):
```

And then you noticed that for each second set of values for these, ie. for x=2,4,6, the F1, F2, and F3 would agree. But that's just because when x is any even positive integer all three of F1, F2, and F3 simplify immediately to

```
sum((-1)^n*n^(1/n), n = 1 .. x);
```

Now, Maple can show that directly, with

```
simplify([F1, F2, F3]) assuming x::even;
```

You wrote, "Notice that the blue, red and green graphs meet at least one y-value. As x goes to infinity, that y-value is the value of the MRB Constant." But isn't that just the same as the limits of any subsequences of the function coloured green itself, as x goes to infinity? What do the red and blue curves add, for illustrating the behaviour of the green curve?

So it seems that you've named sum((-1)^n*n^(1/n), n = 1 .. infinity). Is that right? What do you think of the value that Maple gives when one takes evalf() of that expression? Do you think that the green curve has some subsequences that converge to both a positive number and a negative number? If so, how do you think that those relate to the evalf() result?

I couldn't follow what the summations mean when the index ranges from 1 to sqrt(3). I had thought that the dummy index n was representing positive integers. What does this mean,

```
sum((-1)^n*n^(1/n), n = 1 .. 27*6^(1/2));
```

Thanks very much, acer

## Matrix with a capital M...

No, not really. You've used matrix with a lowercase m. He wants to use LinearAlgebra, so it should be uppercase M, ie. Matrix. But then your example with square brackets is not good.

```
> A := Matrix(3, (i,j) -> i+j):
> sum(sum(A[i,j], i = 1 .. 3), j = 1 .. 3);
Error, bad index into Matrix
```

The best solution could be to use add() instead of sum(). See their help pages for the difference.

```
> A := Matrix(3, (i,j) -> i+j):
> add(add(A[i,j], i = 1 .. 3), j = 1 .. 3);
                               36
```

It should not be necessary to load LinearAlgebra, using with(), in order to create a Matrix, by the way. Also of possible interest are,

```
> add( x, x in A );
                               36
```

and this strange one,

```
> sum(sum( ('y->(''x->A[x,y]'')'(i))(j), i = 1 .. 3), j = 1 .. 3);
                               36
```

acer

## PeridentityMatrix...

```
PeridentityMatrix := proc(n::posint)
    LinearAlgebra:-CreatePermutation(Vector([
        seq(n - i, i = 0 .. floor(n/2) - 1),
        seq(floor(n/2) + 1 + i, i = 0 .. ceil(n/2) - 1)]));
end proc:
```

acer
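As a cross-check (a sketch; the name ExchangeMatrix is introduced here and is not from the original answer), the per-identity, i.e. exchange, matrix can also be built directly from its defining pattern of ones on the anti-diagonal:

```
# Direct construction of the n x n exchange (anti-diagonal identity)
# matrix: entry (i,j) is 1 exactly when i + j = n + 1.
ExchangeMatrix := n -> Matrix(n, (i,j) -> `if`(i + j = n + 1, 1, 0)):
ExchangeMatrix(4);
```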

## ordered roots, fsolve, and efficiency...

fsolve should order the roots in a reproducible, non-session-dependent manner. This order can also be "fixed up" to agree with the ordering of indexed RootOf's. But you might also want to make sure that your code is efficient. If you are trying to compute the floating-point approximations of several of the roots of a univariate polynomial, then one call to fsolve should suffice. On the other hand, evalf(RootOf(...,index=i)) could end up calling fsolve for each i. And that might be wasted, duplicate effort.

```
# Using your e1 and e2 as posted...
P := convert(subs(E1=e1, E2=e2,
         E1*NZ^4 - NZ^2*(2*E1^2) + E1^3 - E1*E2^2 = 0), rational):
[seq(evalf(RootOf(P, NZ, index=i)), i = 1 .. 4)];
`RootOf/sort`([fsolve(P, NZ)]);
```

If you set stopat(fsolve) you can see that the first of these two approaches calls fsolve four times (each time computing all the roots) while the second approach calls fsolve just the once and then fixes up the ordering to agree with that of the indexed RootOf's. acer
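Since the posted e1 and e2 aren't reproduced on this page, here is a self-contained sketch of the same two approaches using a made-up quartic with four real roots:

```
# A stand-in quartic with four real roots, to compare the two approaches.
Q := x^4 - 10*x^2 + 1:
# Approach 1: evalf of each indexed RootOf (may invoke fsolve per index).
[seq(evalf(RootOf(Q, x, index = i)), i = 1 .. 4)];
# Approach 2: one fsolve call, then sort into indexed-RootOf order.
`RootOf/sort`([fsolve(Q, x)]);
# The two lists should agree, in the indexed-RootOf ordering.
```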

## roundoff, as a function of the particula...

It could be that the degree of estimated roundoff error is dependent upon the value of vm. As vm gets below some critical value, that estimation of roundoff might tip over some internal limit. After all, the integrand itself, and not just the upper limit of integration, depends on vm.

I doubt that evalf/Int would attempt to change epsilon from its initial value, in case of failure. More interesting is the question of whether it does, or should, change the working precision on the fly. I would say that it ought not to do so, aside from some fixed initial adjustment of addition of guard digits. It would be un-Maple-like. The degree of conditioning, or such, of a given problem would affect how high Digits might have to be to attain a given accuracy. In that case, there'd be some examples that could make evalf/Int go away for arbitrarily long periods of time, which few people might like.

But evalf/Int allows both epsilon and the working precision to be supplied as options. So there is control. It does what it's told to do. As mentioned, one can raise the working precision, hold epsilon constant, and attain solutions, with no method specified.

I found this interesting:

```
> foo := proc(v)
>     if sqrt(sech(v) - sech(VM)) = 0 then return Float(undefined)
>     else return Re(cosh(v)/sqrt(sech(v) - sech(VM))) end if;
> end proc:
> N := (vm,eps) -> evalf(subs({VM=vm, EPS=eps},
>          Int(eval(foo), 0 .. VM, epsilon=EPS, method = _d01ajc))):
> infolevel[`evalf`] := 100:
> N(.000222, 1.0e-6);
evalf/int/control: integrating on 0 .. .222e-3 the integrand
proc(v)
    if sqrt(sech(v) - sech(0.000222)) = 0 then return Float(undefined)
    else return Re(cosh(v)/sqrt(sech(v) - sech(0.000222))) end if
end proc
evalf/int/control: tolerance = .10e-5; method = _d01ajc
Control: Entering NAGInt
trying d01ajc (nag_1d_quad_gen)
d01ajc: epsabs=.100000000000000e-8; epsrel=.10e-5; max_num_subint=200
d01ajc: procedure for evaluation is:
proc (v)
    if sqrt(sech(v)-sech(.222e-3)) = 0 then return Float(undefined)
    else return Re(cosh(v)/sqrt(sech(v)-sech(.222e-3))) end if
end proc
Error, (in evalf/int) NE_QUAD_MAX_SUBDIV: The maximum number of
subdivisions has been reached: max_num_subint = 200
```

Here, failure seems related to the maximal number of subdivisions allowed in the d01ajc integrator. But that doesn't seem to be a limit that is under the user's control as an option to evalf/Int. acer

## to be more clear...

One could add this: It is not at all uncommon for a function to have a singularity (or even something that merely appears as such) and be both very easy to integrate symbolically yet not so straightforward to integrate numerically. Not every numerical integration method will handle such functions the same way. So in evalf/Int, it's easily conceivable that method _Dexp could handle such an integrand without accruing the same sort of roundoff error that method _d01ajc might. Without forcing a given method, evalf/Int can do analysis of the integrand and select whichever methods it wants to use to obtain the desired accuracy specified by the epsilon option. acer
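As an illustration (a made-up example, not the integrand from the thread), consider a function with an integrable endpoint singularity. It is trivial to integrate symbolically, while numerically the outcome can depend on which evalf/Int method is applied:

```
# 1/sqrt(x) has an integrable singularity at x = 0.
int(1/sqrt(x), x = 0 .. 1);                        # exact result: 2
evalf(Int(1/sqrt(x), x = 0 .. 1));                 # method chosen automatically
evalf(Int(1/sqrt(x), x = 0 .. 1, method = _Dexp)); # force the double-exponential method
```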