acer

20896 Reputation

29 Badges

15 years, 60 days

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Axel made some good points. I believe that his comment on Theta being uninitialized is mistaken, however. I too am not a big fan of 2D Math for authoring Maple programs. I hope that my conversion to 1D Math input is correct. Here are some suggestions and comments.

- Use float[8] and complex[8] datatypes, if possible. This helps avoid temporary memory allocation by pushing some of the computation out to compiled external code.
- Initialize x, R, and Theta using the initializer available within the Matrix constructor. This is more efficient than initializing them after creation, with loops.
- Vpq is enormous, requiring >800MB of memory with the complex[8] datatype. This may not be avoidable. More on this at the end.
- Rpq is as big as Vpq, but it is not needed. I understand that Vpq may be wanted after the work is done. (It could be returned, with moments, instead of being a global.) But it seems that only two or three entries of Rpq are needed at any time within the inner part of the double loop at the end of the code. So by getting rid of Rpq you might be able to save >800MB.
- Use ArrayTools[Alias] instead of ArrayTools[Reshape]. The latter produces an unnecessary copy, while the former does not.
- Use evalhf where possible. This gives you double precision, and actually saves memory too.
- In the double loop to initialize pq, observe that the inner ("and") condition of q<=p can be dropped if the inner loop variable q is only allowed to go as high as p.
- Use evalhf for all the computations done in the nasty, super-expensive double loop at the end of the procedure. More on this later.
- The moments Vector can be created by the multiplication. You don't need to create it, and then loop to assign to it.
- Don't use A . B . C for efficient linear algebra. Make use of the LinearAlgebra commands directly. That saves at least one unnecessary copy, or perhaps two. Make sure you control these operations.
- Scale moments by coef, in place, after the multiplication. (The worst scenario would be scaling Vpq, which is enormous, by coef. Bad because it's so big, and horrible if it produces an enormous scaled copy.)

For the dim=200, pmax=70, qmax=70 case this code below took about 830MB and 3h20min on a fast P4. Please check that I didn't make mistakes in the translation. You can try running smaller examples against the original and mine. If I did make mistakes then they can likely be corrected while remaining efficient. Most of my changes are in somewhat independent chunks. I tried to check that moments[1] stayed the same, but I didn't test the Vpq results.

    zernike:=proc(dim,pmax,qmax,image)
    local i,j,y,kmax,k,r,t,Z,Ztot,coef,exponente,size,p,q,pq,s,
          H1,H2,H3,im,imtot,Theta,R,Image,Vpq,moments,x,
          Rpqis,Rpqis1,Rpqis2,bainit,st,oldDigits;
      st,bainit := time(),kernelopts(bytesalloc);
      # Raise Digits during initialization of x, R, and Theta.
      oldDigits,Digits := Digits,max(Digits,trunc(evalhf(Digits)));
      x:=Matrix(dim,1,(i)->evalf(((sqrt(2)*(i-1)))/((dim-1))-1/(sqrt(2))),
                datatype=float);
      y:=x;
      R:=Matrix(dim,dim,(j,i)->evalf(sqrt(x[i,1]^2+y[dim-j+1,1]^2)),
                datatype=float);
      Theta:=Matrix(dim,dim,(j,i)->evalf(arctan(y[dim-j+1,1],x[i,1])),
                datatype=float);
      Digits := oldDigits;
      if type(pmax,odd)=true then
        size:= 1/4*(pmax+1)*(pmax+3);
        pq:=Matrix(size,2,datatype=float);
      else
        size:=1/4*(pmax+2)^2;
        pq:=Matrix(size,2,datatype=float);
      end if;
      userinfo(1,`zernike`,`stage 1`, time()-st, kernelopts(bytesalloc)-bainit);
      i:=0;
      for p from 0 to pmax do
        for q from 0 to p do
          if type(p-abs(q),even)=true then
            i:=i+1;
            pq[i,1]:=p;
            pq[i,2]:=q;
          end if;
        end do;
      end do;
      userinfo(1,`zernike`,`stage 2`, time()-st, kernelopts(bytesalloc)-bainit);
      R:=ArrayTools[Alias](R,[dim^2,1]);
      Theta:=ArrayTools[Alias](Theta,[dim^2,1]);
      Image:=ArrayTools[Alias](image,[dim^2]);
      Vpq:=Matrix(size,dim^2,datatype=complex(float));
      userinfo(1,`zernike`,`stage 3`, time()-st, kernelopts(bytesalloc)-bainit);
      Rpqis1,Rpqis2:=0.0,0.0;
      for s from size by -1 to 1 do
        for i to dim^2 do
          if pq[s,1]=pq[s,2] then
            Rpqis:=evalhf( R[i,1]^(pq[s,1]) );
          elif pq[s,1]-pq[s,2]=2 then
            Rpqis:=evalhf( pq[s,1]*R[i,1]^(pq[s,1])
                           - (pq[s,1]-1)*R[i,1]^((pq[s,1]-2)) );
          else
            H3:=evalhf( (-4*(pq[s+2,2]-2)*(pq[s+2,2]-3))/
                        ((pq[s,1]+pq[s+2,2]-2)*(pq[s,1]-pq[s+2,2]+4)) );
            H2:=evalhf( (H3*(pq[s,1]+pq[s+2,2])*(pq[s,1]-pq[s+2,2]+2))/
                        (4*(pq[s+2,2]-1))+(pq[s+2,2]-2) );
            H1:=evalhf( (pq[s+2,2]*(pq[s+2,2]-1))/2 - pq[s+2,2]*H2
                        + (H3*(pq[s,1]+pq[s+2,2]+2)*(pq[s,1]-pq[s+2,2]))/8 );
            Rpqis:=evalhf( H1*Rpqis2+(H2+H3/(R[i,1]^2))*Rpqis1 );
            Rpqis2,Rpqis1:=Rpqis1,Rpqis;
          end if;
          Vpq[s,i]:=evalhf( Rpqis*(exp(I*pq[s,2]*Theta[i,1])) );
        end do;
        coef:=evalhf( ((2*pq[s,1]+2))/(Pi*((dim))^2) );
      end do;
      userinfo(1,`zernike`,`stage 4`, time()-st, kernelopts(bytesalloc)-bainit);
      moments := LinearAlgebra[MatrixVectorMultiply](Vpq,Image);
      LinearAlgebra[VectorScalarMultiply](moments,coef,inplace=true);
      userinfo(1,`zernike`,`stage 5`, time()-st, kernelopts(bytesalloc)-bainit);
      return moments, Vpq;
    end proc:

    kernelopts(printbytes=false):
    #(dim,pmax,qmax):=200,70,70:
    (dim,pmax,qmax):=20,7,7; # Just for testing.
    mI := LinearAlgebra[RandomMatrix](dim,dim,
            outputoptions=[datatype=complex(float)]):
    # Let it be verbose, about how it's doing.
    infolevel[`zernike`]:=1:
    solmom,solVpq:=zernike(dim,pmax,qmax,mI):
    solmom[1];

Now, what can be said about this double loop, with something like 1200 and 10000 iterations at each level? That's where almost all of the 3h20min of computation time goes. If it didn't have exp(I*pq[s,2]*Theta[i,1]) in it you could stick it inside its own procedure and run Compiler[Compile] against it. You still could do that, actually, by wrapping an eval() around that problematic line involving nonreal, complex computation. But then memory usage might go up high. I'm not sure how much garbage collection is possible while a Compiled routine runs. In order to try this, you would have to use ArrayTools[ComplexAsFloat] on Vpq. I was thinking something like this: first, get rid of the double loop at the end of zernike, and instead make that just one line like, say,

    coef := stage4_compiled(dim,size,pq,R,real_Vpq,Theta);

And then, outside of zernike, have something like all this.

    st4_r := proc(x,y,z) local temp; temp := x*exp(I*y*z); return Re(temp); end proc:
    st4_i := proc(x,y,z) local temp; temp := x*exp(I*y*z); return Im(temp); end proc:

    stage4 := proc(dim::integer, size::integer,
                   pq::Matrix(datatype=float[8]),
                   R::Matrix(datatype=float[8]),
                   Vpq::Matrix(datatype=float[8]),
                   Theta::Matrix(datatype=float[8]))
    local Rpqis, Rpqis1, Rpqis2, H1, H2, H3, s, i, coef;
    global st4_r,st4_i;
      Rpqis1,Rpqis2:=0.0,0.0;
      for s from size by -1 to 1 do
        for i to dim^2 do
          if pq[s,1]=pq[s,2] then
            Rpqis:= R[i,1]^(pq[s,1]);
          elif pq[s,1]-pq[s,2]=2 then
            Rpqis:= pq[s,1]*R[i,1]^(pq[s,1]) - (pq[s,1]-1)*R[i,1]^((pq[s,1]-2));
          else
            H3:= (-4*(pq[s+2,2]-2)*(pq[s+2,2]-3))/
                 ((pq[s,1]+pq[s+2,2]-2)*(pq[s,1]-pq[s+2,2]+4));
            H2:= (H3*(pq[s,1]+pq[s+2,2])*(pq[s,1]-pq[s+2,2]+2))/
                 (4*(pq[s+2,2]-1))+(pq[s+2,2]-2);
            H1:= (pq[s+2,2]*(pq[s+2,2]-1))/2 - pq[s+2,2]*H2
                 + (H3*(pq[s,1]+pq[s+2,2]+2)*(pq[s,1]-pq[s+2,2]))/8;
            Rpqis:= H1*Rpqis2+(H2+H3/(R[i,1]^2))*Rpqis1;
            Rpqis2,Rpqis1:=Rpqis1,Rpqis;
          end if;
          Vpq[2*s-1,i]:= eval(st4_r(Rpqis,pq[s,2],Theta[i,1]));
          Vpq[2*s,i]:= eval(st4_i(Rpqis,pq[s,2],Theta[i,1]));
        end do;
        coef:= ((2*pq[s,1]+2))/(Pi*((dim))^2);
      end do;
      return coef;
    end proc:

    stage4_compiled:=Compiler[Compile](stage4);

You'd almost certainly want to check that I hadn't done the wrong thing with the assignments into Vpq above. You might also try to run CodeGeneration[C] on something like stage4, replacing the exp call by a dummy unrecognized name. Afterwards you could replace that dummy in the produced C source with a call to cexp(). Then you could compile it into a dynamic library outside of Maple, and replace all the stuff for it in the example by setting up a Maple procedure that points to the compiled external function using define_external(). acer
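In case the interleaved real/imaginary storage that stage4 relies on (after applying ArrayTools[ComplexAsFloat] to Vpq) is unclear, here is a small sketch of the same convention in Python, purely for illustration; the function names are mine, not anything from Maple, and the 0-based rows 2s and 2s+1 play the role of the 1-based Vpq[2*s-1,i] and Vpq[2*s,i] assignments.

```python
import cmath

def store_interleaved(values, rows, cols):
    # Store a rows x cols complex matrix in a 2*rows x cols float matrix:
    # row 2*s holds real parts, row 2*s+1 holds imaginary parts (0-based).
    out = [[0.0] * cols for _ in range(2 * rows)]
    for s in range(rows):
        for i in range(cols):
            out[2 * s][i] = values[s][i].real
            out[2 * s + 1][i] = values[s][i].imag
    return out

def load_interleaved(flat, rows, cols):
    # Reassemble the complex matrix from the interleaved float storage.
    return [[complex(flat[2 * s][i], flat[2 * s + 1][i])
             for i in range(cols)] for s in range(rows)]

# A toy 2x2 complex matrix with entries shaped like Rpqis*exp(I*q*Theta).
V = [[1.0 * cmath.exp(0.3j), 2.0 * cmath.exp(0.7j)],
     [0.5 * cmath.exp(1.1j), 1.5 * cmath.exp(2.0j)]]
F = store_interleaved(V, 2, 2)
V2 = load_interleaved(F, 2, 2)
```

The point is only that a purely real (compilable) routine can fill F, and the complex matrix is recovered afterwards without any copying of substance.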


The gauss procedure uses ilcm. Its parameter-processing calls `type/listlist`. The setup of the data A, prior to calling gauss(), uses RandomTools. acer
You showed a result from Maple of 19.99909999. Presumably that was done at the default value of Digits=10. But evalf[11](exp(Pi)-Pi) returns 19.999099979. So why isn't the first result, at the default value of Digits=10, 19.99909998 instead? acer
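For what it's worth, plain round-to-nearest at 10 significant digits does turn the 11-digit value into 19.99909998, which is the point of the question. A quick check in Python (just illustrating the decimal rounding, not Maple's evalf):

```python
# The 11-digit value reported by evalf[11](exp(Pi) - Pi).
eleven_digits = 19.999099979

# Format to 10 significant digits, correctly rounded.
ten_digits = f"{eleven_digits:.10g}"
```

Since the dropped digit is a 9, any round-to-nearest rule gives 19.99909998, not 19.99909999.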
Notice that the answer obtained from A.Vogt's method was 0.0197... times I. That is, it was a purely imaginary number. The result was not the real number 0.0197... Notice also that if you plug in the imaginary number, remembering to make it I*0.0197..., then that does produce a value very close to 0.5. I.e.,

    evalf(abs(f(nTst,0.01970538187*I)));

So the answer I*0.0197... was correct, given that you did not specify a real-valued range for the result from fsolve. You can supply a purely real-valued range to fsolve, e.g.,

    f := (n,v) -> Zeta(0,n+1,1-2*Pi*v*I)/Zeta(n+1):
    nTst := 5:
    fsolve( abs(f(nTst,v))=0.5, v=0..0.2 );

That gives the result 0.08039... that you mentioned. acer
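The tactic of restricting the search to a purely real range and solving abs(f) = 0.5 is language-independent; here is a Python toy using simple bisection. The stand-in function g below is mine (not the Hurwitz-zeta ratio above, which would need an arbitrary-precision library), so only the structure of the search carries over.

```python
import cmath

def g(v):
    # Toy stand-in for f(nTst, v): complex-valued even for real v.
    # Its modulus is exp(-v), so |g(v)| = 0.5 exactly at v = ln 2.
    return cmath.exp(-v) * cmath.exp(2j * v)

def solve_abs_equals(target, lo, hi, tol=1e-12):
    # Bisection on h(v) = |g(v)| - target over a purely real range,
    # mirroring fsolve(abs(f(n,v)) = 0.5, v = 0..0.2).
    flo = abs(g(lo)) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = abs(g(mid)) - target
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

root = solve_abs_equals(0.5, 0.0, 2.0)
```

Taking the modulus first is what forces the unknown to stay real; without it, a complex-capable solver is free to wander off the real axis, exactly as fsolve did.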
So, since limit(n^(1/n),n=infinity) is 1, do you suspect that the partial sums of the alternating series (-1)^n*n^(1/n) would necessarily have two limiting values (one for the even-indexed partial sums, one for the odd)? How do you suspect that this might relate, if at all, to the value returned from

    evalf(sum((-1)^n*n^(1/n),n=1..infinity));

Maybe there already is a body of theory for this sort of thing. What do you foresee a theory about this looking like? I ask because, once one has been motivated by some casual observations, the next step in mathematics is to develop a formalism and a notation. It's not easy to get a good notation, but a sure sign of success is that good notation leads to rich insight. I'm not sure that insight can come from observation alone. acer
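The two limiting values are easy to look at numerically: the even- and odd-indexed partial sums each settle down, and the gap between them tends to lim n^(1/n) = 1. A Python sketch (the 0.18786 figure below is just the approximate limit of the even partial sums, stated for the assertion; the helper name is mine):

```python
import math

def partial_sum(N):
    # S_N = sum_{n=1}^{N} (-1)^n * n^(1/n)
    s = 0.0
    for n in range(1, N + 1):
        term = math.exp(math.log(n) / n)  # n^(1/n)
        s += term if n % 2 == 0 else -term
    return s

# Even- and odd-indexed partial sums converge to two different values,
# separated by lim n^(1/n) = 1.
S_even = partial_sum(100000)
S_odd = partial_sum(100001)
gap = S_even - S_odd
```

The series itself diverges in the classical sense (its terms tend to ±1, not 0), which is what makes the value returned by evalf(sum(...)) an interesting question.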
I must have missed something in this post. Perhaps you could help me understand what some of the insights are? You started off with something like

    F1:=sum((-1)^n*(n^(1/n)-1),n=1..x):
    F2:=sum((-1)^n*(n^(1/n)-2),n=1..x):
    F3:=sum((-1)^n*(n^(1/n)-3),n=1..x):

And then you noticed that at every second value, i.e. for x=2,4,6,..., F1, F2, and F3 agree. But that's just because when x is any even positive integer all three of F1, F2, and F3 simplify immediately to

    sum((-1)^n*n^(1/n), n = 1 .. x);

Now, Maple can show that directly, with

    simplify([F1,F2,F3]) assuming x::even;

You wrote, "Notice that the blue, red and green graphs meet at least one y-value. As x goes to infinity, that y-value is the value of the MRB Constant." But isn't that just the same as the limits of subsequences of the green-coloured function itself, as x goes to infinity? What do the red and blue curves add, for illustrating the behaviour of the green curve? So it seems that you've named sum((-1)^n*n^(1/n),n=1..infinity). Is that right? What do you think of the value that Maple gives when one takes evalf() of that expression? Do you think that the green curve has some subsequences that converge to both a positive number and a negative number? If so, how do you think that those relate to the evalf() result? I couldn't follow what the summations mean when the index ranges from 1 to sqrt(3). I had thought that the dummy index n was representing positive integers. What does this mean,

    sum((-1)^n*n^(1/n), n = 1 .. 27*6^(1/2));

Thanks very much, acer
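The cancellation for even x is easy to confirm numerically. A small Python version of F1, F2, F3 (the helper F and its name are mine): for even x the shift by c cancels in consecutive (+,-) pairs, so all three sums coincide; for odd x one unpaired c survives and they differ by exactly the difference in c.

```python
def F(c, x):
    # sum_{n=1}^{x} (-1)^n * (n^(1/n) - c)
    return sum((n ** (1.0 / n) - c) * (1 if n % 2 == 0 else -1)
               for n in range(1, x + 1))

# Even upper limit: the constant c cancels pairwise, F1 = F2 = F3.
vals_even = [F(c, 6) for c in (1, 2, 3)]

# Odd upper limit: one unpaired -(-c) = +c remains, so they differ.
vals_odd = [F(c, 5) for c in (1, 2, 3)]
```

So the agreement of the three curves at even x carries no information beyond what the single (green) curve already shows.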
No, not really. You've used matrix with a lowercase m. He wants to use LinearAlgebra, so it should be uppercase M, i.e. Matrix. But then your example with square brackets is not good.

    > A := Matrix(3,(i,j)->i+j):
    > sum(sum(A[i,j],i=1..3),j=1..3);
    Error, bad index into Matrix

The best solution could be to use add() instead of sum(). See their help pages for the difference.

    > A := Matrix(3,(i,j)->i+j):
    > add(add(A[i,j],i=1..3),j=1..3);
                                36

It should not be necessary to load LinearAlgebra, using with(), in order to create a Matrix, by the way. Also of possible interest are

    > add( x, x in A );
                                36

and this strange one,

    > sum(sum( ('y->(''x->A[x,y]'')'(i))(j), i=1..3), j=1..3);
                                36

acer
PeridentityMatrix := proc(n::posint)
  LinearAlgebra:-CreatePermutation(Vector([
    seq(n - i, i = 0 .. floor(1/2*n) - 1),
    seq(floor(1/2*n) + 1 + i, i = 0 .. ceil(1/2*n) - 1)]));
end proc:

acer
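If it helps to see what the pivot Vector above builds: applying those row swaps to the identity appears to yield the exchange (anti-diagonal identity) matrix, which reverses a vector on multiplication. A dense Python sketch, assuming that reading is right (the function names are mine):

```python
def peridentity(n):
    # Anti-diagonal identity (exchange) matrix of order n: ones on the
    # anti-diagonal j = n-1-i, zeros elsewhere.
    return [[1 if j == n - 1 - i else 0 for j in range(n)] for i in range(n)]

def matvec(mat, vec):
    # Plain dense matrix-vector product.
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in mat]

J = peridentity(4)
flipped = matvec(J, [1, 2, 3, 4])  # multiplying by J reverses the vector
```

The advantage of CreatePermutation is, of course, that Maple keeps the result as a structured permutation rather than a dense Matrix.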
fsolve should order the roots in a reproducible, non-session-dependent manner. This order can also be "fixed up" to agree with the ordering of indexed RootOf's. But you might also want to make sure that your code is efficient. If you are trying to compute floating-point approximations of several of the roots of a univariate polynomial, then one call to fsolve should suffice. On the other hand, evalf(RootOf(...,index=i)) could end up calling fsolve for each i, and that might be wasted, duplicate effort.

    # Using your e1 and e2 as posted...
    P := convert(subs(E1=e1, E2=e2,
           E1*NZ^4 - NZ^2*(2*E1^2) + E1^3 - E1*E2^2 = 0), rational):
    [seq(evalf(RootOf(P, NZ, index=i)), i=1..4)];
    `RootOf/sort`([fsolve(P, NZ)]);

If you set stopat(fsolve) you can see that the first of these two approaches calls fsolve four times (each time computing all the roots) while the second approach calls fsolve just once and then fixes up the ordering to agree with that of the indexed RootOf's. acer
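The solve-once-then-order-canonically idea is language-independent. Here is a Python sketch with a stand-in quartic (the post's e1/e2 data is not reproduced here), sorting by real part and then imaginary part in the spirit of `RootOf/sort`; all names are mine:

```python
def roots_once():
    # Stand-in for a single fsolve call: all four roots of
    # z^4 - 5z^2 + 4 = (z^2 - 1)(z^2 - 4), in arbitrary order.
    return [2.0, -1.0, 1.0, -2.0]

def canonical_order(roots):
    # One reproducible ordering: by real part, then imaginary part.
    return sorted(roots, key=lambda z: (complex(z).real, complex(z).imag))

# Solve once and order once, instead of re-solving for every index i.
ordered = canonical_order(roots_once())
```

Whatever the actual tie-breaking rule of `RootOf/sort` is, the efficiency point is the same: the expensive root-finding happens once, and the ordering is a cheap post-processing step.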
It could be that the degree of estimated roundoff error is dependent upon the value of vm. As vm gets below some critical value, that estimate of roundoff might tip over some internal limit. After all, the integrand itself, and not just the upper limit of integration, depends on vm. I doubt that evalf/Int would attempt to change epsilon from its initial value in case of failure. More interesting is the question of whether it does, or should, change the working precision on the fly. I would say that it ought not to do so, aside from some fixed initial addition of guard digits. It would be un-Maple-like. The degree of conditioning, or such, of a given problem would affect how high Digits might have to be to attain a given accuracy. In that case, there'd be some examples that could make evalf/Int go away for arbitrarily long periods of time, which few people might like. But evalf/Int allows both epsilon and the working precision to be supplied as options. So there is control. It does what it's told to do. As mentioned, one can raise the working precision, hold epsilon constant, and attain solutions, with no method specified. I found this interesting:

    > foo := proc(v)
    >   if sqrt(sech(v) - sech(VM)) = 0 then
    >     return Float(undefined)
    >   else
    >     return Re(cosh(v)/sqrt(sech(v) - sech(VM)))
    >   end if;
    > end proc:
    > N := (vm,eps) -> evalf(subs({VM=vm, EPS=eps},
    >        Int(eval(foo), 0..VM, epsilon=EPS, method=_d01ajc))):
    > infolevel[`evalf`] := 100:
    > N(.000222, 1.0e-6);
    evalf/int/control: integrating on 0 .. .222e-3 the integrand
    proc(v) if sqrt(sech(v) - sech(0.000222)) = 0 then return Float(undefined)
    else return Re(cosh(v)/sqrt(sech(v) - sech(0.000222))) end if end proc
    evalf/int/control: tolerance = .10e-5; method = _d01ajc
    Control: Entering NAGInt
    trying d01ajc (nag_1d_quad_gen)
    d01ajc: epsabs=.100000000000000e-8; epsrel=.10e-5; max_num_subint=200
    d01ajc: procedure for evaluation is:
    proc (v) if sqrt(sech(v)-sech(.222e-3)) = 0 then return Float(undefined)
    else return Re(cosh(v)/sqrt(sech(v)-sech(.222e-3))) end if end proc
    Error, (in evalf/int) NE_QUAD_MAX_SUBDIV: The maximum number of
    subdivisions has been reached: max_num_subint = 200

Here, failure seems related to the maximal number of subdivisions allowed in the d01ajc integrator. But that doesn't seem to be a limit that is under the user's control as an option to evalf/Int. acer
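To see how a hard cap on subdivisions can defeat an otherwise-fine adaptive integrator near an integrable singularity, here is a toy adaptive Simpson rule in Python with a recursion-depth cap playing the role of max_num_subint. This is a simplified model of the phenomenon, not NAG's d01ajc algorithm, and the 1/sqrt(x) integrand is a stand-in for the sech-difference singularity above.

```python
def adaptive_simpson(f, a, b, tol, max_depth):
    # Recursive adaptive Simpson with a hard cap on subdivision depth.
    hit_cap = [False]

    def simp(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def rec(a, b, fa, fm, fb, whole, depth):
        m = 0.5 * (a + b)
        flm, frm = f(0.5 * (a + m)), f(0.5 * (m + b))
        left = simp(fa, flm, fm, a, m)
        right = simp(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right
        if depth >= max_depth:          # out of subdivisions: give up here
            hit_cap[0] = True
            return left + right
        return (rec(a, m, fa, flm, fm, left, depth + 1)
                + rec(m, b, fm, frm, fb, right, depth + 1))

    fa, fm, fb = f(a), f(0.5 * (a + b)), f(b)
    return rec(a, b, fa, fm, fb, simp(fa, fm, fb, a, b), 0), hit_cap[0]

def g(x):
    # Integrable 1/sqrt(x) singularity; guard x = 0 the way foo guards
    # its singular point.  The exact integral over [0, 1] is 2.
    return 0.0 if x == 0.0 else x ** -0.5

val_capped, capped = adaptive_simpson(g, 0.0, 1.0, 1e-8, 10)
val_deep, deep_capped = adaptive_simpson(g, 0.0, 1.0, 1e-8, 60)
```

With the tight cap the refinement near x = 0 is cut off and accuracy suffers; with a generous cap the same routine converges comfortably, which is the behaviour one would want to be able to request from evalf/Int.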
One could add this: it is not at all uncommon for a function to have a singularity (or even something that merely appears as such) and be very easy to integrate symbolically yet not so straightforward to integrate numerically. Not every numerical integration method will handle such functions the same way. So in evalf/Int, it's easily conceivable that method _Dexp could handle such an integrand without accruing the same sort of roundoff error that method _d01ajc might. Without forcing a given method, evalf/Int can do analysis of the integrand and select whichever methods it wants, in order to obtain the desired accuracy specified by the epsilon option. acer