MaplePrimes Activity


These are replies submitted by acer

@Markiyan Hirnyk Another possibility, similar to A but with lower powers, is,

q*Pi^2*( ( 2*ln(1/(kf2+q)*(q+kf1)) - 1 )*q^2
         + 2*ln(1/kf2*(kf2+q))*kf2^2
         + 2*kf1^2*ln(q/(q+kf1))
         + (2*kf2-2*kf1)*q + kf1^2 );


     2 //          q~ + kf1~ \   2                              2
q~ Pi  ||-1 + 2 ln(---------)| q~  + (2 kf2~ - 2 kf1~) q~ + kf1~
       \\          kf2~ + q~ /


            kf2~ + q~      2         2       q~     \
     + 2 ln(---------) kf2~  + 2 kf1~  ln(---------)|
              kf2~                        q~ + kf1~ /

The leading coefficient of 1/2 in result B can be simplified away with the `normal` command, leaving a similar form.

> normal(B);

     2      2     2                                        2
q~ Pi  (kf1~  - q~  + 2 q~ kf2~ - 2 q~ kf1~ + 2 ln(q~) kf1~

                      2                       2       2
     - 2 ln(kf2~) kf2~  + 2 ln(kf2~ + q~) kf2~  - 2 q~  ln(kf2~ + q~)

                           2       2
     - 2 ln(q~ + kf1~) kf1~  + 2 q~  ln(q~ + kf1~))

As you say, it's a matter of preference, at some point.

@Markiyan Hirnyk Thank you, that is a much terser set of commands than the path I had taken. It seems to produce a slightly longer expression, without some pairs of ln's finally combined, in my Maple 13 or Maple 16. (Perhaps I transcribed your 2D Math code wrongly.) But the important step of converting the arctanh's is done.

I see now that there are other short ways to get something similar in length to what I had, eg.

combine(simplify(convert(evalc(R),ln)));

I'm not sure if you are also interested in obtaining smaller representations of this expression. (If not, then sorry.)

If `R` is the original expression then (under your stated assumptions on `kf1`, `q`, and `kf2`) I suspect that `R` is equal to,

q*Pi^2*(   kf1^2*ln(q^2/(q+kf1)^2)
         + kf2^2*ln((kf2+q)^2/kf2^2)
         + ln((q+kf1)^2/(kf2+q)^2)*q^2
         - q^2 + 2*q*kf2 - 2*q*kf1 + kf1^2 );
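One quick way to gain confidence in such a candidate is a numeric spot check at a few points satisfying the stated assumptions. The positive values below are arbitrary choices of mine (not from the original worksheet), so this is just a sketch,

cand := q*Pi^2*(   kf1^2*ln(q^2/(q+kf1)^2)
                 + kf2^2*ln((kf2+q)^2/kf2^2)
                 + ln((q+kf1)^2/(kf2+q)^2)*q^2
                 - q^2 + 2*q*kf2 - 2*q*kf1 + kf1^2 ):
evalf( eval(R - cand, [q=1.23, kf1=0.45, kf2=2.34]) );

A result of 0. (up to roundoff at working precision) at several such points is not a proof, but it catches transcription slips quickly.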

acer

@Markiyan Hirnyk 

Try it with the `seq` line replaced as,

eqs:=op~({seq(seq([(D@@k)(r)(L)=(D@@k)(f)(L)],k=0..0),L=[2,5,10])});

as per the comment preceding that line.

@stanto As you mention, Maple considers the names to be possibly nonreal here, unless qualified under assumptions.

Compare what you are now asking with the results from,

simplify(e,radical) assuming a::real, Db::real;

and

simplify(e,radical) assuming real;

@majd This is an interesting question, and sorry that I could not respond earlier because of vacation and the recent security lock-out of the site (my valid cookie was expired/clobbered). Presumably this is not something where pdsolve/numeric could be used instead of ArrayInterpolation. It might be possible to compute the finite differences (which fdiff does) oneself, and to apply this in a Compiled procedure after first using ArrayInterpolation to quickly generate the needed subgrids of evaluation points in a fast vectorized manner. Easier for 1D than for the 2D case. I'm afraid that I'll likely be too busy to give it a try, for some weeks.

To get a smoother (ie. not piecewise flat) 2nd derivative from ArrayInterpolation might mean using method=spline and degree=4 or higher. The higher the order used for the spline, though, the worse the 2nd derivative approximations might become for points farther and farther from the boundary.
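For instance, a sketch of that calling sequence (the sample data here are made up purely for illustration),

xdata := Vector([seq(0.1*i, i=0..100)], datatype=float[8]):
ydata := map(x->evalf(sin(x)), xdata):
pts   := Vector([seq(0.05+0.1*i, i=0..99)], datatype=float[8]):
CurveFitting:-ArrayInterpolation(xdata, ydata, pts, method=spline, degree=4):

With degree=4 the interpolant is piecewise quartic, so its 2nd derivative is piecewise quadratic rather than piecewise flat.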

@Markiyan Hirnyk The suggestion to use rhs(sol[1]) instead of eval(k[1],sol) is not so widely reliable, because in older versions of Maple (or in recent Maple if launched with --setsort=0) using rhs(sol[1]) may produce an incorrect answer.

The reason is that in some older Maple versions (or with --setsort=0) the elements of the set sol could be in a session dependent order. The k[2]=... could be sol[1], so sol[1] is not generally reliable as a means to extract the value in the equation k[1]=...

Eg, in Maple 11.02 (or Maple 16.0 launched with --setsort=0) I see,

> restart:
> k[2]: k[2]=666.0: k[1]=5.0:
> sol:={k[1]=5.0,k[2]=666.0};
                       sol := {k[2] = 666.0, k[1] = 5.0}

> rhs(sol[1]);
                                     666.0

> restart:
> sol:={k[1]=5.0,k[2]=666.0};
                       sol := {k[1] = 5.0, k[2] = 666.0}

> rhs(sol[1]);
                                      5.0

@dj_gssst You are using sol[1] and sol[2] inappropriately.

Look at what sol[1] and sol[2] are.

sol[1];

         k[1] = 0.001637935477698018732276851530308612938004131156280124580147547622858728899732152932156471141157968415

sol[2];

         k[2] = 0.8738880449255668655493761487129800277295707224229803653538821365150932040118518225052273427259289080

Those are both equations, not names or scalar values.

If you want to use the name k[1] then just use it, not sol[1]. If you want to use the value in that equation returned by fsolve then use eval(k[1],sol), and not sol[1].
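Concretely, with the fsolve result shown above,

eval(k[1], sol);

returns the scalar value from the equation k[1]=..., whereas k[1] on its own remains the unassigned name, and sol[1] is the whole equation.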

It's somewhat irritating to wait for this site to load all the images of 2D Math that you've inlined into your posts. It would be quicker and nicer (in future) if you could please just type in your comments here, along with links to your uploaded worksheets heavy with 2D Math.

@jimmyinhmb There's a lot to consider here. I'll have to leave parallelization of it until another day, hope that's ok.

Your elementwise approach is mostly getting bogged down by memory management, I think. Consider the statement,

        G1[Mid..Last] := G1[Mid..Last] + exp~( c +~ H[Mid..Last] ):

I count at least five overt Vector workspaces being created there. Two to extract H[Mid..Last] and G1[Mid..Last] on the RHS. One more to do elementwise addition of `c`, and another to do elementwise exponentiation. And then another to do the final addition (prior to recopying into G1).

And the second method below cuts that down to just two overt Vector workspaces created, and gets some improvement. Further improvement comes from getting rid of all iterated temp workspace creation, and acting purely inplace. (A re-usable workspace or two, created just once outside the loop can really help performance.) It seems natural to conclude that the improvements are largely due to avoiding repeated Vector creation and collection as garbage: ie. avoidable memory management. And, indeed, this aspect of the performance improvement seems to match the reduction in kernelopts(bytesused) which is one measure of the amount of memory management.

It might be worth noting that simply putting the whole job into a single procedure that acts inplace on G, and merely Compiling that, is almost the fastest. And yet it is one of the easiest to code.

The following output is from a run with Maple 16.01 on a Core2 Duo running Windows XP. I saw some variation with regard to the relative performance of methods 3-to-5 below when run on Windows 7 or Linux 64bit.

 

restart:

(N,maxiter):=10^5, 10^3:

(Mid, Last):=floor(N/2),N; c:=1.3e-7:

50000, 100000

G:=LinearAlgebra:-RandomVector[row](N,generator=0.0..0.1e-6,outputoptions=[datatype=float[8]]):
# Make a bunch of these, to test the various results, since the job acts inplace.
G1:=copy(G):G2:=copy(G):G3:=copy(G):G4:=copy(G):G5:=copy(G):
H:=LinearAlgebra:-RandomVector[row](N,generator=0.0..0.1e-6,outputoptions=[datatype=float[8]]):

# usage: DAXPY(num,c,x,offsetx,incx,y,offsety,incy)
DAXPY:=ExternalCalling:-DefineExternal('hw_f06ecf',
                                       ExternalCalling:-ExternalLibraryName("linalg")):

EXP:=proc(start::integer[4],finish::integer[4],R::Vector(datatype=float[8]))
   local i::integer[4]:
   for i from start to finish do R[i]:=exp(R[i]); end do:
   NULL;
end proc:
cEXP:=Compiler:-Compile(EXP):

ALL:=proc(start::integer[4],finish::integer[4],R1::Vector(datatype=float[8]),
                                               R2::Vector(datatype=float[8]),
                                               c::float[8])
   local i::integer[4]:
   for i from start to finish do R1[i]:=R1[i]+exp(c+R2[i]); end do:
   NULL;
end proc:
cALL:=Compiler:-Compile(ALL):

run:=[true,true,true,true,true,true]:

if run[1] then
st,str:=time(),time[real](): bu:=kernelopts(bytesused):
   for p from 1 to maxiter do
      # This creates five temporary Vectors at each iteration.
      G1[Mid..Last] := G1[Mid..Last] + exp~( c +~ H[Mid..Last] ):
   end do:
print(time()-st,time[real]()-str); print(kernelopts(bytesused)-bu);
end if:

12.359, 12.406

2001770640

if run[2] then
st,str:=time(),time[real](): bu:=kernelopts(bytesused):
   for p from 1 to maxiter do
      # This creates two temporary Vectors, at each iteration.
      G2[Mid..Last] := LinearAlgebra:-VectorAdd( G2[Mid..Last],
                                                 map[evalhf,inplace](exp, H[Mid..Last] ),
                                                 1, evalhf(exp(c)), inplace ):
   end do:
print(time()-st,time[real]()-str); print(kernelopts(bytesused)-bu);
end if:

5.875, 5.906

806437012

if run[3] then
C3:=Vector[row](N,datatype=float[8]): # We'll re-use this, after Aliasing to right length.
st,str:=time(),time[real](): bu:=kernelopts(bytesused):
   for p from 1 to maxiter do
      # This creates zero temporary Vectors at each iteration.
      C3midlast:=ArrayTools:-Alias(C3,0,[1..Last-Mid+1]):
      ArrayTools:-Copy(Last-Mid+1,H,Mid-1,1,C3midlast):
      map[evalhf,inplace](exp,C3midlast):
      DAXPY(Last-Mid+1,evalhf(exp(c)),C3,0,1,G3,Mid-1,1);
   end do:
print(time()-st,time[real]()-str); print(kernelopts(bytesused)-bu);
end if:

3.031, 3.031

365548

if run[4] then
C4:=Vector[row](N,datatype=float[8]): # We'll re-use this, and no need to Alias it.
st,str:=time(),time[real](): bu:=kernelopts(bytesused):
   for p from 1 to maxiter do
      # This creates zero temporary Vectors at each iteration.
      ArrayTools:-Copy(Last-Mid+1,H,Mid-1,1,C4):
      cEXP(1,Last-Mid+1,C4):
      DAXPY(Last-Mid+1,evalhf(exp(c)),C4,0,1,G4,Mid-1,1);
   end do:
print(time()-st,time[real]()-str); print(kernelopts(bytesused)-bu);
end if:

3.484, 3.500

167212

if run[5] then
st,str:=time(),time[real](): bu:=kernelopts(bytesused):
   for p from 1 to maxiter do
      # This creates zero temporary Vectors at each iteration.
      cALL(Mid,Last,G5,H,c);
   end do:
print(time()-st,time[real]()-str); print(kernelopts(bytesused)-bu);
end if:

3.485, 3.484

31212

# Discrepancies based on using exp(c+h)=exp(c)*exp(h) may or may not
# appear if you are doing something other than just repeatedly adding
# terms involving exp of H. Your computation may do something different,
# and you'd want to check for correctness, at high iterations. My crude
# example is for timing, and understandably for constant c the accuracy
# is affected by maxiter, the number of iterations.

LinearAlgebra:-Norm(zip(`/`,(G1-G2),G1));
LinearAlgebra:-Norm(zip(`/`,(G1-G3),G1));
LinearAlgebra:-Norm(zip(`/`,(G1-G4),G1));
LinearAlgebra:-Norm(zip(`/`,(G1-G5),G1));

0.570707844023472596e-13

0.555928563343654250e-13

0.555928563343654250e-13

0.

 

 

Download maptry3.mw
