MaplePrimes Activity


These are answers submitted by acer

One way to test equivalence of two expressions is to test whether their difference (subtraction) is equivalent (or even directly equal) to zero.

You could look at the help pages for evalb, is, simplify/details, testeq, and verify.

The command `Testzero` is underdocumented. By default `Testzero` uses `Normalizer`.

Some examples, which might be more clear after scanning those pages:

expr1:=sin(alpha+beta):
expr2:=sin(alpha)*cos(beta) + cos(alpha)*sin(beta):

evalb(simplify(expr1-expr2,'trig')=0);

                              true

Normalizer:=t->simplify(t,trig):

Testzero(expr1-expr2);

                              true

evalb(23423*2^5=32);

                             false

eqn:=sqrt(2)*sqrt(3)=6^(1/2);

                      (1/2)  (1/2)    (1/2)
                     2      3      = 6     

evalb(eqn);

                             false
is(eqn);

                              true
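
Just for comparison, here are the other commands mentioned above applied to the same pair of expressions (my own illustration; the exact results may vary by Maple version):

testeq( expr1 - expr2 );             # probabilistic zero-equivalence test
verify( expr1, expr2, 'simplify' );  # verifies by simplifying the difference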

acer

See the documentation of the NAG Fortran Library function f08mef. This is like CLAPACK's dbdsqr function. Its documentation mentions the (QR based, or differential qd based) algorithms used.

You can see which NAG routine is used by LinearAlgebra, from userinfo messages.

> infolevel[LinearAlgebra]:=1:

> LinearAlgebra:-SingularValues(<>):
SingularValues: calling external function
SingularValues: NAG hw_f08kef
SingularValues: NAG hw_f08mef

Maple also ships with precompiled external libraries (shared libraries clapack.dll or libclapack.so) containing the double-precision real and complex (dxxxx and zxxxx) routines from version 3.0 of CLAPACK. If you want another routine from that library, you can call it yourself using the external-calling mechanism. (See here for an example of such user-defined wrapperless external-calling to CLAPACK from Maple.)

You can use these notes to help decide which CLAPACK function you'd like to use for singular value computation. Note that function dbdsdc, for the divide-and-conquer algorithm, is already available in version 3.0. So you could call that directly from stock, shipped Maple, using define_external.

If you are interested in getting more state-of-the-art, then you could consider LAPACK 3.2.x or higher. See these release notes, and in particular section 2 for items 2.5 and 2.10 and the relevant references [5],[6]. Note that v3.2.1 is already available as CLAPACK (which you can obtain as source, or maybe even precompiled). If you build such a more recent version of (C)LAPACK yourself (compiling, linking as a shared dynamic library with appropriate exports, etc.) then you would again be able to call its functions directly using the wrapperless forms of define_external.

acer

At higher-than-double precision, special techniques (hypergeom, if I recall) may be used.

For double-precision, techniques like those provided by the numapprox package allow one to compute an approximating function which will be sufficiently accurate over a desired range (and which could be exported as compilable C source). Usual range reduction techniques can then be used to map an arbitrary point into that range.

Here's a crude example. Note that, while the constructed approximant S does reasonably well, it is not quite as accurate as whatever evalhf uses (which is a longer story for another time...). The plots show the absolute error of each.

restart:

Digits:=50:

S:=numapprox:-chebyshev(sin(x), x=0..evalf(Pi/2), 10^(-28)):

S:=convert(eval(S, T=orthopoly[T]),horner):

S:=unapply(evalf[18](S),x); # something that fits in double-precision code

                           -30   /                      
x -> 1.44062452733909128 10    + \1.00000000000000000 + 

  /                      -26   /                        
  \9.06753070710926319 10    + \-0.166666666666666667 + 

  /                      -23   /                         
  \7.94570910169261477 10    + \0.00833333333333333333 + 

  /                      -21   /                           
  \9.26674469373075778 10    + \-0.000198412698412698471 + 

  /                      -19   /                            
  \2.66256453318299289 10    + \0.00000275573192239766239 + 

  /                      -18   /                       -8   
  \2.49025786553588515 10    + \-2.50521083906818366 10   + 

  /                      -18   /                      -10   
  \8.71393429557927929 10    + \1.60590426859692446 10    + 

  /                      -17   /                       -13   
  \1.20819897391039123 10    + \-7.64726420587879460 10    + 

  /                      -18   /                      -15   
  \6.56111046671949372 10    + \2.80814547640300074 10    + 

  /                      -18   /                       -18
  \1.25828982570811892 10    + \-8.56441515436031830 10   

     /                      -20                         -20  \  \  
   + \6.18404812488481967 10    + 1.37434690531914668 10    x/ x/ x

  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  
  / x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x

f:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(S(x))-eval(sin(x));
  log[10](abs(t));
end proc:

F:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(sin(x))-eval(sin(x));
  log[10](abs(t));
end proc:

plot([f,F,-16],0..Pi/4,numpoints=5000,
     view=[0..Pi/4,-17..-15.5],color=[red,green,gold]);

T15 and T17 can be thrown into the above mix. T17 is almost as accurate as S above, on (0,Pi/4), but T15 is not accurate to as many digits on that crucial range.

T15:=unapply(evalf(convert(convert(taylor(sin(x),x=0,15),polynom),horner)),x):

T17:=unapply(evalf(convert(convert(taylor(sin(x),x=0,17),polynom),horner)),x):

G15:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(T15(x))-eval(sin(x));
  log[10](abs(t));
end proc:

G17:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(T17(x))-eval(sin(x));
  log[10](abs(t));
end proc:

plot([G15,G17,f,F,-16],0..Pi/4,numpoints=1000,
     view=[0..Pi/4,-17..-13.5],color=[cyan,blue,red,green,gold]);
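
As an aside, here is a rough sketch (my own, untested for corner cases) of the kind of range reduction mentioned near the top, folding an arbitrary argument back into [0, Pi/2] before applying the approximant S constructed above; production libraries do the argument reduction much more carefully for very large inputs.

SinApprox := proc(xx)
  local r, s;
  r := frem( evalf(xx), 2*evalhf(Pi) );    # reduce by periodicity; r lies in (-Pi, Pi]
  s := 1;
  if r < 0 then r := -r; s := -1; end if;  # sin(-r) = -sin(r)
  if r > evalhf(Pi/2) then                 # sin(Pi - r) = sin(r)
    r := evalhf(Pi) - r;
  end if;
  s * S(r);
end proc:

SinApprox(11.7), evalf(sin(11.7));  # these should agree closely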

I have quite a few sheets that investigate this stuff, for hardware precision. I keep meaning to find spare time to blog it here.

acer

You asked what the reason was for the observed behaviour. The cause is explained by printlevel.

But that is only the cause. Usually, inserting explicit `print` calls is a more convenient and better-aimed solution than adjusting printlevel.
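
For contrast, a small made-up illustration of the two approaches (raising printlevel displays the results of statements executed inside called procedures, while an explicit print shows exactly the item you choose):

p := proc(x) local y; y := x^2; y + 1; end proc:
printlevel := 5:   # statement results one procedure-level deep now get displayed
p(3);
printlevel := 1:   # restore the default

q := proc(x) local y; y := x^2; print('y' = y); y + 1; end proc:
q(3);              # only the explicitly chosen intermediate value is shown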

acer

Just for fun,

> cat(nprintf~("%a",L)[]);
                        1ax^2aBbac+f(x)

Part of the jape seems to be whether the spaces between entries and commas are wanted.

acer

Your worksheet contains an erroneous space between sin and (5 x) which is causing Maple to think that you have a product of terms sin*(5*x) instead of the desired function application sin(5*x).

Once that erroneous space is removed, the faulty implicit multiplication goes away and your example ought to work as expected.

Please note that, for involved expressions, evalf[5](...) is not a good way to approximate a final result accurate to 5 decimal places. What it does do is make internal numeric computation occur with 5 digits of working precision (possibly with some guard digits for atomic operations). But that is not the same as ensuring 5 places of accuracy for involved, compound expressions. A better approach is to use evalf at normal working precision, and then apply evalf[5] only to the final floating-point result. This is a common misunderstanding. Your particular example may not be involved enough for this distinction between precision and accuracy to have manifested itself.
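
A small made-up illustration of that distinction (the exact digits you see may differ slightly because of guard digits):

expr := sqrt(2)*10^4 - 14142:
evalf[5](expr);           # the whole computation done at 5 working digits: catastrophic cancellation
evalf[5]( evalf(expr) );  # computed at default working precision, then rounded only at the end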

acer

Sometimes a symbolic result is possible for the differentiation, with symbolic n (but not a formal power series out to n=infinity)... and sometimes this will fail miserably.

But it's sort of fun when it works. The adventure is in what gets assigned to `q` below.

restart:

symbolictaylor:=proc(expr,x,x0,n)
                 Sum(1/p!*eval(diff(expr,x$p),x=x0)*(x-x0)^p,p=0..n);
                end proc:

symbolictaylor(f(x),x,x0,n);

              n                                    
            -----                                  
             \                                    p
              )   (diff(f(x0), [x0 $ p])) (x - x0) 
             /    ---------------------------------
            -----           factorial(p)           
            p = 0      
                            
symbolictaylor(exp(A*x),x,x0,n) assuming x::real, x0::real, n::posint;

                    n                         
                  -----                       
                   \     p                   p
                    )   A  exp(A x0) (x - x0) 
                   /    ----------------------
                  -----      factorial(p)     
                  p = 0    
                   
t1 := value(symbolictaylor(exp(A*x),x,x0,3));

                                     1  2                   2
  exp(A x0) + A exp(A x0) (x - x0) + - A  exp(A x0) (x - x0) 
                                     2                       

       1  3                   3
     + - A  exp(A x0) (x - x0) 
       6    
                   
t2 := convert(taylor(exp(A*x),x=x0,4),polynom);

                                     1  2                   2
  exp(A x0) + A exp(A x0) (x - x0) + - A  exp(A x0) (x - x0) 
                                     2                       

       1  3                   3
     + - A  exp(A x0) (x - x0) 
       6    
                   
t2-t1;

                               0

q := simplify(value(symbolictaylor(exp(A*x),x,x0,n)));

               exp(A x) GAMMA(n + 1, A (x - x0))
               ---------------------------------
                         GAMMA(n + 1)    
       
simplify(t1 - eval(q,n=3));

                               0

eval(q,{x=4,x0=0,A=2});

                     exp(8) GAMMA(n + 1, 8)
                     ----------------------
                          GAMMA(n + 1)   
  
evalf(eval(%,n=3));

                          126.3333333

evalf(eval(t2,{x=4,x0=0,A=2}));

                          126.3333333

t1 := value(symbolictaylor(sin(A*x),x,x0,3));

                                     1            2         2
  sin(A x0) + cos(A x0) A (x - x0) - - sin(A x0) A  (x - x0) 
                                     2                       

       1            3         3
     - - cos(A x0) A  (x - x0) 
       6   
                    
t2 := convert(taylor(sin(A*x),x=x0,4),polynom);

                                     1            2         2
  sin(A x0) + cos(A x0) A (x - x0) - - sin(A x0) A  (x - x0) 
                                     2                       

       1            3         3
     - - cos(A x0) A  (x - x0) 
       6       
                
t2-t1;

                               0

q := simplify(value(symbolictaylor(sin(A*x),x,x0,n)))
       assuming x::real, x0::real, n::posint;


     1       /1                                                    
------------ |- I (n + 1) (GAMMA(n + 1, -I A x + I A x0) exp(-I A x
GAMMA(n + 2) \2                                                    

                                              \
  ) - GAMMA(n + 1, I A x - I A x0) exp(I A x))|
                                              /

simplify(convert(t1 - eval(q,n=3),expln)) assuming x::real, x0::real, n::posint;

                               0

simplify(combine(convert(eval(q,{x=4,x0=0,A=2}),expln))) assuming n::posint;

1                                                              
- I (GAMMA(n + 1, -8 I) exp(-8 I) - GAMMA(n + 1, 8 I) exp(8 I))
2                                                              
---------------------------------------------------------------
                         GAMMA(n + 1)     
                     
evalf(eval(%,n=3));

                      -77.33333333 - 0. I

evalf(eval(t1,{x=4,x0=0,A=2}));

                          -77.33333333

Please don't feel a need to give examples where it fails; they're too easy to find.

acer

Have I understood you rightly?

Uncomment the plot commands, if wanted.

restart:

f:=x+y:
g:=x*y:

stage2:=maximize(g,x=0..1);

                             stage2 := max(0, y)

stage1:=maximize(eval(f,x=stage2),y=0..1);

                                 stage1 := 2

stage2:=maximize(g,x=0..1,location);

              stage2 := max(0, y), {[{x = 0}, 0], [{x = 1}, y]}

stage1:=maximize(eval(f,x=stage2[1]),y=0..1,location);

                         stage1 := 2, {[{y = 1}, 2]}

# Let's try a harder example.
restart:

f:=sin(x*y):
g:=exp(x+y)^((x-6/10)^2):

# Let's first try and do as much as we can symbolically.
stage2:=maximize(g,y=0..1):

#plot(stage2,x=0..1);

fopt:=eval(f,y=stage2);

                /     /        /         2\              /         2\\\
                |     |        \(x - 3/5) /              \(x - 3/5) /||
     fopt := sin\x max\(exp(x))            , (exp(x + 1))            //

stage1:=maximize(fopt,x=0..1,location);

                    /        (4/25)\   /[         /        (4/25)\]\ 
       stage1 := sin\(exp(2))      /, { [FAIL, sin\(exp(2))      /] }
                                       \                           / 

evalf(%);

                    0.9813047879, {[FAIL, 0.9813047879]}

Optimization:-NLPSolve(fopt,x=0..1,maximize); # This agrees.

                        [0.981304787948013, [x = 1.]]

# Now let's try and do it purely numerically.

v := proc(X)
      if not type(X,numeric) then 'procname'(args);
      else Optimization:-NLPSolve(eval(g,x=X),y=0..1,maximize)[1];
      end if;
     end proc:

#plot(v(x),x=0..1);

st2num:=Optimization:-NLPSolve(v,0..1,maximize,method=branchandbound)[1];

                         st2num := 1.43332941456034

H:=proc(X)
    if not type(X,numeric) then 'procname'(args);
    else eval(f,{x=X,y=v(X)});
    end if;
   end proc:

#plot(H(x),x=0..1);

Optimization:-NLPSolve(H(x),{x>=0,1>=x},maximize,method=sqp);

Warning, no iterations performed as initial point satisfies first-order conditions
                      [0.981304787948013124, [x = 1.]]

H(1.0);

                              0.981304787948013

objf := proc(V::Vector)
  H(V[1]);
end proc:

objfgradient := proc(X::Vector,G::Vector)
  G[1] := fdiff( H, [1], [X[1]] );
  NULL;
end proc:

Optimization:-NLPSolve( 1, objf, [Matrix([[1],[-1]]),Vector([1,0])],
                       'objectivegradient'=objfgradient,
                       'initialpoint'=Vector([0.3]), 'method'='sqp',
                       'maximize' );

                        [0.981304787948013124, [1.]]

Using other methods, and set up with simple bounds instead of linear constraints, or not in Matrix form as in the last call above, I got some inconsistent results from NLPSolve (as if it were not re-entrant!?).

Have you considered a global optimization approach, such as using DirectSearch or GlobalOptimization?

acer

Try putting them into an Array of plots.

restart:

p1:=plots:-animate(plots:-sphereplot,
                   [exp(6*sin(t))-1, theta=0..2*Pi, phi=0..Pi],
                   t=0..3, frames=100):

p2:=plots:-animate(plots:-implicitplot,
                   [(i)^2 + (j)^2= (t)^2, i = -10..10, j = -10..10 ],
                   t=0..3, frames=100):

plots:-display(Array([p1, p2]));

Or, if you really want them stacked vertically,

plots:-display(Array([[p1],[p2]]));

acer

That's an interesting question. I know of none, as of yet.

Recent additions and enhancements to LAPACK seem (to me) to lean more toward lowering the algorithmic complexity than toward attaining higher accuracy in the eigen-solving areas.

Is there a concrete reason why QR (or divide-and-conquer) based approaches such as LAPACK & Maple's LinearAlgebra provide are not satisfactory for your problems? Is the accuracy inadequate? Or do you have very large sparse Matrices? Or...?

acer

The first one seems simple to get, while the second shows a wrinkle. I suppose that the minor adjustment used to get the second one could be automated (possibly for both connections).

If these points arise from (numerically computed?) solutions of DEs then I'd guess that all this pain might be avoided by using Maple's dedicated facilities for plotting such solutions.

restart:

ch1 := [[0.1e-1, -.56], [0.25e-1, -.57], [0.35e-1, -.555], [.115, -.43], [.16, -.3],
        [.195, -.24], [.28, -.155], [.315, -0.25e-1], [.32, -0.85e-1], [.41, .19],
        [.425, .11], [.425, .425], [.425, .43], [.425, .44], [.43, .12], [.43, .265],
        [.43, .405], [.43, .445], [.435, .295], [.435, .38], [.465, .44], [.5, .29],
        [.505, .31], [.505, .395], [.51, .37]]:

ch2 := [[0.3e-1, -.5], [0.5e-1, .575], [.11, .495], [.15, -.295], [.19, -.33],
        [.23, -.155], [.23, .475], [.255, .55], [.265, -.205], [.27, .465],
        [.3, -0.25e-1], [.305, -0.15e-1], [.31, .54], [.32, 0.15e-1], [.325, .445],
        [.335, -0.85e-1], [.335, .44], [.365, .115], [.38, -0.5e-2], [.405, .36],
        [.41, 0.5e-1], [.485, .2], [.5, .235], [.505, .465], [.51, .46], [.525, .305]]:

ch11 := simplex[convexhull](ch1,output=[hull]):
plots:-display( plot(ch11), plot(ch1,style=point) );

ch1b:={op(ch1)} minus {op(ch11)}:
ch1b1:=simplex[convexhull](ch1b,output=[hull]):
plots:-display( plot([op(ch11),op(ListTools:-Reverse(ch1b1)),ch11[1]]),
                plot(ch1,style=point) );

ch22 := simplex[convexhull](ch2,output=[hull]):
plots:-display( plot(ch22), plot(ch2,style=point) );

ch2b:={op(ch2)} minus {op(ch22)}:
ch2b2:=simplex[convexhull](ch2b,output=[hull]):
plots:-display( plot([op(ch22),ch2b2[1],op(ListTools:-Reverse(ch2b2[2..-1])), ch2[1]]),
                plot(ch2,style=point) );

acer

> restart:

> s1:="LUklbXJvd0c2Iy9JK21vZHVsZW5hbWVHNiJJLFR5cGVzZXR0aW5n\
R0koX3N5c2xpYkdGJzYuLUkjbWlHRiQ2JVEiQUYnLyUnaXRhbGljR1EldHJ1ZUY\
nLyUsbWF0aHZhcmlhbnRHUSdpdGFsaWNGJy1JI21vR0YkNi1RKiZjb2xvbmV\
xO0YnL0YzUSdub3JtYWxGJy8lJmZlbmNlR1EmZmFsc2VGJy8lKnNlcGFyYXRv\
ckdGPS8lKXN0cmV0Y2h5R0Y9LyUqc3ltbWV0cmljR0Y9LyUobGFyZ2VvcEdGPS\
8lLm1vdmFibGVsaW1pdHNHRj0vJSdhY2NlbnRHRj0vJSdsc3BhY2VHUSwwLjI3\
Nzc3NzhlbUYnLyUncnNwYWNlR0ZMLUkjbW5HRiQ2JFEiNEYnRjktRjY2LVEiO0Yn\
RjlGOy9GP0YxRkBGQkZERkZGSC9GS1EmMC4wZW1GJ0ZNLUknbXNwYWNlR0\
YkNiYvJSdoZWlnaHRHUSYwLjBleEYnLyUmd2lkdGhHRlgvJSZkZXB0aEdGaG4vJ\
SpsaW5lYnJlYWtHUShuZXdsaW5lRictRjY2LVEifkYnRjlGO0Y+RkBGQkZERkZGS\
EZXL0ZORlgtRiw2JVEiQkYnRi9GMkY1LUZQNiRRIjdGJ0Y5RlMvJStleGVjdXRhYm\
xlR0Y9Rjk=":

> sfrombase64:=StringTools:-Decode(s1[1..-1],encoding=base64);

"-I%mrowG6#/I+modulenameG6"I,TypesettingGI(_syslibGF'6.-I#miGF$6\

  %Q"AF'/%'italicGQ%trueF'/%,mathvariantGQ'italicF'-I#moGF$6-Q*&\

  coloneq;F'/F3Q'normalF'/%&fenceGQ&falseF'/%*separatorGF=/%)str\

  etchyGF=/%*symmetricGF=/%(largeopGF=/%.movablelimitsGF=/%'acce\

  ntGF=/%'lspaceGQ,0.2777778emF'/%'rspaceGFL-I#mnGF$6$Q"4F'F9-F6\

  6-Q";F'F9F;/F?F1F@FBFDFFFH/FKQ&0.0emF'FM-I'mspaceGF$6&/%'heigh\

  tGQ&0.0exF'/%&widthGFX/%&depthGFhn/%*linebreakGQ(newlineF'-F66\

  -Q"~F'F9F;F>F@FBFDFFFHFW/FNFX-F,6%Q"BF'F/F2F5-FP6$Q"7F'F9FS/%+\

  executableGF=F9"

> fromdotm:=sscanf(sfrombase64[1..-1],"%m"):

> lprint(fromdotm);

[Typesetting:-mrow(Typesetting:-mi("A", italic = "true", mathvariant = "italic"), Typesetting:-mo("&coloneq;", mathvariant = "normal", fence = "false", separator = "false", stretchy = "false", symmetric = "false", largeop = "false", movablelimits = "false", accent = "false", lspace = "0.2777778em", rspace = "0.2777778em"), Typesetting:-mn("4", mathvariant = "normal"), Typesetting:-mo(";", mathvariant = "normal", fence = "false", separator = "true", stretchy = "false", symmetric = "false", largeop = "false", movablelimits = "false", accent = "false", lspace = "0.0em", rspace = "0.2777778em"), Typesetting:-mspace(height = "0.0ex", width = "0.0em", depth = "0.0ex", linebreak = "newline"), Typesetting:-mo(" ", mathvariant = "normal", fence = "false", separator = "false", stretchy = "false", symmetric = "false", largeop = "false", movablelimits = "false", accent = "false", lspace = "0.0em", rspace = "0.0em"), Typesetting:-mi("B", italic = "true", mathvariant = "italic"), Typesetting:-mo("&coloneq;", mathvariant = "normal", fence = "false", separator = "false", stretchy = "false", symmetric = "false", largeop = "false", movablelimits = "false", accent = "false", lspace = "0.2777778em", rspace = "0.2777778em"), Typesetting:-mn("7", mathvariant = "normal"), Typesetting:-mo(";", mathvariant = "normal", fence = "false", separator = "true", stretchy = "false", symmetric = "false", largeop = "false", movablelimits = "false", accent = "false", lspace = "0.0em", rspace = "0.2777778em"), executable = "false", mathvariant = "normal")]

> sprintf("%a",fromdotm);

"[Typesetting:-mrow(Typesetting:-mi("A",italic = "true",mathvari\

  ant = "italic"),Typesetting:-mo("&coloneq;",mathvariant = 

   "normal",fence = "false",separator = "false",stretchy = 

   "false",symmetric = "false",largeop = "false",movablelimits 

   = "false",accent = "false",lspace = "0.2777778em",rspace = 

   "0.2777778em"),Typesetting:-mn("4",mathvariant = "normal"),Ty\

  pesetting:-mo(";",mathvariant = "normal",fence = "false",separ\

  ator = "true",stretchy = "false",symmetric = "false",largeop 

   = "false",movablelimits = "false",accent = "false",lspace = 

   "0.0em",rspace = "0.2777778em"),Typesetting:-mspace(height = 

   "0.0ex",width = "0.0em",depth = "0.0ex",linebreak = 

   "newline"),Typesetting:-mo(" ",mathvariant = "normal",fence 

   = "false",separator = "false",stretchy = "false",symmetric = 

   "false",largeop = "false",movablelimits = "false",accent = 

   "false",lspace = "0.0em",rspace = "0.0em"),Typesetting:-mi("B\

  ",italic = "true",mathvariant = "italic"),Typesetting:-mo("&co\

  loneq;",mathvariant = "normal",fence = "false",separator = 

   "false",stretchy = "false",symmetric = "false",largeop = 

   "false",movablelimits = "false",accent = "false",lspace = 

   "0.2777778em",rspace = "0.2777778em"),Typesetting:-mn("7",mat\

  hvariant = "normal"),Typesetting:-mo(";",mathvariant = 

   "normal",fence = "false",separator = "true",stretchy = 

   "false",symmetric = "false",largeop = "false",movablelimits 

   = "false",accent = "false",lspace = "0.0em",rspace = 

   "0.2777778em"),executable = "false",mathvariant = "normal")]"

> print(op(fromdotm));

                                    A := 4;

                                    B := 7;

acer

The solution seems simple, but you may not like to hear it. The repeated nesting of unapply, function application, and evalf@simplify is unnecessary and is causing your code to be thousands of times less efficient than it ought to be.

Rewrite the entire worksheet and completely change your methodology. Forbid yourself from creating operators or procedures, altogether. Forbid yourself from using `unapply`. Forbid yourself from calling `evalf` on a symbolic expression which you then subsequently pass to `simplify`.

Use expressions instead of operators, function calls, and unapply. Call simplify sparingly. Never call evalf on a symbolic expression which is not expected to produce a purely numeric, nonsymbolic result. Never call `simplify` on a symbolic expression with floating-point coefficients within it (and hopefully never create such expressions).
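
To illustrate the general flavour (with made-up names and values, not your actual code):

ee := a*sin(x) + x^2:                   # keep a symbolic expression around
eval( ee, [a = 2, x = Pi/6] );          # substitute values only where needed
evalf( eval( ee, [a = 2, x = Pi/6] ) ); # float it only at the very end

# rather than repeatedly doing things like
#    F := unapply( evalf(simplify(ee)), x ):   F(Pi/6);
# at every step.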

If you are bent on using some procedure, then let it be just one procedure which would take as its arguments values for the parameters ymin, ymax, amin, amax, eps, etc. This would wrap around almost the whole sheet. Possibly, just possibly, consider having another inner procedure which might take as its arguments numeric values for y, a, d, etc. But that would be for moving the whole computation entirely to pure numerics -- and I don't know whether you want that or not, so first investigate whether you want the symbolic approach, done efficiently.

The define_external call assigned to `init` is unusual. It passes the 'MAPLE' option (as if it were calling a custom wrapper which did all its own argument processing), but it also passes all the typed parameters for the external function (as if it were using wrapperless external calling). Usually a define_external call has one or the other of those, but not both. I could be wrong, but my guess is that you don't want/need the 'MAPLE' option for `init`, while you do for `FindZerosOfFirstAndSecondDerivatives`, which is more obviously a custom wrapper.

acer

In your first example, you have not copied the Vector assigned to `a` when you assign it also to `b`. All you've accomplished is to make the value of `b` be that very same object.

This behaviour, in the first example, should not seem mysterious. The most natural way to get `b` to be assigned a copy of the Vector assigned to `a` is to use the copy command (a top-level command, not part of any package, which knows how to copy various sorts of object).

Note that there is a flip-side to this copying business, which is related to uniqueness. A Vector is a so-called mutable data structure, which means that its entries can be changed in-place. This is related, in some respects, to the fact that there can be more than one distinct length-2 Vector with the entries x and y. Changes to one such Vector do not change the other.

>  restart:

> V1:=Vector([x,y]):

> V2:=copy(V1):

> {V1,V2}; # This surprises some people, but it shouldn't.

                                 /[x]  [x]\ 
                                { [ ], [ ] }
                                 \[y]  [y]/ 

> evalb(V1=V2); # This surprises some people, but it shouldn't.

                                    false

> V1[2]:=17:

> V2[2];
                                      y

So, some people are surprised that mere assignment does not make distinct copies. And some people are surprised that separate instances are recognized as being distinct.

You can contrast all that you've seen here for rtables (of which a Vector is one kind, the other two being Matrix and Array) with the behaviour of lists and tables (the latter of which also form the data structure underlying the lowercase array, matrix, and vector). How do these differ, you may ask? Well, for lists mutability is faked, which is much discussed as the reason why list-entry assignment is highly inefficient and therefore quite dubious. When you assign into just one entry of a list, the Maple kernel actually goes and replaces the whole structure with a new object behind the scenes. And tables have last-name-eval, which makes them behave differently, especially when passed as arguments to a procedure.
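
A small made-up example of that list behaviour:

> L := [x, y]:
> Lold := L:    # another reference to the very same list
> L[1] := 5:    # the kernel quietly builds a brand new list and rebinds L to it
> L;
                                    [5, y]

> Lold;         # the original list object is untouched
                                    [x, y]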

Which brings us to your second example. The ArrayTools:-Copy behaviour is a little unusual. But the broader patterns of behaviour are motivated in part by efficiency needs and by argument passing for procedures. I'll try to explain:

Suppose you create a length-2 Vector V with entries x and y, where x and y are not yet assigned. And then you subsequently assign a value to x, let's say a value of 5.

> restart:

> V := Vector([x,y]):

> x := 5:

The Vector V still has entries of x and y. If you access V[1], then you get the fully evaluated result of 5. But the entry of V is still just x. When you cause V to be printed, then really you're just seeing how the `print/rtable` command sees V when in turn it has to access the entries. You can still see the stored entries of V using `lprint`.

> V[1];

                                  5

> V;

                                     [5]
                                     [ ]
                                     [y]

> lprint(V);
Vector[column](2, {1 = x, 2 = y}, datatype = anything, storage = rectangular, order = Fortran_order, shape = [])

Now, what would happen if we passed V as an argument to a procedure? Would we want all entries of V to be evaluated fully (as happens for some other structures...), or not?

Suppose we have an enormous Vector V, and the procedure will only access a small number of the entries a small number of times. In this scenario, we would very much want the entries of V to not all be fully evaluated, as far as what is seen inside the procedure. We'd want behaviour like at the top-level, outside the procedure, where evaluation only occurs upon individual entry access.

This is, in fact, the default behaviour for how rtables (e.g. Vectors) are passed.

> f:=proc(S)
>    lprint(S);
>    S[1];
> end proc:

> f(V);
Vector[column](2, {1 = x, 2 = y}, datatype = anything, storage = rectangular, order = Fortran_order, shape = [])

                                      5

Now suppose that all entries of V will be accessed by the algorithm within the procedure, and that this will happen many times. In this scenario, we'd very much prefer that the entries of V have already been fully evaluated inside whatever is assigned to that formal parameter of the procedure, because then each subsequent access would be fast and not require the full evaluation and resolution. There is a (rarely mentioned) command to handle this (somewhat uncommon) situation, called rtable_eval. Since these situations relate to efficiency, it's important to be able to resolve the values of the rtable in-place, without the extra memory allocation entailed by fully copying and replacing the structure.

> restart:

> V := Vector([x,y]):

> y := 13:

> lprint(V);
Vector[column](2, {1 = x, 2 = y}, datatype = anything, storage = rectangular, order = Fortran_order, shape = [])

> f := proc(S)
>    rtable_eval(S,inplace);
>    lprint(S);
> end proc:

> f(V);
Vector[column](2, {1 = x, 2 = 13}, datatype = anything, storage = rectangular, order = Fortran_order, shape = [])

> lprint(V); # different!
Vector[column](2, {1 = x, 2 = 13}, datatype = anything, storage = rectangular, order = Fortran_order, shape = [])

All the same fundamental issues were also present for the older (now deprecated) table-based array, matrix, and vector objects. But since these had last-name-eval they faced the problems slightly differently. You can still see the workarounds, though. For example, several `linalg` package commands would accept a matrix argument A and immediately make an assignment to a local B of the form B:=eval(A). Sometimes that was done so that an inplace algorithm could act on the local copy B, but sometimes it was done because the entries of the last-name-eval matrix A were not all fully evaluated, as happens for other arguments to the procedure.

Now, the ArrayTools:-Copy command is interesting. It is not a kernel builtin, but it is a compiled routine. It appears to have some unusual behaviour, and I'd label your example as an anomaly. I'd stick with the top-level `copy` command instead, if I were you and I wanted to easily create a distinct copy of a Vector/Matrix/Array. Or assign the entries of the new Vector by some other means, if desired.

> restart:
> a := Vector([x,y]):
> b := copy(a):
> b;

                                     [x]
                                     [ ]
                                     [y]

> y:=13:
> a;

                                    [x ]
                                    [  ]
                                    [13]

> b;

                                    [x ]
                                    [  ]
                                    [13]

> restart:
> a := Vector([x,y]):
> b := Vector([0,0]):
> b[1],b[2]:=a[1],a[2]:
> b;

                                     [x]
                                     [ ]
                                     [y]

> y:=13:
> a;

                                    [x ]
                                    [  ]
                                    [13]

> b;

                                    [x ]
                                    [  ]
                                    [13]

> restart:
> a := Vector([x,y]):
> b := Vector([0,0]):
> b[1..2]:=a[1..2]:
> b;

                                     [x]
                                     [ ]
                                     [y]

> y:=13:
> a;

                                    [x ]
                                    [  ]
                                    [13]

> b;

                                    [x ]
                                    [  ]
                                    [13]

acer

  add(points[2*k,2],k=1..45); 

acer
