Dave Linder

Software Architect, Maplesoft
Waterloo, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Dave L

@goli See this answer.

@goli If you'd like further explanation then please ask here and not as a duplicate in a new Question.

@Michael Did you forget to re-assign the result of eval(try2,G=M1) to try2?
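For the record, eval returns a new expression rather than modifying anything in-place, so its result must be captured by an assignment. A minimal sketch (the expression here is just illustrative):

> try2 := x^2 + G:
> try2 := eval(try2, G = M1):   # re-assign, since eval does not act in-place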

@Axel Vogt I agree, simple is often better. (E.g., no need to introduce lexical scoping, or to keep track of which name is which in 2-argument eval.)

And if F or G are more complicated expressions in x, then it might still be done like so:

F:=sin(x):

G:=cos(x):

h:=unapply(F,x)@unapply(G,x):

h(x);
sin(cos(x))

I'm supposing (as others are too, I think) that he wants an `h` which can be applied. Otherwise the expression returned by h(x) would of course just be obtained as

eval(F,x=G);
                          sin(cos(x))

@Thomas Richard It can also be made more efficient, I think. (Note the x-y switch below.)

> with(combinat):
> with(Statistics):

> M := LinearAlgebra:-RandomMatrix(5, 5):

> XYValues := convert(sort(permute([seq(1 .. 5), seq(1 .. 5)], 2)), Matrix):
> ZValues := convert(ListTools:-Flatten(convert(M, listlist)), Vector):
> Fit(a*x^2+b*y^2+c*x+d*y+e, XYValues, ZValues, [x, y]);

        14.4428571428571 x^2 + 3.01428571428571 y^2 - 80.5571428571428 x
            - 25.1057142857143 y + 136.360000000000

> XYValues := Matrix(sort(permute([seq(1 .. 5), seq(1 .. 5)], 2))):
> ZValues := ArrayTools:-Reshape(M,[5*5]):
> Fit(a*x^2+b*y^2+c*x+d*y+e, XYValues, ZValues, [y, x]);

        14.4428571428571 x^2 + 3.01428571428572 y^2 - 80.5571428571428 x
            - 25.1057142857144 y + 136.360000000000

But ArrayTools:-Alias could be used as an efficient alternative to either ArrayTools:-Reshape above, or to convert(M,Vector).
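For instance, a hedged sketch of that Alias variant (assuming M is the 5x5 Matrix above, with default rectangular storage), which should produce a 25-element Vector view over M's data without copying:

> ZValues := ArrayTools:-Alias(M, [5*5]):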

There are lots of ways to create the XYValues. Another way, which scales pretty well, is this:

XYValues := Matrix(5*5,2,(i,j)->`if`(j=1,`iquo`(i-1,5)+1,`mod`(i-1,5)+1));

For much larger dimensions, XYValues could be constructed efficiently using ArrayTools.

> restart:
> N:=500:

> st:=time[real]():
> XYValues1 := Matrix(sort(combinat:-permute([seq(1 .. N),
>                                             seq(1 .. N)], 2)),
>                     datatype=float[8]):
> time[real]()-st;
                             48.234

> st:=time[real]():
> XYValues2 := Matrix(N*N,2,(i,j)->`if`(j=1,`iquo`(i-1,N)+1,`mod`(i-1,N)+1),
>                         datatype=float[8]):
> time[real]()-st;
                             1.359

> st:=time[real]():
> XYValues3 := Matrix(N*N,2,datatype=float[8]):
>       B := Vector(N,j->j,datatype=float[8]):
>       for i from 1 to N do
>          ArrayTools:-Fill(N,i,XYValues3,(i-1)*N,1);
>          ArrayTools:-Copy(N,B,0,1,XYValues3,N*N+(i-1)*N,1);
>       end do:
> time[real]()-st;
                             0.032

> LinearAlgebra:-Norm(XYValues1-XYValues2);
                               0.

> LinearAlgebra:-Norm(XYValues1-XYValues3);
                               0.

@xely12 Whether you use copy or LinearAlgebra:-Copy, it's up to you to assign the result to some name.

Eg.

> A := copy(B);

> C := copy(B);

> A := LinearAlgebra:-Copy(B);

> Z := LinearAlgebra:-Copy(B);

or whatever you intend to be assigned the copied Matrix (be it C, or A, or Z, or...).

One more thing...

Matrices were mentioned. They present a difficulty for ArrayTools:-Copy since copying a sub-Matrix could require many separate calls (e.g., once for each row, or for each column).
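To illustrate (the dimensions and names here are made up), copying the first p entries from each of several columns of Old into New via ArrayTools:-Copy takes one call per column, assuming default Fortran-order storage:

> m := 100:                      # number of rows in Old and New (hypothetical)
> p := 50:                       # entries to copy from each column
> for j from 1 to 20 do          # one Copy call per column
>    ArrayTools:-Copy(p, Old, (j-1)*m, 1, New, (j-1)*m, 1);
> end do: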

But in Maple 12 the routine ArrayTools:-BlockCopy appeared, which addressed this issue for Matrices and Arrays.

These copying routines can also help avoid transient rtable object creation in situations other than as arguments to functions. Consider indexed assignment. E.g., for Matrices `New` and `Old`

  New[35..105,15..165] := Old[40..110,20..170];

In the above example, Maple creates a new sub-Matrix which is Old[40..110,20..170]. Such rtable indexing syntax is convenient, but it uses more resources than are strictly needed. The (anonymous) temporary sub-Matrix is created, used for the copy, and then, since it is no longer referenced, it should be up for grabs at the next gc sweep. This is the kind of opportunity where BlockCopy is beneficial. It can be an efficient alternative to the above syntax.

restart:
M:=Matrix(6000,6000,datatype=float[8]):
N:=Matrix(6000,6000,datatype=float[8]):
kernelopts(bytesalloc);
                           576786288

st:=time(): M[2..5999,2..5999]:=N[2..5999,2..5999]: time()-st;
                             0.110

kernelopts(bytesalloc);
                           864594320

restart:
M:=Matrix(6000,6000,datatype=float[8]):
N:=Matrix(6000,6000,datatype=float[8]):
kernelopts(bytesalloc);
                           576786288

st:=time():
ArrayTools:-BlockCopy(N,6000,6000,6000,5998,M,6000,6000,6000,5998);
time()-st;
                             0.063

kernelopts(bytesalloc);
                           576786288

The optional arguments and calling sequence of ArrayTools:-BlockCopy are even more beautiful than those of ArrayTools:-Copy.

For large amounts of float[8]/complex[8] data, copying routines like BlockCopy should be fast, since they often make calls to compiled BLAS functions which are tuned to the L1/L2 cache (so as to avoid misses).

Maple does not create any new object for the "indexed" syntax used on the LHS of that assignment statement.

When I first read of the "Programmer indexing" on the rtable indexing help page, there appeared a glimmer of hope that super-efficient rtable-Aliasing might get invisibly tied into the new round-bracket syntax. That might have allowed avoiding all such temporary object creation when doing rtable indexing on the RHS of assignment statements. But it is not so implemented, mostly because indexing functions, non-rectangular storage, and the need for external passing make it unworkable in general. Ah, well.

Dave Linder
Mathematical Software, Maplesoft

@PatrickT Thanks, Patrick -- I'll take didactic in the sense of pedagogic rather than pedantic. ;)

The result from kernelopts(bytesused) is an indicator of how much memory has been used and re-used. A high figure usually means that a lot of garbage collection has taken place.

The results from kernelopts(bytesalloc) show how much memory Maple's kernel has had to allocate or take from the operating system (OS). Since memory is not returned to the OS without a Maple `restart`, the value of bytesalloc at any given time shows the total memory that Maple has had to allocate since the last `restart`. But not all of that memory may be currently used for storing objects -- some of it may be memory that has been cleared by garbage collection (gc).
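So one rough way to gauge the extra allocation a single task requires is to difference bytesalloc around it, right after a restart; a sketch:

> restart:
> before := kernelopts(bytesalloc):
> M := Matrix(2000, 2000, datatype=float[8]):   # about 32 MB of float[8] data
> kernelopts(bytesalloc) - before;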

In the commandline interface, by default, a summary of memory use is printed (pretty much with each gc). If you see a slew of such printed messages then you can interpret this as meaning there is a lot of garbage collection happening. These messages can be turned off with the kernelopts(printbytes) setting. These messages look like this:

memory used=45.8MB, alloc=38.4MB, time=20.47
memory used=76.3MB, alloc=38.4MB, time=34.56

Generally, if you see that "memory used"=bytesused message frequently in your commandline session, with values growing large, then your computation is using and re-using a lot of memory. Objects are being created and discarded.

In contrast, in the Standard GUI the status bar at bottom shows something akin to bytesalloc. It might be useful if the GUI were to also display (optionally, maybe off by default) the bytesused as another ticker value.

It's not crystal clear how anyone should best use these things to measure "efficiency". If anything they are just rough guides of how Maple is using memory. There are more sophisticated tools for investigating the time and memory resources used by some code, such as ?CodeTools,Profiling and ?exprofile . It's not really true to say that these are tools for measuring "efficiency" or "performance", unless we agree on what those terms mean. I'll try to explain some of the difficulty:

Suppose that you are doing a lot of computations. Suppose that Maple has allocated enough memory, and from a given point onward can simply recollect and re-use memory already allocated. In that case you likely won't see subsequent change in bytesalloc. So, in a sense, bytesalloc is merely a crude way to tell how much extra memory is needed for a given task, provided that you measure the task right at the start of the session, that memory has not yet been allocated and cleared, etc, etc. There are lots of caveats here to interpreting that measurement's change sensibly.

One way to compare performance is to do what you did in the parent post from which this current post was branched. You separately do each short simple variant, as quickly after restart as you can (just setting up data and required routine code beforehand). That seems like a practical and simple way to inspect time and memory use and allocation requirements of the particular method. You avoid the supposed pitfall of measuring two alternate methods back to back. The danger of measuring computations done back to back is that the second method's attempt might indicate spurious measurements due to garbage collection occurring during the calculation. Here's something I've seen done, where both taskA and taskB generate a lot of garbage that just happens to only get collected during the computation that follows them(!):

> restart:
> time( taskA );
2.0 sec
> time( taskB );
15.0 sec

And the faulty conclusion is that taskA is faster than taskB. The truth might only emerge if you happen to observe:

> restart:
> time( taskB );
1.0 sec

> time( taskA );
13.0 sec

There's a related but rare scenario, where the huge memory allocation of the first task makes the second task require the OS to swap out memory.

It's tempting to take that danger into consideration and conclude that wholly separate test runs are the best way to measure performance. It's tempting to assume that only a clean session will get performance measured clearly.

But, really, isn't it more the case that we want best performance under typical usage by the user? The user might call some routine, typically, in mid-session. Maybe it is better to do timings only after Maple's done lots of earlier work, allocation and creating garbage galore. How do methods compare at that point, in this more typical scenario? Why not do test runs more like taskA, taskB, taskA, taskB, etc? Possibly with loops. Other variations on this are also possible. It's not even true that this alternative measurement scheme is better, since it might not in fact be typical usage. Which way you choose to measure performance could depend on which way it will get used.
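An interleaved run along those lines might look like the following sketch, where taskA and taskB stand in for the two hypothetical methods being compared:

> for k from 1 to 3 do
>    st := time[real](): taskA(): tA[k] := time[real]() - st;
>    st := time[real](): taskB(): tB[k] := time[real]() - st;
> end do:
> seq(tA[k], k = 1 .. 3);
> seq(tB[k], k = 1 .. 3);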

I need to point out that garbage collection is not inherently bad. There are lots of problems for which it is natural in pragmatic computational implementations. If Maple had no memory management scheme then it would have to keep allocating memory for every new object, and solving far fewer problems would be possible. In that sense, memory management makes Maple more efficient, as it can thus re-use memory instead of requiring ever more allocation. I've been discussing that transient object creation is sometimes avoidable. Also, it may inadvertently sound like I'm saying that all this extra cost is in `gc`. But it's likely that a great deal of the cost is in the creation (and not just the disposal) of the temporary objects. The bytesused value can be, however, an indicator that much object creation and disposal is occurring.

Dave L

@PatrickT Using your original timing example, I get this (Windows XP, Core2 Duo, Maple 14):

restart:
A := Matrix(5,5,(i,j) -> a||i||j):
B := Matrix(5,5,(i,j) -> b||i||j):
t := time():
for i from 1 to 1000 do
   LinearAlgebra:-KroneckerProduct(A,B);    
end do:
time() - t;
                             1.313

restart:
KPKP:=proc(A,B)
local m,n,p,q,i,j,res,CC,
      offi_l,offi_u,offj_u,offj_l;
  m,n:=op(1,A);
  p,q:=op(1,B);
  res:=Matrix(m*p,n*q);
  CC:=Matrix(p,q);
  for i from 1 to m do
    offi_l,offi_u:=p*(i-1)+1,p*i;
    for j from 1 to n do
      offj_l,offj_u:=q*(j-1)+1,q*j;
      ArrayTools:-Copy(p*q,B,0,1,CC,0,1);
      map2['inplace'](`*`,A[i,j],CC);
      res[offi_l..offi_u,offj_l..offj_u]:=CC;
    end do;
  end do;
  res;
end proc:
A := Matrix(5,5,(i,j) -> a||i||j):
B := Matrix(5,5,(i,j) -> b||i||j):
t := time():
for i from 1 to 1000 do
   KPKP(A,B);    
end do:
time() - t;
                             0.860

Thanks, Joe.

Yes, sometimes ArrayTools:-Alias is preferred. But sometimes it is not. One problematic case is when the rtable (Vector, Matrix, Array) has an indexing function or non-rectangular storage (e.g. sparse) -- which Alias doesn't support.

Another problem case is where the `f` acts in-place. That's not so relevant here but may pertain more to the parent post from which this post was branched.

Another problem case may be when the `f` passes the object externally and is not designed to somehow figure out and communicate externally any offset of the Aliased object (w.r.t. the original data object, like V).

And then there is a whole range of related issues when the original data object is a Matrix. Then the Alias offset may be a problem (possibly worked around in other ways). Of course, in that case copying, if done in many function calls, may be more expensive.
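When the rtable does have plain rectangular storage, Alias should return a view over the same data, so writes through the alias show up in the original; a small sketch (names illustrative):

> V := Vector([1., 2., 3., 4., 5., 6.], datatype=float[8]):
> W := ArrayTools:-Alias(V, [2, 3]):   # 2x3 view over V's data, no copy made
> W[1, 1] := 99.:
> V[1];                                # the write through W is visible in V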

The theme is really how to avoid rtable creation & collection, if it can be avoided. Absolutely, there are other ways and means for that. It may well be that different situations' particulars make some ways of approaching the memory use issue better than others. No one size fits all!

Dave Linder
Mathematical Software, Maplesoft
