@PatrickT Thanks, Patrick -- I'll take didactic in the sense of pedagogic rather than pedantic. ;)
The result from kernelopts(bytesused) is a cumulative indicator of how much memory has been used and re-used over the session. A high figure usually means that a lot of garbage collection has taken place.
The results from kernelopts(bytesalloc) show how much memory Maple's kernel has had to allocate or take from the operating system (OS). Since memory is not returned to the OS without a Maple `restart`, the value of bytesalloc at any given time shows the total memory that Maple has had to allocate since the last `restart`. But not all of that memory may be currently used for storing objects -- some of it may be memory that has been cleared by garbage collection (gc).
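A minimal sketch of querying both counters in a session (the interpretation comments reflect the description above):

> kernelopts(bytesused);   # cumulative bytes used for objects; grows as objects are created
> kernelopts(bytesalloc);  # total bytes allocated from the OS since the last restart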
In the commandline interface, by default, a summary of memory use is printed (roughly with each gc). If you see a slew of these printed messages then you can interpret this as meaning there is a lot of garbage collection happening. These messages can be turned off with the kernelopts(printbytes) setting. They look like this:
memory used=45.8MB, alloc=38.4MB, time=20.47
memory used=76.3MB, alloc=38.4MB, time=34.56
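Toggling those summary lines off and back on looks like this:

> kernelopts(printbytes=false):  # suppress the per-gc memory summaries
> kernelopts(printbytes=true):   # restore the default behaviour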
Generally, if you see that "memory used"=bytesused message frequently in your commandline session, with values growing large, then your computation is using and re-using a lot of memory: objects are being created and discarded.
In contrast, in the Standard GUI the status bar at bottom shows something akin to bytesalloc. It might be useful if the GUI were to also display (optionally, maybe off by default) the bytesused as another ticker value.
It's not crystal clear how anyone should best use these figures to measure "efficiency". If anything, they are just rough guides to how Maple is using memory. There are more sophisticated tools for investigating the time and memory resources used by some code, such as ?CodeTools,Profiling and ?exprofile . It's not really accurate to call these tools for measuring "efficiency" or "performance" unless we agree on what those terms mean. I'll try to explain some of the difficulty:
Suppose that you are doing a lot of computations. Suppose that Maple has allocated enough memory, and from a given point onward can simply recollect and re-use memory already allocated. In that case you likely won't see any subsequent change in bytesalloc. So, in a sense, bytesalloc is merely a crude way to tell how much extra memory a given task needs, provided that you measure the task right at the start of the session, before that memory has been allocated and cleared, and so on. There are many caveats to interpreting changes in that measurement sensibly.
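For illustration, a crude before/after measurement along those lines, best done right after restart (some_task here is a placeholder for whatever computation you're measuring):

> restart:
> base := kernelopts(bytesalloc):
> result := some_task():              # hypothetical computation under test
> kernelopts(bytesalloc) - base;      # extra memory the task forced Maple to allocate

With all the caveats above in mind: if the task runs a second time, this difference may well be zero, since already-allocated memory can simply be re-used.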
One way to compare performance is to do what you did in the parent post from which this current post was branched: run each short, simple variant separately, as soon after restart as you can (setting up only the data and required routine code beforehand). That seems like a practical and simple way to inspect the time, memory use, and allocation requirements of a particular method. It also avoids the pitfall of measuring two alternative methods back to back. The danger of measuring computations done back to back is that the second method's timing may be spurious, due to garbage collection occurring during its calculation. Here's something I've seen done, where both taskA and taskB generate a lot of garbage that just happens to get collected only during the computation that follows them(!):
> time( taskA );
> time( taskB );
And the faulty conclusion is that taskA is faster than taskB. The truth might only emerge if you happen to observe:
> time( taskB );
> time( taskA );
There's a related but rare scenario, where the huge memory allocation of the first task makes the second task require the OS to swap out memory.
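One partial mitigation (not a cure-all, and it doesn't address the swap scenario) is to force a collection between the two timings, so that each task starts with the previous task's garbage already swept:

> time( taskA );
> gc():            # collect taskA's garbage before timing taskB
> time( taskB );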
It's tempting to take that danger into consideration and conclude that wholly separate test runs are the best way to measure performance -- that only a clean session will get performance measured clearly.
But, really, isn't it more the case that we want the best performance under typical usage? A user will typically call a routine mid-session. Maybe it is better to do timings only after Maple has done lots of earlier work, allocating memory and creating garbage galore. How do the methods compare at that point, in this more typical scenario? Why not do test runs more like taskA, taskB, taskA, taskB, etc., possibly in loops? Other variations are possible too. It's not even true that this alternative measurement scheme is better, since it too might not match actual usage. How you choose to measure performance could depend on how the code will be used.
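A sketch of that interleaved scheme (taskA and taskB are stand-ins for the two methods being compared):

> tA, tB := 0.0, 0.0:
> for i to 5 do
>   t0 := time(): taskA(): tA := tA + (time() - t0);
>   t0 := time(): taskB(): tB := tB + (time() - t0);
> end do:
> tA/5, tB/5;   # average cpu time per run, for each method

Since each method runs repeatedly in a "dirty" session, any gc cost tends to get spread over both methods rather than unfairly charged to whichever happened to run second.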
I need to point out that garbage collection is not inherently bad. There are lots of problems for which it is natural in pragmatic computational implementations. If Maple had no memory management scheme then it would have to keep allocating memory for every new object, and solving far fewer problems would be possible. In that sense, memory management makes Maple more efficient, as it can re-use memory instead of requiring ever more allocation. I've been discussing how transient object creation is sometimes avoidable. Also, it may inadvertently sound like I'm saying that all this extra cost is in `gc`. But it's likely that a great deal of the cost is in the creation (and not just the disposal) of the temporary objects. The bytesused value can, however, be an indicator that a lot of object creation and disposal is occurring.