A few weeks ago, in a comment in another thread, I mentioned the ncrunch comparison of "mathematical programs for data analysis". A new, 5th release of that review is now available. The systems reviewed are:

  • GAUSS
  • Maple
  • Mathematica
  • Matlab
  • O-Matrix
  • Ox
  • SciLab

The review is skewed towards statistical computation and data manipulation, but it includes several interesting comparisons of the major computer algebra systems (CAS).

There is a comparative performance section, and the worksheets used for that benchmarking are available for download. Here is the Maple worksheet, which was used with Maple 11.

The Maple code used in the performance benchmarking could be improved for some of the individual tests. Maple's fast numerics are better than what the report indicates.

So here's an idea: if you have an improved version of one of the individual performance calculations, perhaps post it here as a blog item or comment. It might help to keep to a single MaplePrimes blog/thread per problem, to avoid confusion. Such entries could have easily searched names, like "ncrunch prob. 11, fibonacci", etc. A good contender should be both fast and simple. A blog item could start off with the original code used in the report. Giving the timings, as run on your own machine, for both the original and any candidate improvement would be helpful.
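As a minimal sketch of how an entry might report such timings (the procedure names original_version and improved_version are hypothetical placeholders standing in for the report's code and a candidate replacement):

    st := time[real]():
    original_version():
    time[real]() - st;    # wall-clock seconds for the report's code

    st := time[real]():
    improved_version():
    time[real]() - st;    # wall-clock seconds for the candidate improvement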

The performance comparisons are against systems like Matlab and Scilab, which on their own (without a symbolic toolbox, say) really only do double-precision computation. So it should be fair to use Maple's evalhf, hardware float[8] datatype Matrices, etc.
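As a rough sketch of what I mean (not code taken from the report): a float[8] Matrix lets LinearAlgebra dispatch to compiled BLAS/LAPACK routines, and evalhf evaluates a plain Maple procedure in hardware floats.

    with(LinearAlgebra):
    A := RandomMatrix(1000, 1000, generator = 0.0 .. 1.0, datatype = float[8]):
    st := time[real]():
    Determinant(A):                # compiled double-precision path for float[8] data
    time[real]() - st;

    f := proc(n) local i, s; s := 0.0; for i to n do s := s + sin(i) end do; s end proc:
    evalhf(f(10^6));               # the loop runs entirely in hardware floats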

Memory usage was not part of the performance comparison. That's a weakness of the analysis, as it might be significant.
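If someone did want to record memory use alongside the timings, one rough approach (just a sketch, not part of the report's methodology) is to query the kernel's allocation counter around each test:

    before := kernelopts(bytesalloc):
    # ... run the benchmark code here ...
    kernelopts(bytesalloc) - before;   # bytes allocated by the kernel during the run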

Note: the individual test labelled "2000x2000 normal distributed random matrix^1000" is actually an elementwise powering operation and not a matrix powering (repeated matrix-matrix product) calculation. Checking the Maple, Mathematica, and Matlab code indicates that this is consistent across the sources.
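To illustrate the distinction in Maple (a small example with the size and distribution simplified, not the benchmark code itself; map is used rather than the elementwise ~ syntax of later releases, since the report ran Maple 11):

    with(LinearAlgebra):
    A := RandomMatrix(3, 3, generator = 0.0 .. 1.0, datatype = float[8]):
    map(x -> x^1000, A);    # elementwise powering, which is what the test measures
    A^1000;                 # matrix powering (repeated matrix-matrix products)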

