Since there is an accusation of dishonesty here against people that I speak for, I feel the need to comment:
First, on the suggestion that a machine was cherry-picked for its inability to multi-thread: the machine in use was set up by our sysadmin group for the Software Testing group for the purpose of running lots of third-party software to verify results. A separate analysis group, who were not involved in that request, used the machine for benchmarking in good faith that it was representative (and because we have to pay for Maple licenses like anyone else, so one is enough!). At some point the machine was upgraded, not at the request of the analysis group (it is not clear to me whether it was a new machine or just a memory upgrade), and the assumption remained that a random Windows machine could be considered typical.
Secondly, I was naturally concerned that the machine may accidentally have had some configuration that Maple does not correctly support, as @acer suggested, so I re-ran the benchmark on my own personal laptop, a 2019 MacBook Pro (not as mainstream as Windows, but certainly more predictable hardware). Of the five functions that @acer said gave similar performance between Mathematica and Maple, two were not in the benchmark (but perhaps should be added), two became much closer, and one maintained its difference. The number of tests where Mathematica was faster fell from 572 to 566 (out of 586). However, overall Maple performed relatively worse on the Mac, with three tests now crashing and having to be scaled back, and with the median result showing Mathematica to be 45 times faster, compared to 32 times on Windows. Generally, Maple was now closer on linear algebra with float data, and worse on function evaluation and integer linear algebra. The full results are here: Mathematca12.1Maple2020MacDraft.pdf. It should be noted that this was not run under test conditions: it was only a single run of the benchmark, whereas the protocol for the annual update is the average of three runs, and it has not been through the secondary checks that are required for Wolfram communications. I therefore make this post in my own name, not as an official Wolfram statement.
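For anyone unclear on how a headline figure like "45 times faster at the median" is derived, here is a minimal sketch in Python. The timing numbers are invented purely for illustration and are not taken from the benchmark; the actual test suite and report-generation code are separate, but the summary statistic works along these lines: compute a per-test speedup ratio, then take the median across all tests.

```python
from statistics import median

# Hypothetical per-test timings in seconds (invented for illustration,
# NOT actual benchmark data): {test_name: (maple_time, mathematica_time)}.
# Under the annual-update protocol, each time would itself be the average
# of three runs; here each is a single figure for simplicity.
timings = {
    "float_linear_algebra": (1.2, 0.4),
    "integer_linear_algebra": (9.0, 0.1),
    "function_evaluation": (5.0, 0.2),
    "symbolic_integration": (0.9, 0.3),
}

# Per-test speedup ratio: how many times faster Mathematica ran that test.
ratios = {name: maple / mma for name, (maple, mma) in timings.items()}

# The headline figure is the median of the per-test ratios, which keeps a
# few extreme outlier tests from dominating the summary.
median_speedup = median(ratios.values())
print(f"median speedup: {median_speedup:.1f}x")
```

Using the median rather than the mean is deliberate here: with hundreds of tests spanning very different workloads, a handful of pathological cases would otherwise swamp the average.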
One cannot easily defend against a charge of incompetence, but the benchmark contains the entire source code for the tests. I would also be happy to share the report-generation code to make analysis of the test results easier. I repeat the invitation that I made in https://www.mapleprimes.com/posts/201237-Comparisons-#comment200915: if there are ways that the benchmark could be made fairer, I will ensure that they are considered. I have not received any feedback since then.
For symmetry, it is worth noting that in that 2015 thread I shared with @ecterrab a number of ways in which Maplesoft's comparison document at https://www.maplesoft.com/products/maple/compare/HowMapleComparestoMathematica.pdf was factually incorrect, having previously sent them privately to @Samir Khan (for example, it does not take 14 lines of code to generate a simple slider; the command since 2007 has been Slider), and to date no change has been made to that document. As I said at the time, I would have thought that it is in all our interests to be accurate.
Jon McLoone (Wolfram Research)