
## Parallel Sparse Polynomial Multiplication in Maple 14


Our previous article described the design of fast algorithms for multiplying and dividing sparse polynomials. We have integrated these algorithms into the expand and divide commands of Maple 14. In this post I want to talk a bit about what you might see when you try Maple 14. Keep in mind that the product isn't released yet and I don't work for Maplesoft, so the usual disclaimers apply. Nevertheless, one of the first things you may notice is this:

*[Screenshot: factoring a large polynomial in Maple 14]*

In Maple 14 the multiplication of polynomials having hundreds of terms is parallelized. This happens automatically, behind the scenes in the sdmp library. The result is far from perfect, but the effect is substantial. Numerous commands, such as the factor command above, automatically make use of multiple cores.
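The sdmp library packs monomials into machine words and multiplies in parallel, so the following is only a rough single-threaded illustration of the underlying idea: heap-based sparse multiplication, in which each term of one factor drives a merge stream and a priority queue always yields the smallest pending product monomial, combining like terms as they appear. This is a minimal sketch in Python, not sdmp's actual implementation.

```python
import heapq

def heap_mul(f, g):
    """Multiply sparse polynomials given as dicts {exponent_tuple: coeff}.

    Maintains a heap of len(f) merge streams: stream i produces the
    products F[i]*G[j] for j = 0, 1, ... in increasing monomial order,
    so the heap always pops the globally smallest product monomial.
    Like terms are combined in the result dict as they come off."""
    if not f or not g:
        return {}
    F = sorted(f.items())  # terms in increasing (lex) monomial order
    G = sorted(g.items())
    # heap entries: (product_exponent, i, j) representing F[i]*G[j]
    heap = [(tuple(a + b for a, b in zip(F[i][0], G[0][0])), i, 0)
            for i in range(len(F))]
    heapq.heapify(heap)
    result = {}
    while heap:
        exp, i, j = heapq.heappop(heap)
        c = result.get(exp, 0) + F[i][1] * G[j][1]
        if c:
            result[exp] = c
        elif exp in result:
            del result[exp]  # coefficients cancelled
        if j + 1 < len(G):  # advance stream i to its next product
            nxt = tuple(a + b for a, b in zip(F[i][0], G[j + 1][0]))
            heapq.heappush(heap, (nxt, i, j + 1))
    return result
```

The heap holds only len(f) entries at a time, so memory stays proportional to the smaller factor rather than to the number of term products; sdmp additionally splits the merge across cores.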

Our focus in this release has been to integrate the library and obtain immediate performance improvements. As a general purpose computer algebra system, Maple has often had difficulty competing with specialized systems, particularly for polynomial computations. Below you can see the improvement in Maple 14. These are wall-clock times on a quad-core Core i7 920 (2.66 GHz) 64-bit Linux system with hyperthreading disabled.

|  | Maple 14 | Maple 13 | Mathematica 7 | Magma 2.16 | Singular 3.1 | Pari 2.3 |
|---|---|---|---|---|---|---|
| **multiply** | | | | | | |
| p1 := f1·(f1+1) | 0.03 s | 1.60 s | 5.23 s | 0.30 s | 0.58 s | 0.45 s |
| p2 := f2·(f2+1) | 0.28 s | 26.76 s | 50.36 s | 3.76 s | 6.96 s | 3.96 s |
| p3 := f3·(f3+1) | 0.80 s | 95.97 s | 273.01 s | 13.01 s | 30.64 s | 39.15 s |
| **divide** | | | | | | |
| q1 := p1/f1 | 0.20 s | 1.53 s | 6.20 s | 0.35 s | 0.42 s | 0.27 s |
| q2 := p2/f2 | 0.80 s | 24.74 s | 46.39 s | 4.01 s | 3.98 s | 2.88 s |
| q3 := p3/f3 | 3.46 s | 93.42 s | 242.87 s | 17.81 s | 15.91 s | 17.67 s |
| **factor** | | | | | | |
| p1: 12341 terms | 2.80 s | 31.37 s | 13.45 s | 6.23 s | 12.28 s | – |
| p2: 38711 terms | 15.20 s | 367.40 s | 164.50 s | 116.15 s | 97.10 s | – |
| p3: 135751 terms | 53.20 s | 2921.60 s | 655.49 s | 323.82 s | 404.86 s | – |

f1 := (1+x+y+z)^20 + 1       f2 := (1+x+y+z)^30 + 1       f3 := (1+x+y+z+t)^20 + 1
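The term counts in the factor rows can be sanity-checked combinatorially. Taking f1 = (1+x+y+z)^20 + 1 and f3 = (1+x+y+z+t)^20 + 1 (exponents consistent with the table), p1 = f1·(f1+1) has the same support as (1+x+y+z)^40, which is completely dense: every monomial of total degree at most 40 in three variables, a stars-and-bars count. A quick check in Python:

```python
from math import comb

def dense_terms(nvars, deg):
    """Number of monomials of total degree <= deg in nvars variables,
    i.e. the support size of (1 + x_1 + ... + x_nvars)^deg."""
    return comb(deg + nvars, nvars)

print(dense_terms(3, 40))  # p1, three variables, degree 40 -> 12341
print(dense_terms(4, 40))  # p3, four variables, degree 40  -> 135751
```

Both values match the term counts reported for p1 and p3 in the table above.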

Note that we did not change anything in the factorization code. What we have found is that many of the algorithms in Maple are good, but their performance is held back by the speed and overhead of basic operations. We plan to eliminate those bottlenecks in a future release, which should improve the parallel speedup as well.

This is joint work with Michael Monagan at Simon Fraser University, undertaken with the cooperation and assistance of Maplesoft.
