Let's compare the performance of two methods of computing the inverse of a large datatype=float[8] Matrix.

The two methods are that of calling `MatrixInverse`, and of calling `LinearSolve` with the identity Matrix as the "right hand side".

The point of the exercise is to try and determine which method makes better use of any accelerated LAPACK functions in the Intel Performance Libraries (MKL) used by Maple on Microsoft Windows.

On other platforms, the situation may or may not be similar, depending on what is used from ATLAS (which may in turn depend on the Maple version in use).

Of course, computing the inverse of a floating-point Matrix should not be that common a task. In particular, one should generally not compute M_Inv, the inverse of a floating-point Matrix M, just so as to solve M.X=B by forming the product M_Inv.B to get X. From a numerical-analysis standpoint, the better command for that task is LinearSolve(M,B). There are, however, some genuine uses of the Matrix inverse, especially in computational statistics.
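The point can be illustrated outside Maple as well. Here is a minimal NumPy sketch (the data, sizes, and seed are made up for illustration) contrasting the direct solve with the inverse-then-multiply route; both give X, but the solve route does less work and is numerically safer:

```python
import numpy as np

# Hypothetical example data; the worksheet uses its own Matrix M.
rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
B = rng.standard_normal((n, 3))

# Preferred: factor and solve (one LU factorization plus triangular solves).
X_solve = np.linalg.solve(M, B)

# Discouraged: form the explicit inverse, then multiply it out.
X_via_inv = np.linalg.inv(M) @ B

# Compare the residuals of the two routes.
resid_solve = np.linalg.norm(M @ X_solve - B)
resid_inv = np.linalg.norm(M @ X_via_inv - B)
```

Both residuals are tiny for a well-conditioned M, but the solve route is the one to teach and use by default.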

infolevel[LinearAlgebra]:=1: # show external routines used


First, solving   M . Inv = Identity   for Inv, the inverse of M, using inplace semantics. This calls F07ADF/DGETRF followed by F07AEF/DGETRS from NAG/LAPACK. The first of those does the LU factorization, while the second does the forward and backward substitutions using the triangular factors L and U.

gc(): st:=time():
LinearSolve(M,Inv,inplace=true): # Inv becomes the inverse of M

LinearSolve: using method LU
LinearSolve: calling external function
LinearSolve: NAG hw_f07adf

LinearSolve: NAG hw_f07aef
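As a cross-check outside Maple, the same DGETRF-then-DGETRS pattern can be sketched with SciPy's low-level LAPACK wrappers, assuming SciPy is available (`lapack.dgetrf` and `lapack.dgetrs` call those same routines; the size here is an arbitrary stand-in):

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(1)
n = 300                     # small stand-in for the worksheet's 1000x1000 Matrix
M = rng.standard_normal((n, n))

# DGETRF: one LU factorization with partial pivoting.
lu, piv, info = lapack.dgetrf(M)

# DGETRS: triangular solves against the identity as right-hand side,
# so the solution is the inverse of M.
Inv, info2 = lapack.dgetrs(lu, piv, np.eye(n))

# M . Inv should equal the identity to roundoff.
err = np.linalg.norm(M @ Inv - np.eye(n))
```

The identity right-hand side is what makes a linear-solver pair compute an inverse.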

Next, using the MatrixInverse routine, which internally acts inplace on a created Vector. This calls F07ADF/DGETRF followed by F07AJF/DGETRI from NAG/LAPACK. The first of those does the LU factorization, while the second is a specialized back/forward solver.

gc(): st:=time():

MatrixInverse: calling external function
MatrixInverse: NAG hw_f07adf

MatrixInverse: NAG hw_f07ajf
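Again as a cross-check outside Maple, the DGETRF-then-DGETRI route can be sketched with the same SciPy wrappers (this mirrors, in spirit, what `MatrixInverse` is doing; the size is arbitrary):

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(2)
n = 300
M = rng.standard_normal((n, n))

# DGETRF: the LU factorization, exactly as before.
lu, piv, info = lapack.dgetrf(M)

# DGETRI: computes the inverse directly from the LU factors.
Inv, info2 = lapack.dgetri(lu, piv)

err = np.linalg.norm(M @ Inv - np.eye(n))
```

So the two schemes share the DGETRF step and differ only in what is done with the LU factors afterward.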

A comparison of the results.


MatrixNorm: calling external function
MatrixNorm: NAG hw_f06raf


That's the essence of it, in the two timings above. One method is simply (and unexpectedly) faster than the other. What follows below is only analysis, an attempt to discover why that might be so.


Measure the time for just the LU factorization, by F07ADF/DGETRF, including overhead.

gc(): st:=time():

LUDecomposition: calling external function
LUDecomposition: NAG hw_f07adf

Measure just the back/forward substitutions of F07AEF/DGETRS, including overhead.

gc(): st:=time():

LinearSolve: using method LU
LinearSolve: calling external function

LinearSolve: NAG hw_f07aef

T[LS]-T[LU];  # time for only f07aef (should roughly match T[LS_PLU])

T[MI]-T[LU];  # time for only f07ajf

At size 1000x1000 of a datatype=float[8] Matrix, on a Pentium M running 32-bit Maple 14 on Windows XP, the time for LinearSolve is just over half of the time for MatrixInverse. Excluding the time of the initial LU factorization, the two schemes differ by closer to a factor of 2 at this size.

The tentative conclusion is that an optimized DGETRS is being used from MKL, while an unoptimized, "generic" DGETRI is used from NAG/LAPACK. And so `MatrixInverse` is measurably slower than `LinearSolve` at large sizes of float[8] Matrix.
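A rough analogue of the whole timing experiment can be sketched in Python with SciPy, assuming `scipy.linalg.inv` takes the GETRF+GETRI route internally; the relative timings will of course depend on which BLAS/LAPACK build SciPy is linked against, so no winner is asserted here:

```python
import time
import numpy as np
from scipy.linalg import lu_factor, lu_solve, inv

rng = np.random.default_rng(3)
n = 1000
M = rng.standard_normal((n, n))

t0 = time.perf_counter()
lu, piv = lu_factor(M)                      # DGETRF only
t_lu = time.perf_counter() - t0

t0 = time.perf_counter()
Inv_solve = lu_solve((lu, piv), np.eye(n))  # DGETRS against the identity
t_getrs = time.perf_counter() - t0

t0 = time.perf_counter()
Inv_getri = inv(M)                          # fresh DGETRF + DGETRI
t_inv = time.perf_counter() - t0

print(f"DGETRF: {t_lu:.4f}s  DGETRS: {t_getrs:.4f}s  inv: {t_inv:.4f}s")
```

Subtracting the DGETRF time from the `inv` time isolates the DGETRI step, mirroring the T[MI]-T[LU] arithmetic above.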

If this is true, then Maple could be improved by taking DGETRI from MKL instead. The DGETRI function is present in version 10.x of MKL, as used by Maple 14.

Finally, demonstrate a bug which causes MatrixInverse to waste considerable time when it should only be doing the F07AJF/DGETRI step.

gc(): st:=time():
time()-st; # obviously some sort of Library-level bug

MatrixInverse: calling external function

MatrixInverse: NAG hw_f07ajf


Download matrix_inversion_tim.mw


