The Taylor or Maclaurin series of a function of a square matrix argument is the same as the series of that function with a complex argument. The general non-commutativity of matrix multiplication plays no role here because all powers of a single square matrix *do* commute. Maple can find the general term of the series for many fairly simple functions and express those series in sigma notation, like this:

**convert(exp(A), FormalPowerSeries); #The second argument can be abbreviated FPS.**

The commands **series** or **taylor** (essentially the same thing) can give you any finite number of terms of the series for nearly any function that can be represented as a power series, but they won't give the general term:

**series(exp(A), A=0, 9);**

Here is a procedure for computing the *n*th partial sum of the series for **exp(A)**. The simplicity of this formulation amazes me! It avoids computing high powers of the matrix (which can easily lead to numeric overflow) by combining the update of the matrix power and the update of the factorial denominator into the single operation **Ak.= A/k**. This is an *updating assignment operator*, introduced in Maple 2018; it's equivalent to **Ak:= Ak.(A/k)**.

**#Computes the nth partial sum of the Maclaurin series for exp(A):**
**Exp:= (A::Matrix(square), n::nonnegint)->
local k, Ak:= rtable(identity, rtable_dims(A), rtable_options(A));
Ak + add((Ak.= A/k), k= 1..n)
: **
**#Examples:**
**Digits:= 32:
LA:= LinearAlgebra:
Display:= x-> (x, print(evalf[3]~(x)))[1]:
randomize():
A:= (Display@LA:-RandomMatrix)(
(6,6), generator= rand(-2.0..2.0), datatype= sfloat
):**
[ 0.621 1.09 1.55 0.810 -0.464 0.198]
[ ]
[ -1.41 0.573 -0.0609 -1.66 0.0230 0.816]
[ ]
[ 0.689 -0.178 1.51 0.384 1.73 -0.0984]
[ ]
[ 1.60 -0.448 0.815 1.79 -1.90 -1.53]
[ ]
[-0.209 -0.588 -1.19 -1.27 0.338 1.36]
[ ]
[ -1.73 0.935 0.441 -1.87 0.344 -0.0294]
**E1:= Display(CodeTools:-Usage(Exp(A, 99))):**
memory used=10.54MiB, alloc change=0 bytes,
cpu time=62.00ms, real time=68.00ms, gc time=0ns
[ 3.89 1.54 6.33 2.37 1.29 -0.195]
[ ]
[-20.6 0.996 -18.2 -23.9 7.80 11.1]
[ ]
[0.660 -0.262 3.31 -1.55 4.30 2.03]
[ ]
[ 30.8 0.288 29.1 37.0 -12.3 -17.5]
[ ]
[-14.5 -0.328 -14.5 -17.3 5.83 8.49]
[ ]
[-21.0 -0.0767 -18.4 -24.7 8.69 12.1]
**#Compare:**
**E2:= Display(CodeTools:-Usage(LA:-MatrixExponential(A))):**
memory used=2.85MiB, alloc change=0 bytes,
cpu time=16.00ms, real time=14.00ms, gc time=0ns
[ 3.89 1.54 6.33 2.37 1.29 -0.195]
[ ]
[-20.6 0.996 -18.2 -23.9 7.80 11.1]
[ ]
[0.660 -0.262 3.31 -1.55 4.30 2.03]
[ ]
[ 30.8 0.288 29.1 37.0 -12.3 -17.5]
[ ]
[-14.5 -0.328 -14.5 -17.3 5.83 8.49]
[ ]
[-21.0 -0.0767 -18.4 -24.7 8.69 12.1]
**#absolute and relative errors:**
**Display(<[Error__||(abs,rel)]=~ LA:-Norm~([E1-E2, (E1-E2)/~E2])>):**
[Error__abs = 1.72*10^(-29)]
[                          ]
[Error__rel = 3.99*10^(-30)]
**#My procedure Exp can handle symbolic matrices just as well. Here is MacDude's
#example done by it:**
**Exp(<a, b; c, d>, 2);**
[1 + a + (1/2)*a^2 + (1/2)*b*c        b + (1/2)*a*b + (1/2)*b*d    ]
[                                                                  ]
[c + (1/2)*c*a + (1/2)*d*c            1 + d + (1/2)*b*c + (1/2)*d^2]

If the square matrix **A** is *diagonalizable* (which is true for most numeric matrices in practice), then there's a better way than partial sums to compute **f(A)**: if **A = P^(-1).D.P** where **D** is a diagonal matrix, then **f(A) = P^(-1).f(D).P**, where **f(D)** can be computed simply by applying the complex-valued **f** to the entries on the diagonal. This is why the **MatrixExponential** command shown above is faster.
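As a small hand-worked illustration of that identity (a sketch in Python, not Maple code): for A = [[1, 1], [0, 2]], the eigenvalues are 1 and 2, and the matrices P and P^(-1) below were worked out by hand for this particular A, following the text's convention A = P^(-1).D.P.

```python
import math

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][p] * Y[p][j] for p in range(n)) for j in range(n)]
            for i in range(n)]

# A = [[1, 1], [0, 2]] = Pinv . D . P with D = diag(1, 2)
Pinv = [[1.0,  1.0], [0.0, 1.0]]   # columns are eigenvectors of A
P    = [[1.0, -1.0], [0.0, 1.0]]   # the inverse of Pinv

# f(D) for f = exp: apply exp entrywise to the diagonal of D
expD = [[math.e, 0.0], [0.0, math.e ** 2]]

# exp(A) = Pinv . exp(D) . P  =  [[e, e^2 - e], [0, e^2]]
expA = mat_mul(mat_mul(Pinv, expD), P)
```

No partial sums are needed here: one scalar exponential per eigenvalue plus two matrix products gives the exact answer (up to floating-point rounding).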