The correct and incorrect ways to compute floating-point eigenvalues
Carl Love 2016-Jan-18
macro(LA= LinearAlgebra): #Abbreviation used throughout.
n:= 2^5: #Try also 2^3 and 2^4.
A is an exact matrix of integers; Af is its floating-point counterpart.
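The definition of A did not survive in this copy of the worksheet; any exact integer matrix will do for the demonstration. A stand-in, assuming a random n x n integer matrix (the name and entry range here are my choice, not the original's):

A:= LinearAlgebra:-RandomMatrix(n, n, generator= -10..10):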
Af:= Matrix(A, datatype= float):
P:= LA:-CharacteristicPolynomial(A, x):
P is the exact characteristic polynomial with integer coefficients; Pf is the floating-point characteristic polynomial computed by the determinant method.
Pf:= LA:-Determinant(Af - LA:-DiagonalMatrix([x$n])):
RP:= [fsolve(P, complex)]:
RP is the list of floating-point eigenvalues computed from the exact polynomial; RPf is the list of eigenvalues computed from Pf.
RPf:= [fsolve(Pf, complex)]:
RootPlot is a small plotting procedure; its header was lost in this copy and is reconstructed here from the calls below. The trailing args[2..] passes along any extra options, which override the defaults (such as color= red).
RootPlot:= (R::list(complexcons))-> plot(
    [Re,Im]~(R), style= point, symbol= cross, symbolsize= 24,
    axes= box, color= red, labels= [Re,Im], args[2..]
):
RootPlot(RP);
RootPlot(RPf);
We see that the eigenvalues computed from the determinant are complete garbage. As far as its roots are concerned, the characteristic polynomial might as well have been x^n - a^n for some real number a > 1.
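The garbage can be traced to the coefficients of Pf themselves. A quick check (my addition, not part of the original worksheet) measures how far they drift from the exact integer coefficients of P:

max(abs~([coeffs(expand(P - Pf), x)]));

Comparing this maximum coefficientwise error with the sizes of P's own coefficients shows how ill-determined the determinant-method polynomial is, and hence why its roots are meaningless.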
Ef is the list of eigenvalues computed directly from the floating-point matrix Af by the Eigenvalues command.
Ef:= convert(LA:-Eigenvalues(Af), list):
RootPlot(Ef, color= blue);
We see that this eigenvalue plot is visually indistinguishable from the one produced from the exact polynomial. This is even more obvious if I plot them together:
plots:-display([RootPlot(Ef, color= blue), RootPlot(RP)]);
Indeed, we can compare the two lists of eigenvalues and show that the maximum difference is exceedingly small.
The following procedure is a novel way of sorting a list of complex numbers so that it can be compared entrywise with another list of almost-equal complex numbers: the key weights each number's modulus by the signs of its real and imaginary parts, so nearly equal numbers from the two lists land at the same positions.
RootSort:= (R::list(complexcons))-> sort(R, key= abs*map2(`@`, signum+2, Re+Im)):
max(abs~(RootSort(RP) -~ RootSort(Ef)));