The reason it is taking so long is that Maple is first computing Eigenvalues(A) in exact arithmetic and then applying evalf to that result, rather than working in floating point from the start. With the all-floating-point approach, it is possible to work at Digits = 25 and get the eigenvalues of a 200x200 matrix in reasonable time. And, as acer suggests, it will be much faster still at Digits = 15.
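In short, the two call patterns are (a sketch, for any square Matrix A with exact entries):

# Slow: exact eigenvalues are computed symbolically first,
# and only then evaluated numerically.
evalf(Eigenvalues(A));

# Fast: convert to a floating-point Matrix first, so the whole
# computation is done in floating point.
Eigenvalues(Matrix(A, datatype= float));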
I hope these examples explain everything. If not, please ask for clarification.
restart;
Digits:= 25:
with(LinearAlgebra):
A:= RandomMatrix(50,50):
Af:= Matrix(A, datatype= float):
CodeTools:-Usage(assign('E1', Eigenvalues(Af)));
memory used=349.99MiB, alloc change=0 bytes, cpu time=2.03s, real time=1.98s
CodeTools:-Usage(assign('E2', evalf(Eigenvalues(A))));
memory used=0.93GiB, alloc change=0 bytes, cpu time=7.38s, real time=7.20s
CodeTools:-Usage(assign('E3', Eigenvalues(A)));
memory used=341.92MiB, alloc change=0 bytes, cpu time=2.20s, real time=2.14s
CodeTools:-Usage(assign('E3', evalf(E3))):
memory used=0.59GiB, alloc change=0 bytes, cpu time=5.06s, real time=4.92s
So the time for E2 is (approximately) the sum of the times for the two steps of E3.
Norm(sort(Re(E1)) - sort(Re(E2)));
3.4*10^(-21)
Norm(sort(Im(E1)) - sort(Im(E2)));
2.5*10^(-21)
Norm(E2 - E3);
0.
So E1 and E2 are approximately equal, whereas E2 and E3 are identical.
A:= RandomMatrix(200,200, datatype= float):
CodeTools:-Usage(Eigenvalues(A)):
memory used=16.58GiB, alloc change=24.00MiB, cpu time=108.56s, real time=104.13s
Digits:= 15:
A:= RandomMatrix(200,200, datatype=float[8]):
CodeTools:-Usage(Eigenvalues(A)):
memory used=401.12KiB, alloc change=0 bytes, cpu time=219.00ms, real time=484.00ms
A:= RandomMatrix(2^11, 2^11, datatype= float[8]):
CodeTools:-Usage(Eigenvalues(A)):
memory used=32.23MiB, alloc change=32.00MiB, cpu time=26.20s, real time=16.11s
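As a side note (this is my own suggestion, not something shown above; the exact messages vary by Maple version), you can check whether the fast external (hardware float[8]) routines are actually being used by raising the infolevel before the call:

infolevel[LinearAlgebra]:= 1:
Eigenvalues(A):
# userinfo messages should indicate a call to an external
# (LAPACK-based) routine when datatype= float[8] is in effect.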