@alex_01 I'm not sure what you think you've shown here. The normal equations can be so badly conditioned that forming their Matrix inverse and multiplying by it produces poor results with unnecessarily high roundoff error. The problem can also get worse as the size increases.

I have found that the people who most often resist the idea that it is ill-advised to numerically invert a Matrix just in order to solve a linear system by multiplying with it are those who work in computational statistics. I think it relates to certain notions about covariance, where some inverse is interesting in its own right (and not for solving systems). This seems to lead to a more general belief that floating-point Matrix inversion is suitable for solving all manner of linear systems.

But the fact is that numerically inverting a Matrix and then multiplying by it to solve a floating-point linear system is not a good idea.
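To see why, compare condition numbers: the condition number of Transpose(X).X is roughly the square of that of X, so forming the normal equations alone can cost you about half your significant digits before any solving happens. A minimal sketch (the generator range 0.0 .. 1.0 is just an illustrative choice):

restart:
with(LinearAlgebra):
X := RandomMatrix(10, 5, generator = 0.0 .. 1.0,
                  outputoptions = [datatype = float[8]]):
sv := SingularValues(X, output = 'list'):
kappa := sv[1]/sv[5];   # condition number of X: largest over smallest singular value
kappa^2;                # roughly the condition number of Transpose(X).X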

For an underdetermined linear system, problems with the normal equations can get much worse.

You should use LinearSolve instead.

I'm not sure what your plot was supposed to demonstrate. You can measure the quality of a solution by resubstituting it into the original system and taking the norm of the residual, X.solution - Y.

Do you understand that there can be some variation in a least squares solution, and that one can select among the solutions by minimizing the 2-norm of the resulting Vector? The LeastSquares and linalg:-leastsqrs commands can provide that. Did you perhaps have this in mind, as far as what your plot might have shown?
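For instance, here is a sketch with the same kind of random underdetermined 10x20 setup as below: the SVD-based LeastSquares result has minimal 2-norm among all exact solutions, while another exact solution (say, one supported only on the first ten columns) will generally have a larger 2-norm.

restart:
with(LinearAlgebra):
X := RandomMatrix(10, 20, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):
s1 := LeastSquares(X, Y, method = SVD):          # minimum 2-norm solution
s2 := Vector(20, datatype = float):
s2[1 .. 10] := LinearSolve(X[.., 1 .. 10], Y):   # another exact solution
Norm(X . s1 - Y), Norm(X . s2 - Y);              # both residuals are tiny
Norm(s1, 2), Norm(s2, 2);                        # but s1 has the smaller 2-norm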

restart:
randomize():
with(LinearAlgebra):
with(Statistics):
N := 10:
X := RandomMatrix(10, N, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):
data1 := ( MatrixInverse(Transpose(X).X) . Transpose(X) ) . Y:
Norm(X.data1-Y);
1.30455646285554394*10^(-11)
data2 := LinearAlgebra[LeastSquares](X, Y, method=SVD):
Norm(X.data2-Y);
1.27897692436818033*10^(-13)
data3 := LinearSolve(Transpose(X).X,Transpose(X)) . Y:
Norm(X.data3-Y);
1.33582034322898835*10^(-12)
N := 20:
X := RandomMatrix(10, N, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):
data1 := ( MatrixInverse(Transpose(X).X) . Transpose(X) ) . Y:
Norm(X.data1-Y);
75.3374023437500000
data2 := LinearAlgebra[LeastSquares](X, Y, method=SVD):
Norm(X.data2-Y);
2.30926389122032560*10^(-14)
data3 := LinearSolve(Transpose(X).X,Transpose(X)) . Y:
Norm(X.data3-Y);
8.88178419700125232*10^(-15)

Incidentally, using evalf(...,5) is also a bad idea if all you're trying to do is round the final results to 5 decimal places. For most Maple computations, what it does instead is cause the whole computation to use only 5 digits of working precision, which makes it much more prone to roundoff. That is not what's causing the bad result here (inverting the Matrix is). In fact you are lucky here, since UseHardwareFloats' default value of deduced will still allow LinearAlgebra to use hardware double precision. As a general rule, if you want rounded results then you should compute at the usual working precision, save the result, and only as a final step apply evalf[5]. Again, this is not the major problem with what you had.
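In other words, the recommended pattern looks like this sketch:

restart:
with(LinearAlgebra):
X := RandomMatrix(5, 5, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 5)], datatype = float):
sol := LinearSolve(X, Y):   # computed at the usual working precision
evalf[5](sol);              # round only the saved result, as a final step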

Let's move on. What transformed Matrix are you referring to? Do you understand how the singular value decomposition is used to do least squares solving of linear systems?
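In case it helps, here is a rough sketch of the idea: with X = U.S.Vt from the SVD, a minimum-norm least squares solution is Transpose(Vt).Splus.Transpose(U).Y, where Splus reciprocates only the singular values above some cutoff. The relative tolerance 1e-10 below is just an assumed illustrative choice, not what LeastSquares itself uses.

restart:
with(LinearAlgebra):
X := RandomMatrix(10, 20, outputoptions = [datatype = float]):
Y := Vector([seq(i, i = 1 .. 10)], datatype = float):
U, S, Vt := SingularValues(X, output = ['U', 'S', 'Vt']):
tol := 1e-10 * S[1]:                    # assumed cutoff for "numerically zero"
Splus := Matrix(20, 10, datatype = float):
for i to min(Dimension(S), 10) do
    if S[i] > tol then Splus[i, i] := 1/S[i] end if
end do:
sol := Transpose(Vt) . Splus . Transpose(U) . Y:
Norm(X . sol - Y);                      # small residual, as with LeastSquares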