Items tagged with eigenvalues


The eigenvalues are showing with an I, and I am not expecting complex eigenvalues, so what does that I stand for? Can you please help? Thank you.

I have two square matrices (LS, RS) that form a generalized eigenvalue problem, as below:

LS*z=omega*RS*z

First, I multiply the inverse of RS by LS and compute the eigenvalues:
MS := MatrixInverse(RS) . LS:
VL1, VR1 := Eigenvectors(MS):

Next, I use the direct method below to get the eigenvalues:

VL2, VR2 := Eigenvectors(LS, RS);
 

I am finding that there are meaningful differences between VL1 and VL2, as well as between VR1 and VR2.

Does anybody know why?
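For what it's worth, here is a minimal sketch with small placeholder matrices (the actual LS and RS are only in the attached worksheets). For a well-conditioned RS the two routes agree on the eigenvalues up to ordering and round-off; forming MatrixInverse(RS) . LS can amplify round-off when RS is ill-conditioned, whereas Eigenvectors(LS, RS) uses a generalized (QZ-type) solver that never forms the inverse. The eigenvector columns are in any case determined only up to a nonzero scale factor, so VR1 and VR2 need not match entry by entry even when the eigenvalues do.

with(LinearAlgebra):
# Placeholder matrices standing in for the attached LS and RS.
LS := Matrix([[2., 1.], [0., 3.]], datatype = float[8]):
RS := Matrix([[1., 0.], [0., 2.]], datatype = float[8]):
VL1, VR1 := Eigenvectors(MatrixInverse(RS) . LS):
VL2, VR2 := Eigenvectors(LS, RS):
VL1, VL2;      # same eigenvalues up to ordering and floating-point round-off
# Compare eigenvectors column by column for matching eigenvalues, after
# normalizing each column; they agree only up to scale (and possibly order).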

RS.mw

LS.mw


I am solving a mechanics exercise, but I have run into a problem.

I know the M (mass) and K (stiffness) matrices (4x4).

I want to solve the eigenvalue problem (λ²M + K)v = 0, where λ are the eigenvalues and v the eigenvectors.

How can I solve this problem? I tried the Eigenvectors() command, but it did not give the right solution.

The eigenvalues are okay, but the eigenvectors are not.

K := Matrix([[4*10^7,-1.50*10^7,2*10^7,0],[-1.50*10^7,1.50*10^7,0,1.50*10^7],[2*10^7,0,8*10^7,2*10^7],[0,1.50*10^7,2*10^7,4*10^7]]);

M:=Matrix([[121.90,99.048,-91.429,0],[99.048,594.29,0,-99.048],[-91.429,0,243.81,-91.429],[0,-99.048,-91.429,121.90]]);

w1,w2:=Eigenvectors(K,M);

According to the book, the right eigenvectors (mode shapes) are:

[0.013991, 0.034233, 0.073683, 0.090573]
[0.035637, 0, -0.032213, 0]
[0, -0.034233, 0, 0.090573]
[-0.013991, 0.034233, -0.073683, 0.090573]
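One common reason for a mismatch like this is normalization: Eigenvectors(K, M) scales the returned mode shapes arbitrarily, while vibration textbooks usually mass-normalize them, and the column order and signs may also differ. (Note too that Eigenvectors(K, M) solves K . v = mu * M . v, so for the problem written as (λ²M + K)v = 0 the returned mu correspond to -λ².) A minimal sketch of the mass-normalization, using the matrices above:

with(LinearAlgebra):
lam, V := Eigenvectors(evalf(K), evalf(M)):    # force the floating-point solver; columns of V are the mode shapes
# Mass-normalize each mode shape: v -> v / sqrt(v^T . M . v); real parts are
# taken because the floating-point solver may attach negligible 0.*I parts.
Vn := Matrix(4, 4, (i, j) -> Re(V[i, j])
        / sqrt(Re(HermitianTranspose(Column(V, j)) . M . Column(V, j))));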

Thank you
 

How do I write code to find the fundamental matrix for the following matrix?

restart; with(LinearAlgebra): A := Matrix([[0, 1, 0, 0], [-a, 0, b, 0], [0, 0, 0, 1], [c, 0, -d, 0]]); Eigenvectors(A);

where a, b, c, d ∈ ℝ.

I want to find the eigenvalues and eigenvectors and then calculate e^(λ_i)·r_i, where the λ_i are the eigenvalues and the r_i the eigenvectors of A, for i = 1, 2, 3, 4 respectively.

Then I want to calculate the Wronskian of the matrix whose columns are the vectors e^(λ_i)·r_i. Could you help me?
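A minimal sketch of one way to continue from the input above. Note that the independent variable t is included in the exponentials here, i.e. the columns are exp(λ_i t)·r_i, which is the usual form of a fundamental matrix; drop t if e^(λ_i)·r_i is really what is intended.

lambda, R := Eigenvectors(A):                              # eigenvalues in lambda, eigenvectors as columns of R
Phi := Matrix(4, 4, (i, j) -> exp(lambda[j]*t)*R[i, j]):   # j-th column is exp(lambda[j]*t) times the j-th eigenvector
W := simplify(Determinant(Phi)):                           # Wronskian of the four solution vectors
W;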

See: Fundamental Matrix

Hello guys,

I have a system of differential equations.

How can I find the eigenvalues of this system?

I already have the solutions

res := evalf(dsolve(sys union ics, convert(x, list), type = numeric, method = rkf45))

and

sol := evalf(dsolve(sys union ics, convert(x, list), method = laplace))

AnSolution.mw
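The system itself is only in the attached worksheet, so the following is a minimal sketch with a placeholder linear system: instead of going through dsolve, take the Jacobian of the right-hand sides (for a linear system this is just the coefficient matrix) and pass it to LinearAlgebra:-Eigenvalues.

with(LinearAlgebra):
sysLin := [diff(x1(t), t) = -2*x1(t) + x2(t), diff(x2(t), t) = x1(t) - 3*x2(t)]:            # placeholder system
J := VectorCalculus:-Jacobian(subs({x1(t) = X1, x2(t) = X2}, map(rhs, sysLin)), [X1, X2]):  # coefficient matrix
Eigenvalues(J);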

Thanks!

Can anyone run my code?
Please run it and send me the results.
Thanks.

restart;
with(plottools): with(LinearAlgebra): with(plots):
ode := `assuming`([diff(Y(X), `$`(X, 2))+2*alpha*(diff(Y(X), X))+beta^2*Y(X) = 0], [alpha >= 0, beta >= 0, alpha+beta > 0]):
F := unapply(solve(subs({X = x, Y(X) = y, diff(Y(X), X) = yp, diff(Y(X), `$`(X, 2)) = yz}, ode), yz), x, y, yp):
Fp := unapply(solve(subs({X = x, Y(X) = y, diff(Y(X), X) = yp, diff(Y(X), `$`(X, 2)) = yz, diff(Y(X), `$`(X, 3)) = yt}, diff(ode, X)), yt), x, y, yp, yz):
Ni := seq(i, i = 0 .. 9), 15, 20:
for ni in Ni do
print(ni);
st := time[real]();
f[0, ni] := F(x[0], y[0, ni], yp[0, ni]);
fp[0, ni] := Fp(x[0], y[0, ni], yp[0, ni], f[0, ni]);
f[1, ni] := F(x[1], y[1, ni], yp[1, ni]);
fp[1, ni] := Fp(x[1], y[1, ni], yp[1, ni], f[1, ni]);
y[2, 0] := y[0, ni]+2*h*yp[0, ni]+(6/5)*f[0, ni]*h^2+(4/15)*fp[0, ni]*h^3+(4/5)*f[1, ni]*h^2+(4/15)*fp[1, ni]*h^3;
yp[2, 0] := yp[0, ni]+2*f[0, ni]*h+(2/3)*fp[0, ni]*h^2+(4/3)*fp[1, ni]*h^2;
for j to ni do
f[2, j-1] := F(x[2], y[2, j-1], yp[2, j-1]);
fp[2, j-1] := Fp(x[2], y[2, j-1], yp[2, j-1], f[2, j-1]);
y[2, j] := y[1, ni]+h*yp[1, ni]+(7/20)*f[1, ni]*h^2+(1/20)*fp[1, ni]*h^3+(3/20)*f[2, j-1]*h^2-(1/30)*fp[2, j-1]*h^3;
yp[2, j] := yp[1, ni]+(1/2)*f[1, ni]*h+(1/12)*fp[1, ni]*h^2+(1/2)*f[2, j-1]*h-(1/12)*fp[2, j-1]*h^2;
end do:
Ms := Matrix(4, 4); Ms[1, 3] := 1; Ms[2, 4] := 1;
y[2, ni] := collect(algsubs(h*alpha = H1, expand(algsubs(h*beta = H2, expand(y[2, ni])))), {y[0, ni], y[1, ni], yp[0, ni], yp[1, ni]});
Ms[3, 1] := coeff(y[2, ni], y[0, ni]);
Ms[3, 2] := coeff(y[2, ni], yp[0, ni])/h;
Ms[3, 3] := coeff(y[2, ni], y[1, ni]);
Ms[3, 4] := coeff(y[2, ni], yp[1, ni])/h;
hyp[2, ni] := collect(algsubs(h*alpha = H1, expand(algsubs(h*beta = H2, expand(h*yp[2, ni])))), {y[0, ni], y[1, ni], yp[0, ni], yp[1, ni]});
Ms[4, 1] := coeff(hyp[2, ni], y[0, ni]);
Ms[4, 2] := coeff(hyp[2, ni], yp[0, ni])/h;
Ms[4, 3] := coeff(hyp[2, ni], y[1, ni]);
Ms[4, 4] := coeff(hyp[2, ni], yp[1, ni])/h;
sol := Eigenvalues(Ms);
print(time[real]()-st);
st := time[real]();
SR[ni, 1] := implicitplot(max(seq(abs(sol[ii]), ii = 1 .. numelems(sol))) <= 1, H1 = 0 .. 3, H2 = 0 .. 3, filledregions, gridrefine = 3, axes = Boxed, view = [-2 .. 3, -3 .. 3], labels = [H[1], H[2]], labeldirections = ["horizontal", "vertical"]);
SR[ni, 2] := implicitplot(max(seq(abs(sol[ii]), ii = 1 .. numelems(sol))) <= 1, H1 = -2 .. 3, H2 = -2 .. 3, gridrefine = 3, axes = Boxed, view = [-2 .. 3, -3 .. 3], labels = [H[1], H[2]], labeldirections = ["horizontal", "vertical"]);
print(time[real]()-st);
end do;
for i in Ni do
i;
display({SR[i, 1], SR[i, 2], line([-1, 0], [3, 0], color = red, linestyle = dash), line([0, -3], [0, 3], color = red, linestyle = dash)});
end do;
display({seq(SR[i, 2], i in [Ni])});

I am working on the Laplacian eigenvalues of some special graphs, and when I try to compute min([laplacianEigenvalues]) I always get the same error: [Error, (in simpl/min) complex argument to max/min...]

My aim is to write a procedure for the algebraic connectivity.

How do I fix it? Please help.
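If the error comes from the negligible imaginary parts that the floating-point eigensolver attaches to the (real) Laplacian eigenvalues, stripping them before calling min or sort avoids it. A minimal sketch of an algebraic-connectivity procedure; the procedure name and the cycle graph are placeholders of mine, not from the post.

with(GraphTheory): with(LinearAlgebra):
AlgConn := proc(G)
    local ev;
    ev := Eigenvalues(evalf(LaplacianMatrix(G)));         # float eigenvalues may carry ~0*I parts
    ev := map(x -> Re(fnormal(x)), convert(ev, list));    # strip the negligible imaginary parts
    sort(ev)[2];                                          # algebraic connectivity = second-smallest eigenvalue
end proc:
AlgConn(CycleGraph(6));                                   # placeholder graph; the exact value here is 1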

Hello! 

In an assignment I have been asked to use the Eigenvectors command to find the eigenvalues and eigenvectors of a particular matrix.

As highlighted in the following image, my questions are:

1. What is the meaning of the suffix "+0.I"? Does it mean that there are further decimal digits which are not displayed?

2. How do the first and third eigenvalues, which are equal, result in different eigenvectors? As per my understanding, equal eigenvalues should have equal corresponding eigenvectors. Please help.
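On question 1: with a floating-point matrix the eigensolver works in complex arithmetic, so real eigenvalues come back carrying an explicit (zero or negligible) imaginary part; the "+0.I" is that stored imaginary part, not hidden decimal digits. A minimal sketch of the usual cleanup, on a placeholder matrix since the one from the assignment is only in the image:

with(LinearAlgebra):
A := Matrix([[2., 1.], [1., 2.]]):               # placeholder symmetric matrix
vals, vecs := Eigenvectors(A):
vals;                                            # entries display as, e.g., 3. + 0.*I
simplify(fnormal(convert(vals, list)), 'zero');  # negligible imaginary parts removed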

Hi. How can I determine the eigenvalues of a matrix in parametric form?

thanks1.mw

T := Matrix(5, 5, {(1, 1) = -b*beta*k/(c*u)-d, (1, 2) = 0, (1, 3) = -beta*lambda*c*u/(b*beta*k+c*d*u), (1, 4) = 0, (1, 5) = 0, (2, 1) = b*beta*k/(c*u), (2, 2) = -s*(lambda*c*k*beta/(b*beta*k+c*d*u)-a)/P-a, (2, 3) = beta*lambda*c*u/(b*beta*k+c*d*u), (2, 4) = -s*b/c, (2, 5) = r*(s-p)/s, (3, 1) = 0, (3, 2) = k, (3, 3) = -u, (3, 4) = 0, (3, 5) = 0, (4, 1) = 0, (4, 2) = -s*(lambda*c*k*beta/(b*beta*k+c*d*u)-a)/P, (4, 3) = 0, (4, 4) = -s*b/c-b, (4, 5) = r*(s+c)/s, (5, 1) = 0, (5, 2) = s*(lambda*c*k*beta/(b*beta*k+c*d*u)-a)/P, (5, 3) = 0, (5, 4) = s*b/c, (5, 5) = -r})


Download 1.mw
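A minimal sketch of the general approach, shown on a small 2x2 parametric matrix rather than the 5x5 T above (whose closed forms are very large): Eigenvalues accepts symbolic parameters directly, and the characteristic polynomial is often easier to work with.

with(LinearAlgebra):
Tsmall := Matrix([[-d, lambda], [k, -u]]):       # small placeholder parametric matrix
Eigenvalues(Tsmall);                             # closed-form eigenvalues in d, k, lambda, u
p := CharacteristicPolynomial(Tsmall, x):
solve(p = 0, x);                                 # the same roots via the characteristic polynomial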

 

I'm trying to implement the QR algorithm to find the eigenvalues of an input matrix, which will then be forwarded to another implementation (of the SVD algorithm) to find the singular values. My implementation goes as follows:

1. Feed the input: A::Matrix(datatype=float) # a bidiagonal matrix
2. Construct the input matrix for the QR algorithm from A and Z (a zero matrix the size of A): C := Matrix([[Z,Transpose(A)],[A,Z]], datatype=float); # therefore C should be symmetric
3. Find the eigenvalues of matrix C with an implementation of the QR algorithm:

for k from 1 to 400 do
Q, R := QRDecomposition(C);
C:=R.Q;
end do:

At this point the eigenvalues of C should appear on the diagonal of the matrix, but instead they end up scattered off the diagonal, and the diagonal itself contains only near-zero entries (around 2·10^(-13)).
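For reference, here is a self-contained version of the same experiment on a small bidiagonal matrix, compared against LinearAlgebra:-Eigenvalues and SingularValues. One caveat: C has eigenvalues equal to plus/minus the singular values of A, i.e. pairs of equal magnitude, and the plain unshifted QR iteration is known not to separate eigenvalues of equal modulus, which matches the behaviour described above.

with(LinearAlgebra):
A := Matrix([[3., 1., 0.], [0., 2., 1.], [0., 0., 1.]], datatype = float[8]):   # small bidiagonal example
Z := Matrix(3, 3, datatype = float[8]):                                         # zero block
C := Matrix([[Z, Transpose(A)], [A, Z]], datatype = float[8]):
Cq := C:
for k to 400 do
    Q, R := QRDecomposition(Cq);
    Cq := R . Q;
end do:
Diagonal(Cq);        # the unshifted iteration does not settle to +/- the singular values
Eigenvalues(C);      # reference eigenvalues: +/- the singular values of A
SingularValues(A);   # the singular values directly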

If anyone knows how to resolve this, let the knowledge flow through. Any help will be appreciated; thanks in advance.

I am using Maple 13 to find the eigenvalues of a Hermitian matrix:

M1:=Matrix([
[lambda3+lambda4,0,0,0,0,0,lambda4/sqrt(2),0,0,I*lambda4/sqrt(2)],
[0,lambda3/4,0,0,0,0,0,0,0,0],
[0,0,lambda3/4,0,0,0,0,0,0,0],
[0,0,0,lambda3/4,0,0,0,0,0,0],
[0,0,0,0,lambda3/4,0,0,0,0,0],
[0,0,0,0,0,lambda3,0,0,0,0],
[lambda4/sqrt(2),0,0,0,0,0,lambda3/2,0,0,0],
[0,0,0,0,0,0,0,lambda2,0,0],
[0,0,0,0,0,0,0,0,lambda2,0],
[-I*lambda4/sqrt(2),0,0,0,0,0,0,0,0,lambda4/2]
]);

Eigenvalues(M1);

To my surprise, Maple gives me 8 correct solutions and 2 complex eigenvalues, which are not acceptable (we know that the eigenvalues of a Hermitian matrix are all real).

To understand Maple's output, I first suspected that the imaginary part of those roots is actually zero, but I have not succeeded in simplifying it to zero...
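One hedged way to check is to isolate the 3x3 block that actually couples (rows/columns 1, 7 and 10; the rest of M1 is diagonal) and evaluate its symbolic eigenvalues numerically for sample real values of the parameters. The I in the radical expressions then cancels to round-off level, which would suggest the roots are real but merely expressed through complex radicals (the cubic casus irreducibilis) rather than a bug.

with(LinearAlgebra):
# The 3x3 sub-block of M1 coupling rows/columns 1, 7 and 10
B := Matrix([[lambda3 + lambda4, lambda4/sqrt(2), I*lambda4/sqrt(2)],
             [lambda4/sqrt(2), lambda3/2, 0],
             [-I*lambda4/sqrt(2), 0, lambda4/2]]):
ev := convert(Eigenvalues(B), list):
numvals := evalf(eval(ev, {lambda3 = 1, lambda4 = 2}));   # sample real parameter values
simplify(fnormal(numvals), 'zero');                       # the imaginary parts reduce to 0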

Is it a bug? Thanks a lot for your cooperation.

I am having difficulty helping someone series expand an eigenvector solution.  I can expand the eigenvalues easily but get a numeric exception divide by zero when I attempt to expand a component of an eigenvector.  Mathematica seems to have no problem solving this problem.  Any help would be appreciated.


assume(varepsilon > 0)

H := Matrix(3, 3, {(1, 1) = 0, (1, 2) = -epsilon, (1, 3) = epsilon, (2, 1) = -epsilon, (2, 2) = 2-2*epsilon, (2, 3) = 0, (3, 1) = epsilon, (3, 2) = 0, (3, 3) = 2+2*epsilon})


with(LinearAlgebra):

evals, evecs := Eigenvectors(H):

e1 := convert(simplify(series(evals[1], varepsilon = 0, 4)), polynom)

2+2*varepsilon+(1/2)*varepsilon^2-(7/16)*varepsilon^3


e2 := convert(simplify(series(evals[2], varepsilon = 0, 4)), polynom)

-varepsilon^2


e3 := convert(simplify(series(evals[3], varepsilon = 0, 4)), polynom)

2-2*varepsilon+(1/2)*varepsilon^2+(7/16)*varepsilon^3


simplify(series(evecs[1][1], epsilon = 0, 4))

Error, (in simplify/sqrt/local) numeric exception: division by zero


Download CourseraOpticsEigenvalues.mw

I am facing a very large eigenproblem in my research. The square matrix under consideration is larger than 2^30 by 2^30. I have tried to tackle this with the implicitly double-shifted QR algorithm (more precisely, the Francis double-step QR algorithm). I am a complete beginner at programming, but I tried the following:

--------------------------------------------------------------------------------------------------

A := Matrix([[7, 3, 4, -11, -9, -2], [-6, 4, -5, 7, 1, 12], [-1, -9, 2, 2, 9, 1], [-8, 0, -1, 5, 0, 8], [-4, 3, -5, 7, 2, 10], [6, 1, 4, -11, -7, -1]]):
H := HessenbergForm(A):
p:=6:  
while p>2 do: 
q:=p-1: 
s:=H(q,q)+H(p,p):  
t:=H(q,q)*H(p,p)-H(q,p)*H(p,q): 
x:=(H(1,1))^(2)+H(1,2)*H(2,1)-s*H(1,1)+t: 
y:=H(2,1)*(H(1,1)+H(2,2)-s): 
z:=H(2,1)*H(3,2): 
for k from 0 to p-3 do:  
V:=Vector([x,y,z]):   
P:=Transpose(HouseholderMatrix(1/(Norm(V+exp(argument(V(1))*I)*Norm(V,2)*Vector(3,shape=unit[1]),2))*(V+exp(argument(V(1))*I)*Norm(V,2)*Vector(3,shape=unit[1])))):   
r:=max(1,k):
H[k+1..k+3,r..6]:=MatrixMatrixMultiply(Transpose(P),SubMatrix(H,[k+1..k+3],[r..6])):  
r:=min(k+4,6):
H[1..r,k+1..k+3]:=MatrixMatrixMultiply(SubMatrix(H,[1..r],[k+1..k+3]),P):   
x:=H(k+2,k+1):
y:=H(k+3,k+1):   
if k<3 then z:=H(k+4,k+1):   
end if: 
od: 
P:=GivensRotationMatrix(Vector([x,y]),1,2): 
H[q..p,p-2..6]:=MatrixMatrixMultiply(Transpose(P),SubMatrix(H,[q..p],[p-2..6])): 
H[1..p,p-1..p]:=MatrixMatrixMultiply(SubMatrix(H,[1..p],[p-1,p]),P): 
if abs(H(p,q))<10^(-20)*(abs(H(q,q))+abs(H(p,p))) then    H(p,q):=0: p:=p-1: q:=p-1:  
elif abs(H(p-1,q-1))<10^(-20)*(abs(H(q-1,q-1))+abs(H(q,q))) then    H(p-1,q-1):=0: p:=p-2:q:=p-1:  
end if:  od:
--------------------------------------------------------------------------------------------------

It seems that replacing a 0 in the Hessenberg matrix by a non-zero element is not allowed. How can I remedy this?
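On that specific error: if the Matrix returned by HessenbergForm carries a restrictive storage or shape that forbids writing into the structurally zero positions, copying it into an ordinary full-storage Matrix first makes every entry writable. A hedged sketch:

with(LinearAlgebra):
A := Matrix([[7, 3, 4, -11, -9, -2], [-6, 4, -5, 7, 1, 12], [-1, -9, 2, 2, 9, 1],
             [-8, 0, -1, 5, 0, 8], [-4, 3, -5, 7, 2, 10], [6, 1, 4, -11, -7, -1]]):
H := Matrix(HessenbergForm(evalf(A)));   # plain full-storage copy: every entry is writable
H[6, 1] := 1.0;                          # no longer blocked by any storage/shape restriction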

Also, could anyone tell me what is wrong with the code above (I know it is not really proper programming ;( )?

I would also appreciate it if someone could suggest a better approach for such a huge eigenproblem.

Thanks in advance.

Sorry for the uninformative title. I've never used Maple, but I'm willing to buy a student license and learn it. But before spending too much effort and money I need to know if it suits my needs.

Basically what I need to do is:

1) I have a positive definite symmetric matrix of size n×n, where n can range from 2 to infinity. I don't know the elements, except that the diagonal is all ones; all I know is that the off-diagonal elements are in the range [0, 1).

2) I have to compute the lower-triangular Cholesky factor of this matrix; let's call it L.

3) I need to subtract from each element of L the mean of the elements in its column. Let's call this matrix L*.

4) Then I need to evaluate another n×n matrix computed from the elements of L*, following a simple pattern.

5) Finally, I need to find the eigenvalues of this last matrix.

What I would ideally want is to get a symbolic representation of the n eigenvalues as symbolic functions of the (unknown) elements of the matrix at point 1.

I can drop the assumption of n being unknown, i.e. fix n=3 and get the 3 functions that, after replacing the right values, give me the eigenvalues, then fix n=4 and get 4 functions, etc.

Is this possible to do in Maple?
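Yes, for a fixed n this kind of symbolic pipeline is possible, though the expressions grow very quickly with n. A minimal sketch for n = 3, with symbolic off-diagonal entries r[i, j]; the last line is only a placeholder for the step-4 matrix, since its actual pattern is not specified above.

with(LinearAlgebra):
n := 3:
# Step 1: symmetric matrix with unit diagonal and symbolic off-diagonal entries
A := Matrix(n, n, (i, j) -> `if`(i = j, 1, r[min(i, j), max(i, j)])):
# Step 2: textbook Cholesky-Banachiewicz recursion, kept symbolic
L := Matrix(n, n):
for i to n do
    for j to i do
        s := add(L[i, k]*L[j, k], k = 1 .. j - 1);
        if i = j then
            L[i, j] := sqrt(A[i, i] - s);
        else
            L[i, j] := (A[i, j] - s)/L[j, j];
        end if;
    end do;
end do:
# Step 3: subtract from each entry the mean of its column (structural zeros included)
Lstar := Matrix(n, n, (i, j) -> L[i, j] - add(L[k, j], k = 1 .. n)/n):
# Steps 4-5: build the step-4 matrix from Lstar (placeholder shown), then take eigenvalues;
# the result is a vector of (large) RootOf expressions in the r[i, j].
Eigenvalues(Lstar . Transpose(Lstar));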

Thank you

There has been a spate of Questions posted in the past week about computing eigenvalues. Invariably, the Questioners have computed some eigenvalues by applying fsolve to a characteristic polynomial obtained from a floating-point matrix via LinearAlgebra:-Determinant. They are then surprised when various tests show that these eigenvalues are not correct. In the following worksheet, I show that the eigenvalues computed by the fsolve@Determinant method (when applied to a floating-point matrix) are 100% garbage for dense matrices larger than about Digits x Digits. The reason for this is that computing the determinant introduces too much round-off error into the coefficients of the characteristic polynomial. The best way to compute the eigenvalues is to use LinearAlgebra:-Eigenvalues or LinearAlgebra:-Eigenvectors. Furthermore, very accurate results can be obtained without increasing Digits.

 

The correct and incorrect ways to compute floating-point eigenvalues

Carl Love 2016-Jan-18

restart:

Digits:= 15:

macro(LA= LinearAlgebra):

n:= 2^5:  #Try also 2^3 and 2^4.

A:= LA:-RandomMatrix(n):

A is an exact matrix of integers; Af is its floating-point counterpart.

Af:= Matrix(A, datatype= float[8]):

P:= LA:-CharacteristicPolynomial(A, x):

P is the exact characteristic polynomial with integer coefficients; Pf is the floating-point characteristic polynomial computed by the determinant method.

Pf:= LA:-Determinant(Af - LA:-DiagonalMatrix([x$n])):

RP:= [fsolve(P, complex)]:

RP is the list of floating-point eigenvalues computed from the exact polynomial; RPf is the list of eigenvalues computed from Pf.

RPf:= [fsolve(Pf, complex)]:

RootPlot:= (R::list(complexcons))->
     plot(
          [Re,Im]~(R), style= point, symbol= cross, symbolsize= 24,
          axes= box, color= red, labels= [Re,Im], args[2..]
     )
:

RootPlot(RP);

RootPlot(RPf);

We see that the eigenvalues computed from the determinant are completely garbage. The characteristic polynomial might as well have been x^n - a^n for some positive real number a > 1.

 

Ef is the eigenvalues computed from the floating-point matrix Af using the Eigenvalues command.

Ef:= convert(LA:-Eigenvalues(Af), list):

RootPlot(Ef, color= blue);

We see that this eigenvalue plot is visually indistinguishable from that produced from the exact polynomial. This is even more obvious if I plot them together:

plots:-display([RootPlot(Ef, color= blue), RootPlot(RP)]);

Indeed, we can compare the two lists of  eigenvalues and show that the maximum difference is exceedingly small.

 

The following procedure is a novel way of sorting a list of complex numbers so that it can be compared to another list of almost-equal complex numbers.

RootSort:= (R::list(complexcons))-> sort(R, key= abs*map2(`@`, signum+2, Re+Im)):


max(abs~(RootSort(RP) -~ RootSort(Ef)));

HFloat(1.3258049636636544e-12)


Download Eigenvalues.mw
