# Items tagged with linear_algebra

### Transitioning to Jordan Form...

November 02 2015 Maple 2015
4 10

I have two linear algebra texts [1, 2] with examples of the process of constructing the transition matrix that brings a matrix to its Jordan form. In each, the authors make what seem to be arbitrary selections of basis vectors via processes that do not appear algorithmic. So recently, while looking at some other calculations in linear algebra, I decided to revisit these calculations in as orderly a way as possible.

First, I needed a matrix with a prescribed Jordan form. Actually, I started with a Jordan form, and then constructed the matrix from it via a similarity transform. To avoid introducing fractions, I sought transition matrices with determinant 1.

Let's begin with the Jordan form itself, obtained with Maple's JordanBlockMatrix command.

The eigenvalue has algebraic multiplicity 6. There are sub-blocks of size 3×3, 2×2, and 1×1. Consequently, there will be three eigenvectors, supporting chains of generalized eigenvectors having total lengths 3, 2, and 1. Before delving further into the structural theory, we next find a transition matrix with which to fabricate the target matrix.
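
The block structure just described can be sketched in Python/SymPy (the post's own code is Maple; here 2 is only a stand-in eigenvalue, since the actual value is not reproduced above):

```python
import sympy as sp

def jordan_block(lam, n):
    """An n x n Jordan block: lam on the diagonal, 1s on the superdiagonal."""
    B = lam * sp.eye(n)
    for i in range(n - 1):
        B[i, i + 1] = 1
    return B

# Blocks of sizes 3, 2, 1 for a single eigenvalue, assembled into the
# 6x6 Jordan form -- the analogue of Maple's JordanBlockMatrix command.
J = sp.diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1))

# One eigenvector per block: the nullity of (J - 2I) is 3.
assert (J - 2 * sp.eye(6)).rank() == 3
```

The number of blocks equals the number of independent eigenvectors, which is why three chains of lengths 3, 2, and 1 are expected.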

The following code generates random 6×6 matrices of determinant 1 with integer entries in a small interval. For each, the similarity transform of the Jordan form is computed. From these candidates, one matrix is then chosen.
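
The selection loop itself is not shown, but the idea can be sketched as rejection sampling in Python/SymPy (the function name and approach are my own, not the post's Maple code; a 3×3 size keeps the demonstration quick):

```python
import random
import sympy as sp

def random_unimodular(n, lo, hi, rng):
    """Rejection-sample an integer matrix with determinant 1.

    Because det P = 1, the inverse P^{-1} is again an integer matrix,
    so P*J*P^{-1} introduces no fractions.
    """
    while True:
        P = sp.Matrix(n, n, lambda i, j: rng.randint(lo, hi))
        if P.det() == 1:
            return P

rng = random.Random(0)
P = random_unimodular(3, -2, 2, rng)
assert P.det() == 1 and all(x.is_integer for x in P.inv())
```

The determinant-1 requirement is exactly what guarantees an integer inverse, and hence an integer similarity transform.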

After several such trials, the matrix  was chosen as

for which the characteristic and minimal polynomials are

So, if we had started with just the matrix itself, we'd now know that the algebraic multiplicity of its one eigenvalue is 6, and that there is at least one 3×3 sub-block in the Jordan form. We would not know whether the other sub-blocks were all 1×1, or a 1×1 and a 2×2, or another 3×3. Here is where some additional theory must be invoked.
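
The two polynomials encode exactly that information, which can be checked on the block structure directly (again a Python/SymPy sketch with 2 as a stand-in eigenvalue):

```python
import sympy as sp

def jordan_block(lam, n):
    B = lam * sp.eye(n)
    for i in range(n - 1):
        B[i, i + 1] = 1
    return B

J = sp.diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1))

x = sp.symbols('x')
# Characteristic polynomial: (x - 2)^6, i.e. algebraic multiplicity 6.
assert sp.expand(J.charpoly(x).as_expr() - (x - 2)**6) == 0

# The minimal polynomial's degree equals the size of the LARGEST Jordan
# block: the least k with (J - 2I)^k = 0 is 3, so (x - 2)^3.
N = J - 2 * sp.eye(6)
k = next(p for p in range(1, 7) if (N**p).is_zero_matrix)
assert k == 3
```

The minimal polynomial pins down the largest block but, as the post notes, says nothing about how the remaining multiplicity splits among the smaller blocks.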

The null spaces of the successive powers of the shifted matrix are nested, each inside the next, as depicted in Figure 1, where the labeled vectors are basis vectors.

 Figure 1   The nesting of the null spaces

The first three vectors are eigenvectors, and form a basis for the eigenspace. Augmented by two further vectors, they form a basis for the second null space, and with one more vector, a basis for the third; but these basis vectors are not yet the generalized eigenvectors. The last vector must be replaced with one that lies in the third null space but not in the second. Once such a vector is found, two of the earlier basis vectors can be replaced by its images under repeated application of the shifted matrix: the first image is a generalized eigenvector in the second null space, and the second image is an eigenvector. These three vectors are then said to form a chain, with the eigenvector at the bottom and the two generalized eigenvectors above it.

If we could carry out these steps, we'd be in the state depicted in Figure 2.

 Figure 2   The null spaces  with the longest chain determined

Next, one of the remaining basis vectors is to be replaced with a vector that lies in the second null space but not in the eigenspace, and is linearly independent of the generalized eigenvector already found there. If such a vector is found, its image under the shifted matrix is an eigenvector, and the pair forms a second chain of length two, with the image as the eigenvector and the new vector as the generalized eigenvector.

Define the matrix  by the Maple calculation

and note

The dimension of the first null space is 3, and of the second, 5. However, the basis vectors Maple has chosen for the larger null space do not include the exact basis vectors chosen for the smaller one.
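
Those dimensions (3, 5, and finally 6) can be confirmed on the block structure; since the post's actual matrix is not reproduced here, this Python/SymPy sketch uses the Jordan form itself, which has the same nested null-space dimensions as any matrix similar to it:

```python
import sympy as sp

def jordan_block(lam, n):
    B = lam * sp.eye(n)
    for i in range(n - 1):
        B[i, i + 1] = 1
    return B

J = sp.diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1))
N = J - 2 * sp.eye(6)   # stand-in for A - lambda*I

# Nested null spaces: dimensions 3, 5, 6 for powers 1, 2, 3.
dims = [6 - (N**k).rank() for k in (1, 2, 3)]
assert dims == [3, 5, 6]
```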

We now come to the crucial step: finding a vector in the third null space that is not in the second (and consequently, not in the eigenspace either). The examples in [1, 2] are simple enough that the authors can "guess" at the vector to be taken. What we will do instead is take an arbitrary vector in the third null space, project it onto the 5-dimensional second null space, and take the residual (the vector minus its projection) as the chain's top vector.

A general vector in  is

A matrix that projects onto  is

Subtracting from Z its projection onto the second null space leaves a residual orthogonal to that subspace. This vector can be simplified by choosing the parameters in Z appropriately. The result is taken as the top vector of the chain.
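
The projection trick and the resulting chain can be sketched in Python/SymPy (again on the Jordan form with a stand-in eigenvalue, since the post's actual matrices are not reproduced here):

```python
import sympy as sp

def jordan_block(lam, n):
    B = lam * sp.eye(n)
    for i in range(n - 1):
        B[i, i + 1] = 1
    return B

J = sp.diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1))
N = J - 2 * sp.eye(6)   # stand-in for A - lambda*I

# Basis for the 5-dimensional null space of N^2, as columns of B:
B = sp.Matrix.hstack(*(N**2).nullspace())

# Project an arbitrary vector Z onto that subspace and keep the residual;
# the residual still lies in the null space of N^3, but not in that of N^2.
Z = sp.Matrix([1, 1, 1, 1, 1, 1])
proj = B * (B.T * B).inv() * B.T
v3 = Z - proj * Z
assert not (N**2 * v3).is_zero_matrix

# Applying N walks down the chain: generalized eigenvector, then eigenvector.
v2, v1 = N * v3, N * N * v3
assert (N * v1).is_zero_matrix and not v1.is_zero_matrix
```

The residual works because the second null space sits inside the third: both Z and its projection lie in the third null space, so their difference does too, while the subtraction removes every component lying in the second.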

The other two members of this chain are then

A general vector in  is a linear combination of the five vectors that span the null space of , namely, the vectors in the list . We obtain this vector as

A vector in the second null space that is not in the eigenspace is the residual of the projection of ZZ onto the space spanned by the eigenvectors and the previously found chain vector. The projection matrix is

The residual of ZZ, taken as the second chain's generalized eigenvector, is then

Replace the vector  with , obtained as

The columns of the transition matrix  can be taken as the vectors , and the eigenvector . Hence,  is the matrix

Proof that this matrix  indeed sends  to its Jordan form consists in the calculation

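
That verification step can be mimicked in Python/SymPy; the transition matrix below is my own simple determinant-1 stand-in for the one built in the post:

```python
import sympy as sp

def jordan_block(lam, n):
    B = lam * sp.eye(n)
    for i in range(n - 1):
        B[i, i + 1] = 1
    return B

J = sp.diag(jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1))

# A simple determinant-1 transition matrix (1s on the diagonal and
# superdiagonal), standing in for the P constructed in the post:
P = sp.eye(6)
for i in range(5):
    P[i, i + 1] = 1
A = P * J * P.inv()

# The verification: P^{-1} A P recovers J exactly.
assert P.inv() * A * P == J

# SymPy's own jordan_form finds some valid transition matrix -- generally
# built on a different basis, just as Maple's JordanForm command is.
P2, J2 = A.jordan_form()
assert sp.simplify(P2.inv() * A * P2 - J2).is_zero_matrix
```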

The bases for the nested null spaces are not unique. The columns of the matrix constructed above provide one set of basis vectors, but the columns of the transition matrix generated by Maple, shown below, provide another.

I've therefore added to my to-do list an investigation of Maple's algorithm for determining an appropriate set of basis vectors to support the Jordan form of a matrix.

References

[1] Evar Nering, Linear Algebra and Matrix Theory, John Wiley and Sons, 1963.

[2] Richard Bronson, Matrix Methods: An Introduction, Academic Press, 1969.

### how to get maple to do linear algebra in Z_2 (inte...

June 17 2015
1 1

how to get maple to do linear algebra in Z_2 (integers modulo 2)

I don't want it to solve and then reduce mod 2; I want it to work over Z_2, so that basis([[1,1,1], [1,-1,1]]) = [[1,1,1]], etc.
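
The distinction the poster is making can be illustrated outside Maple: carry out the row reduction mod 2 at every step, rather than reducing a rational answer at the end. A small Python sketch (function name my own):

```python
import numpy as np

def rref_mod2(rows):
    """Row-reduce a matrix over GF(2); returns the nonzero rows (a basis)."""
    M = np.array(rows, dtype=int) % 2      # -1 becomes 1, as desired
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]      # swap pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2   # addition IS subtraction in GF(2)
        r += 1
    return M[:r]

# The poster's example: over Z_2 the two vectors coincide.
assert rref_mod2([[1, 1, 1], [1, -1, 1]]).tolist() == [[1, 1, 1]]
```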

### Calculating determinant of a matrix of Bessel Func...

May 13 2015
1 5

Hello Everyone

I am new to Maple and I have to find the determinant of the following matrix

Here k is a constant.

### Nullspace Calculation By Maple...

May 04 2015
0 3

Hi,

When I calculate the nullspace of a matrix, my solution comes out in a different order than Maple's. So the question is: what steps does Maple use to calculate NullSpace, ColumnSpace, and eigenvalues? All of these are calculated by Maple in a different order than when I calculate by hand.

What I meant to say is that the answer given by Maple for the nullspace is in a different order than if I were to do the same calculation by hand.

This is what I got calculating the nullspace by hand; when Maple does the calculation, it returns the answer in the opposite order.

Thanks

Bill

### Can Maple do Jacobi's Method with a given toleranc...

November 05 2014
0 6

Is there a way to do the following on Maple:

I want Maple to use Jacobi's method to give an approximation of the solution to the following linear system, with a tolerance of 10^(-2) and with a maximum iteration count of 300.

The linear system is

x_1-2x_3=0.2

-0.5x_1+x_2-0.25x_3=-1.425

x_1-0.5x_2+x_3=2
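
Jacobi's method with a tolerance and an iteration cap is straightforward to write by hand; here is a Python sketch (not Maple, and the function name is my own). One caveat worth flagging: the system above is not strictly diagonally dominant, so Jacobi iteration is not guaranteed to converge on it.

```python
import numpy as np

def jacobi(A, b, tol=1e-2, max_iter=300):
    """Jacobi iteration: x_new = D^{-1} (b - R x), stopping when the
    max-norm of the update falls below tol."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal part
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# The poster's system (convergence not guaranteed -- see caveat above):
A = [[1.0, 0.0, -2.0],
     [-0.5, 1.0, -0.25],
     [1.0, -0.5, 1.0]]
b = [0.2, -1.425, 2.0]
x, iters = jacobi(A, b, tol=1e-2, max_iter=300)
```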

Thanks.

November 03 2014
0 0

Exercise Prove that (-1)u = - u in any vector space. Note that (-1)u means the vector u multiplied by the number -1, and - u means the negative vector from the fourth property of the definition of vector spaces.

Exercise Prove that (a1u1 + a2u2) + (b1u1 + b2u2) = (a1 + b1)u1 + (a2 + b2)u2 in any vector space.

Exercise Give a detailed reason why, in any vector space,

• u + v = 0 ⇒ u = - v.

• 3u + 2v - 4w = 0 ⇒ v = - 3/2 u + 2w.
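
For the first exercise, a standard argument runs through the axioms in two steps (first that 0u is the zero vector, then that (-1)u behaves as the additive inverse):

```latex
\begin{aligned}
0u &= (0+0)u = 0u + 0u
   &&\Rightarrow\; 0u = 0 \quad\text{(add $-(0u)$ to both sides)}\\
0  &= 0u = \bigl(1 + (-1)\bigr)u = 1u + (-1)u = u + (-1)u
   &&\Rightarrow\; (-1)u = -u \quad\text{(uniqueness of the additive inverse).}
\end{aligned}
```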

### solving ill conditioned linear system...

August 27 2014
0 6

I wrote a procedure in Maple for solving integro-differential equations, which results in an ill-conditioned linear system of algebraic equations. I used the LinearSolve command with method=LU to solve the system, but my algorithm failed and does not converge. Is there any command in Maple for solving such systems?

### What is the reduced row echelon form of A......qui...

August 16 2014
0 2

let A be the matrix

[  7    7    9  -17
   6    6    1   -2
 -12  -12  -27    1
   7    7   17  -15 ]

What is the reduced row echelon form of A?

What is the rank of A?
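
Both questions are one call away in a computer algebra system; a Python/SymPy sketch on the matrix above:

```python
import sympy as sp

A = sp.Matrix([
    [  7,   7,   9, -17],
    [  6,   6,   1,  -2],
    [-12, -12, -27,   1],
    [  7,   7,  17, -15],
])

R, pivots = A.rref()     # R is the reduced row echelon form
# Columns 1 and 2 are identical, so there are only three pivot columns:
assert pivots == (0, 2, 3) and A.rank() == 3
```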

### Question on linear algebra...

August 16 2014
1 7

A consistent system of linear equations in 14 unknowns is reduced to row echelon form. There are then 10 non-zero rows (i.e. 10 pivots). How many parameters (free variables) will occur in the solution?
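
The count is rank-nullity bookkeeping: in a consistent system, parameters = unknowns − pivots, so here 14 − 10 = 4. A quick Python/SymPy check on a smaller analogue (a made-up echelon matrix, not from the question):

```python
import sympy as sp

# Toy echelon coefficient matrix: 5 unknowns, 3 pivots -> 2 free variables.
M = sp.Matrix([[1, 2, 0, 1, 0],
               [0, 0, 1, 3, 0],
               [0, 0, 0, 0, 1]])
assert M.cols - M.rank() == 2

# The question's system: 14 unknowns, 10 pivots.
assert 14 - 10 == 4
```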

### creating block matrices...

July 29 2014
0 1

hi fellas
i have a question which may be very common but i can't handle it right now.
There are 9 matrices, each of them a 3*3 matrix, and all of the matrices' elements are in symbolic form.
All i want is to form one big matrix made out of the 9 matrices mentioned above in **maple**; i mean something like "cell" in Matlab. I'll appreciate your help.
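
For comparison outside Maple, the same assembly is a block-matrix construction; a Python/SymPy sketch with nine symbolic 3×3 blocks (symbol names are my own):

```python
import sympy as sp

# Nine symbolic 3x3 blocks A[i][j], pasted into one 9x9 matrix.
A = [[sp.Matrix(3, 3, lambda r, c, i=i, j=j: sp.Symbol(f'a{i}{j}_{r}{c}'))
      for j in range(3)] for i in range(3)]
M = sp.BlockMatrix(A).as_explicit()

assert M.shape == (9, 9)
assert M[0, 3] == sp.Symbol('a01_00')   # top-left entry of block A[0][1]
```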

### how to calculate out (1,1,1) and (1,2,0)...

June 13 2014
0 2

http://en.wikibooks.org/wiki/Linear_Algebra/Representing_Linear_Maps_with_Matrices

how to calculate the first step

(2,0) -> (1,1,1) and (1,4) -> (1,2,0)

how to use maple command to get (1,1,1) and (1,2,0)
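
One way to organize that first step: put the domain vectors and their images in as matrix columns, then solve for the representing matrix. A Python/SymPy sketch (matrix names are my own):

```python
import sympy as sp

D = sp.Matrix([[2, 1],
               [0, 4]])            # domain vectors (2,0), (1,4) as columns
B = sp.Matrix([[1, 1],
               [1, 2],
               [1, 0]])            # their images (1,1,1), (1,2,0) as columns

Rep = B * D.inv()                  # since Rep * D must equal B

assert Rep * sp.Matrix([2, 0]) == sp.Matrix([1, 1, 1])
assert Rep * sp.Matrix([1, 4]) == sp.Matrix([1, 2, 0])
```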

June 13 2014
0 1

how to use maple command to calculate rep(h)

to get (0,-1/2,1) and (1,-1,0)

http://en.wikibooks.org/wiki/Linear_Algebra/Representing_Linear_Maps_with_Matrices

May 02 2014
0 0

So I'm trying to write a Maple script that computes the Jordan form of a given 3×3 matrix A. If {a,b,c} is a basis with respect to which A is in Jordan form, then I'm trying to make it plot the three lines spanned by a, b, and c in the standard coordinate system. I was hinted to use plot3d here.

Sidenote: I know how to compute the Jordan matrix of A, namely by finding the eigenvectors and generalised eigenvectors and putting them in as columns of an invertible 3×3 matrix S, so that (S^-1)*(A)*(S) = J.
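
The basis-extraction part can be sketched in Python/SymPy (the matrix below is a made-up example, not the poster's): the columns of the transition matrix are exactly the vectors a, b, c whose spanned lines would be plotted.

```python
import sympy as sp

# A hypothetical 3x3 matrix with a genuine 2x2 Jordan block:
A = sp.Matrix([[ 1, 1, 0],
               [-1, 3, 0],
               [ 0, 0, 3]])

P, J = A.jordan_form()              # A == P * J * P^{-1}
assert sp.simplify(P.inv() * A * P - J).is_zero_matrix
assert sorted(J[i, i] for i in range(3)) == [2, 2, 3]

# The columns of P are the basis {a, b, c}; each spans one line to plot.
a, b, c = P.col(0), P.col(1), P.col(2)
```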

### solving a linear equation subject to certain const...

December 19 2013
0 1

In my research I need to solve a linear equation (getting its null space) under some constraints.

The matrix is given below:

The constraints are (x[1]...x[16] > 0, x[17]...x[20] arbitrary...)

The solutions should actually be a conical combination of a number of vectors (a conical combination means a positive sum of vectors), and I wish to get those vectors. Is there a way I could achieve this with Maple?

### convert to the vector mode...

June 24 2013
0 9

How can I convert these equations to vector form?

eq1[t] := .4614468816e11*a[13][t]*a[14][t]+2291210.983*a[16][t]^2-.2842690977e11*a[17][t]^2-.1782456232e12*a[18][t]^2+.1689228391e12*a[13][t]^2+6406045.412*a[14][t]^2-4317791.317*a[15][t]^2+.9846526429e12*a[1][t]+.2533881291e12*a[2][t]+.3076607771e11*a[3][t]+8105875.203*a[4][t]-5054889.363*a[5][t]-34561.30764*a[6][t]-6275707.162*a[16][t]*a[13][t]-20873274.82*a[14][t]*a[16][t]-27435155.86*a[16][t]*a[15][t]-5539558.102*a[17...
