I have two linear algebra texts [1, 2] with examples of the process of constructing the transition matrix P that brings a matrix A to its Jordan form J. In each, the authors make what seem to be arbitrary selections of basis vectors via processes that do not appear algorithmic. So recently, while looking at some other calculations in linear algebra, I decided to revisit these calculations in as orderly a way as possible.
First, I needed a matrix A with a prescribed Jordan form. Actually, I started with a Jordan form J, and then constructed A via a similarity transform on J. To avoid introducing fractions, I sought transition matrices P with determinant 1, so that P^(-1) would also have integer entries.
Let's begin with J, obtained with Maple's JordanBlockMatrix command.
Tools ≻ Load Package: Linear Algebra
Loading LinearAlgebra
J := JordanBlockMatrix([[2, 3], [2, 2], [2, 1]])

    [2  1  0  0  0  0]
    [0  2  1  0  0  0]
    [0  0  2  0  0  0]
    [0  0  0  2  1  0]
    [0  0  0  0  2  0]
    [0  0  0  0  0  2]
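For readers following along without Maple, the same J can be assembled in Python with NumPy. This is an illustrative translation of JordanBlockMatrix, not part of the original worksheet; the helper name jordan_block is this sketch's own.

```python
import numpy as np

def jordan_block(eig, size):
    """One Jordan block: eig on the diagonal, ones on the superdiagonal."""
    return np.diag([float(eig)] * size) + np.diag([1.0] * (size - 1), 1)

# Same structure as JordanBlockMatrix([[2, 3], [2, 2], [2, 1]]):
# eigenvalue 2 with blocks of sizes 3, 2, and 1.
blocks = [jordan_block(2, 3), jordan_block(2, 2), jordan_block(2, 1)]
J = np.zeros((6, 6))
row = 0
for blk in blocks:
    n = blk.shape[0]
    J[row:row + n, row:row + n] = blk
    row += n
```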

The eigenvalue λ = 2 has algebraic multiplicity 6. There are sub-blocks of size 3×3, 2×2, and 1×1. Consequently, there will be three eigenvectors, supporting chains of generalized eigenvectors having total lengths 3, 2, and 1. Before delving further into structural theory, we next find a transition matrix P with which to fabricate A = P·J·P^(-1).
The following code generates random 6×6 matrices of determinant 1 with small integer entries. For each, the matrix A = P·J·P^(-1) is computed. From these candidates, one A is then chosen.
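The generation code itself did not survive the export. One simple way to produce determinant-1 integer matrices is to multiply a unit lower-triangular by a unit upper-triangular integer matrix, sketched here in Python/NumPy (random_unimodular and its entry range are this sketch's assumptions, not the author's code):

```python
import numpy as np

rng = np.random.default_rng(2024)

def random_unimodular(n, low=-2, high=2):
    """Random integer matrix with determinant exactly 1: the product of a unit
    lower-triangular and a unit upper-triangular integer matrix (each det 1)."""
    L = np.eye(n, dtype=np.int64)
    U = np.eye(n, dtype=np.int64)
    L[np.tril_indices(n, -1)] = rng.integers(low, high + 1, size=n * (n - 1) // 2)
    U[np.triu_indices(n, 1)] = rng.integers(low, high + 1, size=n * (n - 1) // 2)
    return L @ U

P = random_unimodular(6)
J = np.diag([2.0] * 6) + np.diag([1.0, 1, 0, 1, 0], 1)

# Since det(P) = 1, P^(-1) is also an integer matrix, so A = P J P^(-1)
# has integer entries -- no fractions appear.
A = P @ J @ np.linalg.inv(P)
```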
After several such trials, a suitable matrix A was chosen, for which the characteristic and minimal polynomials are (λ − 2)^6 and (λ − 2)^3, respectively.
So, if we had started with just A, we'd now know that the algebraic multiplicity of its one eigenvalue λ = 2 is 6, and, since the minimal polynomial has degree 3, that there is at least one 3×3 sub-block in the Jordan form. We would not know whether the other sub-blocks were all 1×1, or a 1×1 and a 2×2, or another 3×3. Here is where some additional theory must be invoked.

The null spaces Nk of the matrices (A − 2I)^k are nested: N1 ⊂ N2 ⊂ N3, as depicted in Figure 1, where the vectors xi, i = 1, …, 6, are basis vectors.
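These nested dimensions are exactly what disambiguates the block structure. A quick numeric check in Python/NumPy, using J itself (any A similar to J has the same nullities):

```python
import numpy as np

J = np.diag([2.0] * 6) + np.diag([1.0, 1, 0, 1, 0], 1)
B = J - 2 * np.eye(6)  # for an actual A, use A - 2I; the ranks are the same

# dim Nk = 6 - rank((A - 2I)^k)
dims = [6 - np.linalg.matrix_rank(np.linalg.matrix_power(B, k)) for k in (1, 2, 3)]
# dims == [3, 5, 6]: three blocks in all; 5 - 3 = 2 blocks of size >= 2;
# 6 - 5 = 1 block of size >= 3 -- forcing block sizes 3, 2, 1.
```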

Figure 1   The nesting of the null spaces
The vectors x1, x2, x3 are eigenvectors, and form a basis for the eigenspace N1. The vectors x1, …, x5 form a basis for the subspace N2, and the vectors x1, …, x6 form a basis for the space N3, but the vectors x4, x5, x6 are not yet the generalized eigenvectors. The vector x6 must be replaced with a vector v3 that lies in N3 but is not in N2. Once such a vector is found, then x5 can be replaced with the generalized eigenvector v2 = (A − 2I)·v3, and one of the eigenvectors can be replaced with v1 = (A − 2I)·v2. The vectors v1, v2, v3 are then said to form a chain, with v1 being the eigenvector, and v2 and v3 being the generalized eigenvectors.
If we could carry out these steps, we'd be in the state depicted in Figure 2.

Figure 2   The null spaces with the longest chain determined
Next, basis vector x5 is to be replaced with w2, a vector in N2 but not in N1, and linearly independent of v2. If such a w2 is found, then a second eigenvector can be replaced with w1 = (A − 2I)·w2. The vectors w1 and w2 would form a second chain, with w1 as the eigenvector, and w2 as the generalized eigenvector.

Define the matrix B = A − 2I by the Maple calculation

B := A - 2*IdentityMatrix(6):

and note the bases Maple returns for the null spaces of B and B^2. The dimension of N1 is 3, and of N2, 5. However, the basis vectors Maple has chosen for N2 do not include the exact basis vectors chosen for N1.
We now come to the crucial step, finding v3, a vector in N3 that is not in N2 (and consequently, not in N1 either). The examples in [1, 2] are simple enough that the authors can "guess" at the vector to be taken as v3. What we will do is take an arbitrary vector in R^6 = N3, project it onto the 5-dimensional subspace N2, and take the orthogonal complement as v3.

A general vector in R^6 is Z, a vector of six free parameters.

A matrix Q that projects onto N2 is built from the basis vectors of that null space.

The orthogonal complement of the projection of Z onto N2 is then Z − Q·Z. This vector can be simplified by choosing the parameters in Z appropriately. The result is taken as v3.

The other two members of this chain are then v2 = B·v3 and v1 = B·v2.
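In Python/NumPy terms, the projection step and the two follow-up multiplications look like this. It is an illustrative numeric version, with J itself standing in for A since the post's particular A is not reproduced above; null_basis is this sketch's own helper.

```python
import numpy as np

J = np.diag([2.0] * 6) + np.diag([1.0, 1, 0, 1, 0], 1)
B = J - 2 * np.eye(6)  # stand-in for A - 2I

def null_basis(M, tol=1e-9):
    """Columns spanning the null space of M, from the SVD."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[int((s > tol).sum()):].T

N2 = null_basis(B @ B)        # 6 x 5: an orthonormal basis for N2

Z = np.ones(6)                # an arbitrary vector in N3 = R^6
v3 = Z - N2 @ (N2.T @ Z)      # Z minus its projection onto N2
v2 = B @ v3                   # generalized eigenvector, in N2 but not N1
v1 = B @ v2                   # the eigenvector at the bottom of the chain
```

Because N2 comes out of the SVD orthonormal, the projection matrix is simply N2·N2ᵀ, so no normal-equations solve is needed.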
A general vector in N2 is a linear combination of the five vectors that span the null space of B^2, namely, the vectors in the list returned by NullSpace(B^2). We obtain this vector as ZZ, a combination with five free parameters.
A vector in N2 that is not in N1 is the orthogonal complement of the projection of ZZ onto the space spanned by the eigenvectors spanning N1 and the vector v2. This projection matrix, QQ, is formed from those four vectors.

The orthogonal complement of ZZ, taken as w2, is then ZZ − QQ·ZZ.
Replace one of the original eigenvectors with w1, obtained as w1 = B·w2.
The columns of the transition matrix P can be taken as the chain vectors v1, v2, v3, then w1, w2, and finally the remaining eigenvector u, the ordering matching the 3×3, 2×2, and 1×1 blocks in J. Hence, P is the matrix whose columns are these six vectors.

Proof that this matrix P indeed sends A to its Jordan form consists in the calculation P^(-1)·A·P = J.

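Since the post's particular A and P are not reproduced above, here is a self-contained end-to-end sketch of the whole procedure in Python/NumPy: fabricate an A similar to the same J via a determinant-1 matrix, rebuild a transition matrix from the two chains and a leftover eigenvector, and confirm the similarity. All names here (P0, null_basis, complement, Pc) are this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(0)

J = np.diag([2.0] * 6) + np.diag([1.0, 1, 0, 1, 0], 1)
P0 = np.array([[1, 0, 1, 0, 0, 0],
               [0, 1, 0, 1, 0, 0],
               [0, 0, 1, 0, 1, 0],
               [0, 0, 0, 1, 0, 1],
               [0, 0, 0, 0, 1, 0],
               [0, 0, 0, 0, 0, 1]], dtype=float)  # unit triangular: det = 1
A = P0 @ J @ np.linalg.inv(P0)                    # a matrix with Jordan form J
B = A - 2 * np.eye(6)

def null_basis(M, tol=1e-9):
    """Columns spanning the null space of M, from the SVD."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[int((s > tol).sum()):].T

def complement(M, z):
    """z minus its orthogonal projection onto the column space of M."""
    return z - M @ np.linalg.lstsq(M, z, rcond=None)[0]

N1 = null_basis(B)            # eigenspace, dimension 3
N2 = null_basis(B @ B)        # dimension 5; N3 is all of R^6

# Chain of length 3: v3 outside N2, then v2 = B v3, v1 = B v2.
v3 = complement(N2, rng.standard_normal(6))
v2 = B @ v3
v1 = B @ v2

# Chain of length 2: w2 in N2 but outside span(N1, v2); w1 = B w2.
zz = N2 @ rng.standard_normal(5)                  # a general vector of N2
w2 = complement(np.column_stack([N1, v2]), zz)
w1 = B @ w2

# The remaining eigenvector, independent of v1 and w1.
u = complement(np.column_stack([v1, w1]), N1 @ rng.standard_normal(3))

Pc = np.column_stack([v1, v2, v3, w1, w2, u])
# inv(Pc) @ A @ Pc now reproduces J (up to roundoff).
```

A random probe vector works here because a vector drawn at random lies outside any fixed proper subspace with probability 1, which is exactly why the projection trick avoids the "guessing" in [1, 2].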
The bases for Nk, k = 1, 2, 3, are not unique. The columns of the matrix P provide one set of basis vectors, but the columns of the transition matrix generated by Maple's JordanForm command provide another.

I've therefore added to my to-do list the investigation into Maple's algorithm for determining an appropriate set of basis vectors that will support the Jordan form of a matrix.
References

[1] Linear Algebra and Matrix Theory, Evar Nering, John Wiley and Sons, Inc., 1963
[2] Matrix Methods: An Introduction, Richard Bronson, Academic Press, 1969


Download JordanForm_blog.mw