I remember from my introductory class in linear algebra that my instructor said

> It is impossible to calculate the Jordan canonical form of a matrix numerically.

Another quote I remember is

> The Jordan canonical form does not depend continuously on the matrix.

For both quotes I did not remember the underlying reasons, and since I am teaching an introductory class on linear algebra this year, I got to thinking about these issues again.

Here is a very simple example for the fact in the second quote:

Consider the matrix

$$A_\varepsilon = \begin{pmatrix} 1 & \varepsilon \\ 0 & 1 \end{pmatrix}$$

for $\varepsilon \neq 0$. This matrix has $1$ as a double eigenvalue, but the corresponding eigenspace is one-dimensional and spanned by $e_1 = (1,0)^T$. To extend this vector to a basis we calculate a principal vector $v$ by solving

$$(A_\varepsilon - I)v = e_1,$$

which leads to

$$v = \begin{pmatrix} 0 \\ 1/\varepsilon \end{pmatrix}.$$

Defining $S_\varepsilon = \begin{pmatrix} e_1 & v \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1/\varepsilon \end{pmatrix}$ we get the Jordan canonical form of $A_\varepsilon$ as

$$S_\varepsilon^{-1} A_\varepsilon S_\varepsilon = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

So we have

$$A_\varepsilon \to I \quad (\varepsilon \to 0),$$

but $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is not the Jordan canonical form of $I$ (which is $I$ itself). So, in short: the Jordan canonical form of the limit of $A_\varepsilon$ is not the limit of the Jordan canonical forms of $A_\varepsilon$. In other words: taking limits does not commute with forming the Jordan canonical form.
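The discontinuity can be replayed in exact rational arithmetic. Here is a stdlib-only Python sketch (the helper and variable names are mine, not from any library): for every ε ≠ 0 the similarity transform produces one and the same Jordan block, while A_ε itself tends to the identity.

```python
from fractions import Fraction

def matmul(X, Y):
    """Exact 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

JORDAN_BLOCK = [[1, 1], [0, 1]]

for eps in (Fraction(1, 10), Fraction(1, 10**6), Fraction(1, 10**12)):
    A = [[1, eps], [0, 1]]            # A_eps: double eigenvalue 1
    S = [[1, 0], [0, 1 / eps]]        # columns: eigenvector e_1, principal vector
    S_inv = [[1, 0], [0, eps]]        # exact inverse of S
    J = matmul(matmul(S_inv, A), S)   # S^{-1} A_eps S
    assert J == JORDAN_BLOCK          # the same Jordan block for every eps != 0

# ... yet A_eps converges to the identity, whose Jordan form is the identity,
# not the Jordan block: the Jordan form jumps at eps = 0.
```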

A side note: Of course, the Jordan canonical form is not even unique in general, so speaking of “dependence on the matrix” is an issue in itself. What we have shown is that there is no way to get continuous dependence on the matrix even when non-uniqueness is not an issue (as in the example above).

What about the first quote? Why is it impossible to compute the Jordan canonical form numerically? Let’s just try! We start with the simplest non-diagonalizable matrix

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

If we ask MATLAB or Octave to do the eigenvalue decomposition we get

```
>> [V,D] = eig(A)
V =
   1.00000  -1.00000
   0.00000   0.00000
D =
   1   0
   0   1
```

We see that $V$ does not seem to be invertible and indeed we get

```
>> rank(V)
ans = 1
```

What is happening? MATLAB did not promise to produce an invertible $V$, and it does not promise that the outcome will fulfill $A = VDV^{-1}$ (which is my definition of diagonalizability). It does promise that $AV = VD$, and in fact

```
>> A*V
ans =
   1.00000  -1.00000
   0.00000   0.00000
>> V*D
ans =
   1.00000  -1.00000
   0.00000   0.00000
>> A*V-V*D
ans =
   0.0000e+00   2.2204e-16
   0.0000e+00   0.0000e+00
```

so the promised identity $AV = VD$ is fulfilled up to machine precision (which is actually equal to $2.2204\cdot 10^{-16}$, and which we denote by $\varepsilon$ from here on).

How did MATLAB diagonalize this matrix? Here is the thing: the diagonalizable matrices are dense in $\mathbb{C}^{n\times n}$! (You have probably heard that before…) What does that mean numerically? Any matrix that you represent in floating point numbers is actually a representative of a whole bunch of matrices: each entry is only known up to a certain precision. But this bunch of matrices does contain some matrix which is diagonalizable! This is exactly what it means to be a dense set. So it is impossible to say whether a matrix given in floating point numbers is actually diagonalizable or not. So, which matrix was diagonalized by MATLAB? Let us have a closer look at the matrix $V$: the entries in the first row are in fact $1$ and $-1$:
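The “whole bunch of matrices” point can be made concrete with a stdlib-only Python sketch (variable names mine): perturb the $(2,2)$ entry of $A$ by a quarter of the machine precision. In exact arithmetic the perturbed matrix has two distinct eigenvalues (its diagonal entries, since it is triangular) and is therefore diagonalizable, yet in double precision the perturbation cannot even be represented, so both matrices are bitwise identical.

```python
import sys

eps = sys.float_info.epsilon           # 2**-52, distance from 1.0 to the next float
A = [[1.0, 1.0], [0.0, 1.0]]           # non-diagonalizable Jordan block
B = [[1.0, 1.0], [0.0, 1.0 + eps/4]]   # in exact arithmetic: eigenvalues 1 and 1 + eps/4

# The perturbation eps/4 is below half an ulp of 1.0, so it rounds away ...
assert 1.0 + eps / 4 == 1.0
# ... and A and B are indistinguishable as floating point matrices:
assert A == B
```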

```
>> V(1,:)
ans =
   1  -1
```

In the second row we have

```
>> V(2,1)
ans = 0
>> V(2,2)
ans = 2.2204e-16
```

and there we have it. The inverse of $V$ does exist (although $V$ has numerical rank $1$) and it is

```
>> inv(V)
warning: matrix singular to machine precision, rcond = 1.11022e-16
ans =
   1.0000e+00   4.5036e+15
   0.0000e+00   4.5036e+15
```

and note that $4.5036\cdot 10^{15}$ is indeed just the inverse of the machine precision, so this inverse is actually 100% accurate. Recombining gives
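The claim that the large entry is exactly the inverse of the machine precision is easy to check in Python (stdlib only): $1/\varepsilon = 2^{52}$, which is exactly representable, so no rounding occurs.

```python
import sys

eps = sys.float_info.epsilon       # 2.220446049250313e-16 = 2**-52
inv_eps = 1.0 / eps
assert inv_eps == 2.0**52          # exactly representable: the division is exact
print(f"1/eps = {inv_eps:.4e}")    # -> 1/eps = 4.5036e+15
```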

```
>> V*D*inv(V)
warning: matrix singular to machine precision, rcond = 1.11022e-16
ans =
   1   0
   0   1
```

which is not even close to our original $A$.

How is that? Here is a solution. The matrix

$$B = \begin{pmatrix} 1 & 1 \\ 0 & 1-\varepsilon \end{pmatrix}$$

is indistinguishable from $A$ numerically. However, it has two distinct eigenvalues $1$ and $1-\varepsilon$, so it is diagonalizable. Indeed, a basis of eigenvectors is

$$v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} -1 \\ \varepsilon \end{pmatrix},$$

which is indistinguishable from $V$ above, and it holds that

$$\begin{pmatrix} v_1 & v_2 \end{pmatrix}^{-1} B \begin{pmatrix} v_1 & v_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1-\varepsilon \end{pmatrix},$$

which is indistinguishable from $D$.
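This resolution can be verified in exact rational arithmetic. The following stdlib-only Python sketch (helper and names mine) takes $\varepsilon = 2^{-52}$ and checks that the similarity transform by $V$ diagonalizes the perturbed matrix exactly, with the two distinct eigenvalues on the diagonal.

```python
from fractions import Fraction

def matmul(X, Y):
    """Exact 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

eps = Fraction(1, 2**52)            # machine precision, as an exact rational
B = [[1, 1], [0, 1 - eps]]          # numerically indistinguishable from A
V = [[1, -1], [0, eps]]             # columns: eigenvectors (1,0) and (-1,eps)
V_inv = [[1, 1/eps], [0, 1/eps]]    # exact inverse of V (entries 1/eps = 4.5036e15)

# V^{-1} B V is diagonal, carrying the two distinct eigenvalues:
D = matmul(matmul(V_inv, B), V)
assert D == [[1, 0], [0, 1 - eps]]
```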

July 3, 2019 at 6:42 pm

[…] my previous post I illustrated why it is not possible to compute the Jordan canonical form numerically (i.e. in […]

January 31, 2023 at 2:56 pm

This entire post is built on a fallacy.

This argument applies nearly as much to the SVD as it does to the JCF, even though the former does in fact get used in numerical computing. While all the factors in the JCF do vary discontinuously, so do *some* of the factors of the SVD (specifically, the left and right singular vectors, but not the singular values themselves).

This seems to indicate that “can/cannot compute” type questions might not be addressed rigorously in current numerical analysis. I’m still not sure.

It might indeed be the case that the JCF “cannot be computed” but the above blog post does not explain why. And it looks like even the scholarly references aren’t good at answering the question. I don’t really know how such questions can be addressed rigorously; there might be a need for a model.

January 31, 2023 at 7:48 pm

The situation for the JCF and the SVD is different. The main point why the JCF cannot be computed is not instability, but that for any non-diagonalizable matrix A and any ε>0 there is a diagonalizable matrix B which is ε-close to A. No similar claim holds for the SVD.

More about instability-(non)-issues of computation of the SVD can be found in the answers here:

https://mathoverflow.net/questions/369930/why-is-uncomputability-of-the-spectral-decomposition-not-a-problem/

January 31, 2023 at 8:44 pm

The points made in that MO thread apply to the JCF. Where did you get the impression that they didn’t?

By the way, when you say something is “non-computable”, you need to provide rigorous definitions of what you mean. You haven’t done that. So the claim makes no sense because it isn’t even defined.

Also, see: https://www.sciencedirect.com/science/article/pii/S0024379521004493?casa_token=wSZNDhyPpfUAAAAA:7K8-dBAbH2ATpu_SDMtAtGcJ1AA7NuO_2E3BiZDKbFfL54WcGIqyhjWfZT3IcyLoJOpGgYYW_g#se0010

February 1, 2023 at 9:09 am

Well, I don’t try to define “non-computable” in a mathematically precise way (I basically know nothing about this stuff). What I am trying to highlight (mostly for myself) is that for any non-diagonalizable matrix A there is a diagonalizable matrix which is numerically indistinguishable from A.