Exact Numerical Solution of Rational Eigensystems from Rational Symmetric Matrices


J.A.D.W. Anderson
Department of Computer Science, The University of Reading, England.


P.K. Sweby
Department of Mathematics, The University of Reading, England.



Abstract: The rational orthogonal transformations have a simple, monotonic, rational parameterisation which allows them to be used in place of floating point approximations to the real orthogonal transformations. An immediate consequence of this is that standard algorithms employing Jacobi rotations all give exactly orthogonal eigenvectors. Thus rational algorithms, which all have zero orthogonality error, compute eigenvectors which are infinitely more orthogonal than the eigenvectors computed by any floating point algorithm.
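The rational parameterisation can be illustrated with a minimal sketch in Python, using exact `Fraction` arithmetic. The half-angle form c = (1 - t^2)/(1 + t^2), s = 2t/(1 + t^2) is assumed here to be the kind of rational parameterisation the abstract refers to; for any rational t it yields a rotation whose orthogonality holds exactly, with no rounding error:

```python
from fractions import Fraction

def rational_rotation(t):
    """Half-angle parameterisation: for any rational t the pair (c, s)
    satisfies c*c + s*s == 1 exactly, so the 2 x 2 rotation built from
    it is exactly orthogonal."""
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

c, s = rational_rotation(Fraction(1, 3))
print(c, s)           # 4/5 3/5 -- the 3-4-5 Pythagorean triple
print(c * c + s * s)  # 1 -- zero orthogonality error
```

Because every quantity is a `Fraction`, the orthogonality check is an exact equality, not a tolerance test; this is the sense in which the rational rotations have zero orthogonality error.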
Further theoretical and practical advantages follow when rational eigensystems are considered. For example, rational matrices are enumerable, so theoretical generate-and-test algorithms exist which compute the exact rational eigenvalues and the exact rational eigenvectors for any rational matrix.
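The enumerability claim can be made concrete with the Calkin-Wilf sequence, which visits every positive rational exactly once; a theoretical generate-and-test eigensolver would draw candidate eigenvalues and eigenvector entries from such an enumeration. This sketch (the function name is hypothetical) shows the enumeration itself:

```python
from fractions import Fraction
from itertools import islice
from math import floor

def positive_rationals():
    """Calkin-Wilf enumeration: yields every positive rational
    exactly once, starting from 1."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)

print(list(islice(positive_rationals(), 7)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3),
#  Fraction(3, 2), Fraction(2, 3), Fraction(3, 1)]
```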
Such algorithms are not, however, practical, though some progress toward practical solutions has been made with Jacobi-like algorithms for rational eigensystems arising from rational symmetric matrices.
Firstly, for N x N symmetric matrices formed by conjugating a rational, diagonal matrix of eigenvalues by a rational similarity transformation containing at most N-1 non-trivial, plane rotations, at most N-1 sweeps of a rational Jacobi algorithm solve the eigensystem exactly, giving both the eigenvalues and the eigenvectors. An implementation of this algorithm runs at practical speeds.
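The 2 x 2 case of this first situation can be sketched in Python with exact `Fraction` arithmetic (the helper names are hypothetical). A symmetric matrix is built from rational eigenvalues and a rational rotation; one Jacobi rotation then recovers the eigenvalues exactly. The rational square root succeeds precisely because the matrix was constructed from a rational rotation:

```python
from fractions import Fraction
from math import isqrt

def rational_sqrt(q):
    """Exact square root of a non-negative Fraction,
    or None if q is not the square of a rational."""
    rn, rd = isqrt(q.numerator), isqrt(q.denominator)
    if rn * rn == q.numerator and rd * rd == q.denominator:
        return Fraction(rn, rd)
    return None

def jacobi_2x2(a11, a12, a22):
    """One exact Jacobi rotation diagonalising [[a11, a12], [a12, a22]].
    Works when tan(theta) for the annihilating rotation is rational."""
    if a12 == 0:
        return a11, a22
    tau = (a22 - a11) / (2 * a12)
    root = rational_sqrt(tau * tau + 1)  # rational iff the rotation is rational
    assert root is not None
    # smaller-magnitude root of t^2 + 2*tau*t - 1 = 0 (the stable choice)
    t = -tau + root if tau >= 0 else -tau - root
    return a11 - t * a12, a22 + t * a12

# Build A = R diag(2, 1) R^T from the exact 3-4-5 rotation c=4/5, s=3/5.
c, s = Fraction(4, 5), Fraction(3, 5)
l1, l2 = Fraction(2), Fraction(1)
a11 = c * c * l1 + s * s * l2
a22 = s * s * l1 + c * c * l2
a12 = c * s * (l1 - l2)
print(jacobi_2x2(a11, a12, a22))  # (Fraction(2, 1), Fraction(1, 1)) -- exact
```

The eigenvalues come back as exact rationals, with no residual off-diagonal element to drive below a tolerance, which is what distinguishes this from the asymptotic floating point Jacobi iteration.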
Secondly, where the similarity transformation contains more than N-1 non-trivial, plane rotations, a different algorithm is effective. Instead of sweeping the matrix, the last column of off-diagonal elements is iteratively reduced to zero by many applications of so-called reduction rotations and one application of a Jacobi rotation. This reduces the matrix to a block of size N-1 x N-1 with one exact, rational eigenvalue on the diagonal at (N,N). Hence the computation of the rotations needed to zero the next column is relatively faster, both because the column has one fewer element and because the elements have shorter bit codes. Conversely, the accumulation of eigenvectors is slower, because the matrix of eigenvectors has constant size N x N and because its elements grow longer as the algorithm proceeds. In the last step of the algorithm the remaining 2 x 2 block is solved by one Jacobi rotation.
Algorithms and heuristics for computing the reduction rotations at practical speeds are still a subject of research, but on theoretical grounds it is guaranteed that a search for reduction rotations terminates in finite time. By contrast, conventional, real algorithms are asymptotic and so theoretically run without termination. Hence, the rational algorithms are infinitely faster than the standard algorithms. Of course, early termination of floating point implementations leads to shorter run times than the corresponding rational implementations, but this holds out the prospect of combining fast floating point implementations with slow, rational refinement post-processes to achieve exact numerical solution of rational eigensystems or accurate solution of irrational eigensystems.