
# Spectral Transformations

R. Lehoucq and D. Sorensen

It is well known that the iterative projection methods outlined in the previous section rapidly provide approximations to well-separated extremal eigenvalues. Quite often, however, the wanted eigenvalues may not be well separated, or may lie in the interior of the convex hull of the eigenvalues. In these situations, iterative projection methods need many steps to generate acceptable approximations to these eigenvalues, if they converge at all. An alternative is to employ a spectral transformation of $A$ so that these poorly separated eigenvalues are transformed to well-separated extremal eigenvalues. Of course, the eigenvalues of $A$ should be easily recoverable from the eigenvalues of the transformed problem, and the transformation should be efficiently applicable in terms of computer time and storage.

The notion of a well-separated (or nonclustered) eigenvalue can be roughly explained as follows. A well-separated eigenvalue $\lambda_i$ is one for which $|\lambda_i - \lambda_j| \gg \epsilon \|A\|$ for all $j$ with $j \neq i$, where $\epsilon$ is the machine precision. For Hermitian matrices, we may take $\|A\| = \|A\|_2$, while for non-Hermitian matrices, $\|A\| = \|A\|_F$.
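As a small numerical illustration of a separation criterion of this form (the test matrix and the factor used to stand in for "much greater than" are illustrative choices, not part of the text):

```python
import numpy as np

# Hypothetical illustration: test a criterion of the form
# |lambda_i - lambda_j| >> eps * ||A||, reading "much greater"
# as a factor of 1e3 (an illustrative choice).
A = np.diag([1.0, 1.0 + 1e-12, 10.0])    # first two eigenvalues are clustered
lam = np.sort(np.linalg.eigvalsh(A))

eps = np.finfo(float).eps
threshold = 1e3 * eps * np.linalg.norm(A, 2)

gaps = np.abs(lam[:, None] - lam[None, :])
np.fill_diagonal(gaps, np.inf)           # ignore an eigenvalue's gap to itself
min_gap = gaps.min(axis=1)               # distance to the nearest other eigenvalue

well_separated = min_gap > threshold
# well_separated -> [False, False, True]: 10.0 is well separated,
# while the pair near 1.0 is not.
```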

The shift-and-invert spectral transformation (SI) is the one typically used. If $(x, \lambda)$ is an eigenpair for $Ax = \lambda Bx$ and the shift $\sigma \neq \lambda$, then the shift-and-invert eigenvalue problem is

$$(A - \sigma B)^{-1} B x = \nu x, \qquad \nu = \frac{1}{\lambda - \sigma}. \qquad (15)$$

This SI is effective for finding eigenvalues near $\sigma$ because the eigenvalues $\nu = 1/(\lambda - \sigma)$ of the transformed operator $(A - \sigma B)^{-1} B$ that are largest in magnitude correspond to the eigenvalues $\lambda$ of the original problem that are nearest to the shift $\sigma$. These transformed eigenvalues of largest magnitude are precisely the eigenvalues that are easiest to compute. Once they are found, they may be transformed back to eigenvalues of the original problem. The direct relation is

$$\lambda = \sigma + \frac{1}{\nu},$$

and the eigenvector associated with $\nu$ in the transformed problem is also a (generalized) eigenvector of the original problem corresponding to $\lambda$. Other spectral transformation techniques include the generalized Cayley transformation; see [324] for further details.
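As a minimal sketch of this back-and-forth map, using a hypothetical diagonal matrix with $B = I$ (so the generalized problem reduces to the standard one):

```python
import numpy as np

# Hypothetical example with B = I; eigenvalues of A are 1..6 and we
# target the interior of the spectrum.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sigma = 3.2                               # shift near the wanted eigenvalue

# SI operator C = (A - sigma I)^{-1}; its eigenvalues are nu = 1/(lambda - sigma),
# so the eigenvalue of A nearest sigma becomes the largest-magnitude nu.
C = np.linalg.inv(A - sigma * np.eye(6))
nu = np.linalg.eigvals(C)

nu_max = nu[np.argmax(np.abs(nu))]        # easiest eigenvalue for a projection method
lam = (sigma + 1.0 / nu_max).real         # direct relation: lambda = sigma + 1/nu
# lam -> 3.0, the eigenvalue of A nearest sigma = 3.2
```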

The SI is extremely effective in terms of iteration steps (that is, the dimension of the subspace) and should be used whenever possible. This is particularly the case when interior eigenvalues are sought, when the desired eigenvalues are significantly smaller in magnitude than the dominant eigenvalues, or when the desired eigenvalues are clustered.

The potential drawback of a spectral transformation is that linear systems involving $A - \sigma B$ must be solved. This can be accomplished either with a matrix factorization or with an iterative method. The user should use a sparse direct method to factor the appropriate matrix whenever feasible. If an iterative method is used for the linear system solves, the accuracy of the solutions must be commensurate with the convergence tolerance used for the eigensolver. A heuristic is that a slightly more stringent tolerance is needed for the iterative linear system solves (relative to the desired accuracy of the eigenvalue calculation). See [283,291,414] for a discussion and further references.
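A minimal sketch of the factorization approach, assuming SciPy's sparse LU (`splu`) as the direct method and a 1-D discrete Laplacian as a stand-in problem (both choices are illustrative, not from the text):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical stand-in problem: the 1-D discrete Laplacian, whose smallest
# eigenvalues cluster near 0 (exact values 4 sin^2(k pi / (2(n+1)))).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

sigma = 1e-4
# Factor A - sigma*I once with a sparse direct method; every subsequent
# application of the SI operator then costs only two triangular solves.
lu = spla.splu(A - sigma * sp.identity(n, format="csc"))
op = spla.LinearOperator((n, n), matvec=lu.solve, dtype=float)

# Largest-magnitude eigenvalues nu of the SI operator ...
nu, _ = spla.eigs(op, k=3, which="LM")
# ... map back to the eigenvalues of A nearest sigma.
lam = np.sort((sigma + 1.0 / nu).real)
```

SciPy's `eigs`/`eigsh` also accept a `sigma` argument that sets up this shift-and-invert mode internally, factorization included.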

It is also possible to carry out the transformation in an approximate way and to work with subspaces that no longer have a Krylov structure. This is the idea behind several algorithms, including the Jacobi-Davidson method. This and other such methods are discussed in more detail in the remainder of the book.

Susan Blackford 2000-11-20