Eigenvalue Problems

Eigenvalue problems have also provided a fertile ground for the development of higher-performance algorithms. These algorithms generally consist of three phases: (1) reduction of the original dense matrix to a condensed form by orthogonal transformations, (2) solution of the condensed-form problem, and (3) optional backtransformation of the solution of the condensed form to the solution of the original matrix. In addition to block versions of algorithms for phases 1 and 3, a number of entirely new algorithms for phase 2 have recently been discovered. In particular, Version 3.0 of LAPACK includes new block algorithms for the singular value decomposition (SVD) and the SVD-based least squares solver, as well as a prototype of a new algorithm, xSTEGR [35,87,86,36], which may be the ultimate solution for the symmetric eigenproblem and SVD on both parallel and serial machines.

The first step in solving many types of eigenvalue problems is to reduce the original matrix to a condensed form by orthogonal transformations. In the reduction to condensed forms, the unblocked algorithms all use elementary Householder matrices and have good vector performance. Block forms of these algorithms have been developed [46], but all require additional operations, and a significant proportion of the work must still be performed by Level 2 BLAS, so there is less possibility of compensating for the extra operations.

The algorithms concerned are:

- reduction of a symmetric matrix to tridiagonal form to solve a
  symmetric eigenvalue problem: LAPACK routine xSYTRD
  applies a symmetric block update of the form

      A ← A - U X^T - X U^T

  using the Level 3 BLAS routine xSYR2K; Level 3 BLAS account for at most half the work.

- reduction of a rectangular matrix to bidiagonal form to compute
  a singular value decomposition: LAPACK routine xGEBRD
  applies a block update of the form

      A ← A - U X^T - Y V^T

  using two calls to the Level 3 BLAS routine xGEMM; Level 3 BLAS account for at most half the work.

- reduction of a nonsymmetric matrix to Hessenberg form to solve a
  nonsymmetric eigenvalue problem: LAPACK routine xGEHRD
  applies a block update of the form

      A ← (I - V T V^T)^T (A - X V^T)

  Level 3 BLAS account for at most three-quarters of the work.
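
The first of these updates can be reproduced directly through the BLAS. The sketch below uses SciPy's BLAS bindings purely for illustration (the matrices U and X here are random stand-ins for the Householder-vector block and auxiliary matrix the real routine builds): a single dsyr2k call applies A ← A - U X^T - X U^T to one triangle of a symmetric matrix, and the result is checked against explicit matrix products.

```python
import numpy as np
from scipy.linalg.blas import dsyr2k

rng = np.random.default_rng(0)
n, b = 6, 2                      # matrix order and block size
A = rng.standard_normal((n, n))
A = A + A.T                      # symmetric matrix to be updated
U = rng.standard_normal((n, b))  # stand-in for the block of Householder vectors
X = rng.standard_normal((n, b))  # stand-in for the auxiliary n-by-b matrix

# One Level 3 BLAS call: C := alpha*(U X^T + X U^T) + beta*C,
# here with alpha = -1, beta = 1; only the lower triangle is referenced.
C = dsyr2k(-1.0, U, X, beta=1.0, c=A, lower=1)

# Reference computation with explicit matrix products.
R = A - U @ X.T - X @ U.T
print(np.allclose(np.tril(C), np.tril(R)))  # lower triangles agree
```

Because the updated matrix stays symmetric, only one triangle ever needs to be computed, which is exactly the saving xSYR2K provides over two general xGEMM calls.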

Note that only in the reduction to Hessenberg form is it possible to
use the block Householder representation described in
subsection 3.4.2.
Extra work must be performed to compute the n-by-b matrices X and Y that are required for the block updates (b is the block size), and extra workspace is needed to store them.

Nevertheless, the performance gains can be worthwhile on some machines for large enough matrices, for example, on an IBM Power 3, as shown in Table 3.11.

Following the reduction of a dense (or band) symmetric matrix to tridiagonal
form T,
we must compute the eigenvalues and (optionally) eigenvectors of T.
Version 2.0 of LAPACK introduced a new algorithm,
xSTEDC,
for finding all eigenvalues and
eigenvectors of T. The new algorithm can exploit Level 2 and 3 BLAS,
whereas
the previous algorithm,
xSTEQR,
could not. Furthermore, xSTEDC usually does
many
fewer flops than xSTEQR, so the speedup is compounded. Briefly, xSTEDC works
as follows
(for details, see [57,89]). The tridiagonal matrix T is written as

    T = [ T1  0 ]
        [ 0  T2 ] + H,

where T1 and T2 are tridiagonal and H is a very simple rank-one matrix. The eigenvalues and eigenvectors of T1 and T2 are found by applying the algorithm recursively, yielding

    T1 = Q1 Λ1 Q1^T  and  T2 = Q2 Λ2 Q2^T,

where Λi is a diagonal matrix of eigenvalues and the columns of Qi are orthonormal eigenvectors. Substituting these decompositions gives

    T = diag(Q1, Q2) ( diag(Λ1, Λ2) + H' ) diag(Q1, Q2)^T,

where H' is again a rank-one matrix. The eigendecomposition diag(Λ1, Λ2) + H' = Q' Λ Q'^T of this diagonal-plus-rank-one matrix is obtained by solving a secular equation, where the diagonals of Λ are the desired eigenvalues of T; the eigenvectors of T, diag(Q1, Q2) Q', are then formed by matrix multiplication using xGEMM.
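
The tearing step can be verified numerically. The sketch below (plain NumPy, for illustration) splits a tridiagonal T into block-diagonal-plus-rank-one form and confirms that, after diagonalizing the two halves, the eigenvalues of T coincide with those of the resulting diagonal-plus-rank-one matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4                      # order of T and split point
a = rng.standard_normal(n)       # diagonal of T
b = rng.standard_normal(n - 1)   # off-diagonal of T
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# Tear T into diag(T1, T2) + beta * v v^T with v = e_m + e_{m+1}.
beta = b[m - 1]
T1 = T[:m, :m].copy(); T1[-1, -1] -= beta
T2 = T[m:, m:].copy(); T2[0, 0] -= beta
v = np.zeros(n); v[m - 1] = v[m] = 1.0
blockdiag = np.block([[T1, np.zeros((m, n - m))],
                      [np.zeros((n - m, m)), T2]])
assert np.allclose(blockdiag + beta * np.outer(v, v), T)

# Diagonalize the halves (done recursively in the real algorithm).
w1, Q1 = np.linalg.eigh(T1)
w2, Q2 = np.linalg.eigh(T2)
Q = np.block([[Q1, np.zeros((m, n - m))],
              [np.zeros((n - m, m)), Q2]])

# Q^T T Q is diagonal plus rank one; its eigenvalues are those of T,
# found in practice by solving a secular equation.
D = np.diag(np.concatenate([w1, w2]))
z = Q.T @ v
print(np.allclose(np.linalg.eigvalsh(D + beta * np.outer(z, z)),
                  np.linalg.eigvalsh(T)))
```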

The same recursive algorithm has been developed for the singular value decomposition of the bidiagonal matrix resulting from reducing a dense matrix with xGEBRD. The SVD driver using this algorithm is called xGESDD. The recursive algorithm is also used for the SVD-based linear least squares solver xGELSD; see Figure 3.3 for how much faster DGELSD is than its older counterpart DGELSS, which is based on the QR-iteration routine xBDSQR. Comparison timings of DGESVD and DGESDD can be found in Tables 3.18 and 3.19.
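
Both driver families are reachable from SciPy, whose high-level routines dispatch to these LAPACK drivers (shown here only to illustrate the choice): scipy.linalg.svd selects DGESDD or DGESVD through lapack_driver, and scipy.linalg.lstsq selects DGELSD or DGELSS. The two paths agree up to roundoff; the difference is speed.

```python
import numpy as np
from scipy.linalg import svd, lstsq

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 40))
rhs = rng.standard_normal(60)

# Singular values via divide and conquer (DGESDD) and QR iteration (DGESVD).
s_dc = svd(A, compute_uv=False, lapack_driver='gesdd')
s_qr = svd(A, compute_uv=False, lapack_driver='gesvd')
print(np.allclose(s_dc, s_qr))

# SVD-based least squares: the newer DGELSD vs. the older DGELSS.
x_d, _, _, _ = lstsq(A, rhs, lapack_driver='gelsd')
x_s, _, _, _ = lstsq(A, rhs, lapack_driver='gelss')
print(np.allclose(x_d, x_s))
```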

Version 3.0 of LAPACK introduced another new algorithm, xSTEGR, for finding all the eigenvalues and eigenvectors of a symmetric tridiagonal matrix. It is usually even faster than xSTEDC above, and we expect it ultimately to replace all other LAPACK algorithms for the symmetric eigenvalue problem and SVD. Here is a rough description of how it works; for details see [35,87,86,36].

It is easiest to think of xSTEGR as a variation on xSTEIN,
inverse iteration.
If all the eigenvalues were well separated, xSTEIN would
run in O(n^2) time.
But it is difficult for xSTEIN to compute accurate eigenvectors
belonging to close eigenvalues, those that have four or
more decimals in common with their neighbors. Indeed, xSTEIN
slows down because it reorthogonalizes the corresponding eigenvectors.
xSTEGR escapes this difficulty by exploiting the invariance of
eigenvectors under translation.
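
The translation invariance that xSTEGR exploits is elementary: T and T - sI have identical eigenvectors, while every eigenvalue is shifted by s. A minimal NumPy check, using the classic (-1, 2, -1) tridiagonal matrix whose eigenvalues are simple:

```python
import numpy as np

n, s = 6, 3.7
# Tridiagonal matrix with 2 on the diagonal, -1 on the off-diagonals.
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

w, V = np.linalg.eigh(T)
ws, Vs = np.linalg.eigh(T - s * np.eye(n))

print(np.allclose(ws, w - s))       # every eigenvalue shifts by s
# Eigenvectors agree up to sign (the eigenvalues here are simple).
print(np.allclose(np.abs(np.sum(V * Vs, axis=0)), 1.0))
```

Shifting therefore costs nothing in terms of eigenvector information, but it can turn a cluster of relatively close eigenvalues into one whose relative gaps are large.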

For each cluster C
of close eigenvalues the algorithm chooses
a shift s at one end of C,
or just outside, with the property
that the triangular factorization

    T - sI = L D L^T

determines the small shifted eigenvalues λ - s, λ in C, to high relative accuracy.

The next task is to compute an eigenvector for each λ - s.
For each λ in C
the algorithm computes, with care, an optimal
*twisted factorization*

    L D L^T - (λ - s) I = N_r D_r N_r^T,   γ_r = (D_r)_{rr},

obtained by implementing triangular factorization both from top down and bottom up and joining them at a well chosen index r. An approximate eigenvector z is then obtained by solving N_r^T z = e_r, where e_r is the r-th column of the identity; z satisfies

    (T - sI) z = γ_r e_r + ...,

where ... indicates smaller terms and γ_r is tiny when λ - s is accurate, so that

    |sin ∠(z, q)| ≤ |γ_r| / ( ||z|| gap(λ - s) ),

where q is the true eigenvector and gap(λ - s) = min |μ - λ|, the minimum taken over each μ that is λ's neighbor in the shifted spectrum. Given this nice bound the algorithm accepts z/||z|| as the computed eigenvector.
The procedure described above is repeated for any eigenvalues that
remain without eigenvectors. It can be shown that all the computed
z's are very close to eigenvectors of small relative perturbations of
one global Cholesky factorization

    G G^T = T - σI.
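
A bare-bones version of the twisted factorization can be written directly; this is a sketch of the idea only, without the careful differential qd transforms the real xSTEGR uses. It factors T - λI top down and bottom up, joins at the index r where the twist element γ_r is smallest, and grows the eigenvector outward from position r by two short recurrences.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 8
# Classic (-1, 2, -1) tridiagonal test matrix.
a = 2.0 * np.ones(n)            # diagonal
b = -1.0 * np.ones(n - 1)       # off-diagonal
lam = eigh_tridiagonal(a, b, eigvals_only=True)[3]  # one accurate eigenvalue

# Top-down factorization  T - lam*I = L D+ L^T.
dp = np.empty(n); dp[0] = a[0] - lam
for i in range(n - 1):
    dp[i + 1] = a[i + 1] - lam - b[i] ** 2 / dp[i]
# Bottom-up factorization  T - lam*I = U D- U^T.
dm = np.empty(n); dm[-1] = a[-1] - lam
for i in range(n - 2, -1, -1):
    dm[i] = a[i] - lam - b[i] ** 2 / dm[i + 1]

# Twist elements; r minimizes |gamma_r|, the best place to join.
gamma = dp + dm - (a - lam)
r = int(np.argmin(np.abs(gamma)))

# Solve N_r^T z = e_r: grow the eigenvector outward from position r.
z = np.zeros(n); z[r] = 1.0
for i in range(r - 1, -1, -1):          # upward, using L entries b_i/dp_i
    z[i] = -(b[i] / dp[i]) * z[i + 1]
for i in range(r, n - 1):               # downward, using U entries b_i/dm_{i+1}
    z[i + 1] = -(b[i] / dm[i + 1]) * z[i]
z /= np.linalg.norm(z)

T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.linalg.norm(T @ z - lam * z) < 1e-8)  # tiny residual, no iteration
```

Note that no inverse iteration and no reorthogonalization were needed: the residual is proportional to the twist element γ_r, which is tiny because λ is accurate.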

For computing the eigenvalues and eigenvectors of a Hessenberg
matrix -- or rather, for computing its Schur factorization -- yet another
flavor of block algorithm has been developed: a **multishift**
QR iteration
[9]. Whereas the traditional EISPACK routine
HQR uses a double shift (and the corresponding complex
routine COMQR
uses a single shift), the multishift algorithm uses block shifts of
higher order. It has been found that often the total number of operations decreases as the order of shift is increased until a minimum is reached, typically between 4 and 8; for higher orders the number of operations increases quite rapidly.
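
The phase structure for the nonsymmetric problem can be illustrated with SciPy (again only the phases, not the multishift internals): scipy.linalg.hessenberg performs the orthogonal reduction, and scipy.linalg.schur runs QR iteration on the result to produce the real Schur form.

```python
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 7))        # general nonsymmetric matrix

# Phase 1: orthogonal reduction to upper Hessenberg form, A = Q H Q^T.
H, Q = hessenberg(A, calc_q=True)
print(np.allclose(np.tril(H, -2), 0))  # zero below the first subdiagonal
print(np.allclose(Q @ H @ Q.T, A))

# Phase 2: QR iteration on H yields the real Schur form H = Z S Z^T,
# with S quasi-triangular (2x2 blocks for complex conjugate pairs).
S, Z = schur(H)
print(np.allclose(Z @ S @ Z.T, H))
```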

Finally, we note that research into block algorithms for symmetric and nonsymmetric eigenproblems continues [11,68], and future versions of LAPACK will be updated to contain the best algorithms available.