
## Symmetric Eigenproblems

Let A be a real symmetric or complex Hermitian n-by-n matrix. A scalar $\lambda$ is called an eigenvalue and a nonzero column vector z the corresponding eigenvector if $Az = \lambda z$. $\lambda$ is always real when A is real symmetric or complex Hermitian.

The basic task of the symmetric eigenproblem routines is to compute values of $\lambda$ and, optionally, corresponding vectors z for a given matrix A.

This computation proceeds in the following stages:

1. The real symmetric or complex Hermitian matrix A is reduced to real tridiagonal form T. If A is real symmetric, this decomposition is $A = Q T Q^T$ with Q orthogonal and T symmetric tridiagonal. If A is complex Hermitian, the decomposition is $A = Q T Q^H$ with Q unitary and T, as before, real symmetric tridiagonal.
2. Eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T are computed. If all eigenvalues and eigenvectors are computed, this is equivalent to factorizing T as $T = S \Lambda S^T$, where S is orthogonal and $\Lambda$ is diagonal. The diagonal entries of $\Lambda$ are the eigenvalues of T, which are also the eigenvalues of A, and the columns of S are the eigenvectors of T; the eigenvectors of A are the columns of $Z = QS$, so that $A = Z \Lambda Z^T$ ($A = Z \Lambda Z^H$ when A is complex Hermitian).
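The two stages above can be sketched serially in NumPy/SciPy for a real symmetric matrix. This is an illustration of the mathematics, not the ScaLAPACK routines themselves; `scipy.linalg.hessenberg` is used here because the Hessenberg form of a symmetric matrix is tridiagonal:

```python
import numpy as np
from scipy.linalg import hessenberg, eigh_tridiagonal

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2  # real symmetric test matrix

# Stage 1: reduce A to tridiagonal form, A = Q T Q^T with Q orthogonal.
T, Q = hessenberg(A, calc_q=True)

# Stage 2: eigendecomposition of the symmetric tridiagonal T = S diag(lam) S^T,
# built from its diagonal and subdiagonal entries.
lam, S = eigh_tridiagonal(np.diag(T), np.diag(T, -1))

# The eigenvalues of T are the eigenvalues of A, and the
# eigenvectors of A are the columns of Z = Q S.
Z = Q @ S
assert np.allclose(A @ Z, Z * lam)  # A z_i = lam_i z_i for each column z_i
```

The orthogonal similarity transformations in both stages preserve the spectrum, which is why the eigenvalues of the small tridiagonal problem are exactly those of A.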

The solution of the symmetric eigenproblem implemented in PDSYEVX consists of three phases: (1) reduce the original matrix A to tridiagonal form $A = Q T Q^T$, where Q is orthogonal and T is tridiagonal; (2) find the eigenvalues $\Lambda$ and eigenvectors U of T, so that $T = U \Lambda U^T$; and (3) form the eigenvector matrix V of A, so that $V = QU$. Phases 1 and 3 are analogous to their LAPACK counterparts. However, our current design for phase 2 differs from the serial (or shared-memory) design. We have chosen to do bisection followed by inverse iteration (like the LAPACK expert driver DSYEVX), but with the reorthogonalization phase of inverse iteration limited to the eigenvectors stored in a single process. A straightforward parallelization of DSYEVX would have led to a serial bottleneck and significant slowdowns in the rare situation of matrices with eigenvalues tightly clustered together. The current design guarantees that phase (2) is inexpensive compared to the other phases once problems are reasonably large. An alternative algorithm, which eliminates the need for reorthogonalization, is currently being developed by Dhillon and Parlett using Fernando's double factorization method, and we expect to use this new algorithm in the near future. This new routine should guarantee high accuracy and high speed independent of the eigenvalue distribution.
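A minimal serial sketch of phase (2) — eigenvalues via a bisection-style tridiagonal solver, then one eigenvector via inverse iteration — assuming NumPy/SciPy. The tridiagonal matrix T here is a made-up example, and this illustrates only the numerical idea; the actual ScaLAPACK code distributes the eigenvectors across processes and limits reorthogonalization accordingly:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, solve

# Hypothetical small symmetric tridiagonal T, given by its
# diagonal d and off-diagonal e (well-separated eigenvalues).
n = 6
d = np.arange(1.0, n + 1)
e = 0.5 * np.ones(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Eigenvalues only, from the tridiagonal data (LAPACK computes
# these by bisection in routines such as DSTEBZ).
lam = eigh_tridiagonal(d, e, eigvals_only=True)

# Inverse iteration for the eigenvector belonging to lam[0]:
# repeatedly solve (T - mu I) v_new = v and normalize. A tiny shift
# keeps the nearly singular system solvable; its ill-conditioning is
# exactly what drives v toward the desired eigenvector.
mu = lam[0] + 1e-10
v = np.ones(n)
for _ in range(3):
    v = solve(T - mu * np.eye(n), v)
    v /= np.linalg.norm(v)

assert np.allclose(T @ v, lam[0] * v, atol=1e-6)
```

With well-separated eigenvalues, as here, inverse iteration converges in a few steps and no reorthogonalization is needed; tightly clustered eigenvalues are what force the reorthogonalization that PDSYEVX confines to a single process.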

Susan Blackford
Thu Jul 25 15:38:00 EDT 1996