Next: Software Availability Up: Jacobi-Davidson Methods   G. Sleijpen Previous: An Algorithm Template.   Contents   Index

## Computing Interior Eigenvalues

If one is searching for the eigenpair with the smallest or largest eigenvalue only, then the obvious restart approach works quite well, but often it does not do very well if one is interested in an interior eigenvalue. The problem is that the Ritz values converge monotonically towards exterior eigenvalues, and a Ritz value that is close to a target value in the interior of the spectrum may be well on its way to some other exterior eigenvalue. It may even be the case that the corresponding Ritz vector has only a small component in the direction of the desired eigenvector. It will be clear that such a Ritz vector represents a poor candidate for restart and the question is, What is a better vector for restart? One answer is given by the so-called harmonic Ritz vectors, discussed in §3.2; see also [331,349,411].

As we have seen, the Jacobi-Davidson methods generate basis vectors $v_1, \ldots, v_m$ for a subspace $\mathcal{V}_m$. For the projection of $A$ onto this subspace we compute the vectors $w_j \equiv A v_j$. The harmonic Ritz values are inverses of the Ritz values of $A^{-1}$, with respect to the subspace spanned by the $w_j$. They can be computed without inverting $A$, since a harmonic Ritz pair $(\widetilde{\theta}_j^{(m)}, \widetilde{u}_j^{(m)})$ satisfies

$$A \widetilde{u}_j^{(m)} - \widetilde{\theta}_j^{(m)} \widetilde{u}_j^{(m)} \perp \mathcal{W}_m \equiv \operatorname{span}\{w_1, \ldots, w_m\} \tag{60}$$

for $\widetilde{u}_j^{(m)} \in \mathcal{V}_m$ and $\widetilde{u}_j^{(m)} \neq 0$. This implies that the harmonic Ritz values are the eigenvalues of the pencil $(W_m^\ast A V_m, W_m^\ast V_m)$, or, since $W_m = A V_m$:

$$W_m^\ast W_m \, s - \widetilde{\theta}_j^{(m)} W_m^\ast V_m \, s = 0, \qquad \widetilde{u}_j^{(m)} = V_m s.$$

For stability reasons we orthonormalize the columns of $W_m$ and transform the columns of $V_m$ accordingly. This also further simplifies the equation: we see that the harmonic Ritz values are the inverses of the eigenvalues of the Hermitian matrix $W_m^\ast V_m$ ($= W_m^\ast A^{-1} W_m$).
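Assuming the bases are available as explicit arrays, the construction above can be sketched in a few lines of NumPy (the helper name `harmonic_ritz_values` and the dense-matrix setting are ours, not the book's):

```python
import numpy as np

def harmonic_ritz_values(A, V):
    """Harmonic Ritz values of a Hermitian A with respect to span(V).

    As in the text: form W = A V, orthonormalize the columns of W by a
    QR factorization, transform the columns of V accordingly, and take
    the inverses of the eigenvalues of the Hermitian matrix M = W^* V.
    """
    W = A @ V
    Q, R = np.linalg.qr(W)            # W = Q R, Q has orthonormal columns
    Vt = V @ np.linalg.inv(R)         # apply the same transformation to V
    M = Q.conj().T @ Vt               # equals Q^* A^{-1} Q, Hermitian here
    M = 0.5 * (M + M.conj().T)        # symmetrize against rounding errors
    return 1.0 / np.linalg.eigvalsh(M)
```

If `V` happens to span the whole space, the harmonic Ritz values reduce to the eigenvalues of `A`, which gives a cheap sanity check for the sketch.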

In [349] it is shown that for Hermitian $A$ the harmonic Ritz values converge monotonically towards the smallest nonzero eigenvalues in absolute value. Note that the harmonic Ritz values are unable to identify a zero eigenvalue of $A$, since that would correspond to an infinite eigenvalue of $A^{-1}$. Likewise, the harmonic Ritz values for the shifted matrix $A - \tau I$ converge monotonically towards the eigenvalues closest to the target value $\tau$. Fortunately, the search subspaces for the shifted matrix and the unshifted matrix coincide, which facilitates the computation of harmonic Ritz pairs for any shift. The harmonic Ritz vector for the shifted matrix, corresponding to the harmonic Ritz value closest to $\tau$, can be interpreted as maximizing a Rayleigh quotient for $(A - \tau I)^{-1}$. It represents asymptotically the best information that is available for the wanted eigenvalue, and hence it represents asymptotically the best candidate as a starting vector after restart, provided that $\tau \neq \lambda$.

For harmonic Ritz values, the correction equation has to take into account the orthogonality with respect to $\mathcal{W}_m$, and this leads to skew projections. We can use orthogonal projections in the following way. If $\widetilde{u}$ is the selected approximation of an eigenvector, the Rayleigh quotient $\zeta \equiv \widetilde{u}^\ast A \widetilde{u}$ leads to the residual with smallest norm; that is, with $r \equiv A\widetilde{u} - \zeta\widetilde{u}$, we have that $\|r\| \leq \|A\widetilde{u} - \nu\widetilde{u}\|$ for any scalar $\nu$, including the harmonic Ritz value. Moreover, the residual $r$ for the Rayleigh quotient is orthogonal to $\widetilde{u}$. This makes $\widetilde{u}$ "compatible" with the operator $(I - \widetilde{u}\widetilde{u}^\ast)(A - \zeta I)(I - \widetilde{u}\widetilde{u}^\ast)$ in the correction equation. Here $\widetilde{u}$ is normalized, $\|\widetilde{u}\| = 1$.
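The two properties used here, minimal residual norm over all scalar shifts and orthogonality of the residual to the approximation, are easy to check numerically. A small sketch with a random Hermitian test matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.standard_normal((n, n))
A = B + B.T                        # Hermitian (real symmetric) test matrix
u = rng.standard_normal(n)
u /= np.linalg.norm(u)             # normalized eigenvector approximation

zeta = u @ A @ u                   # Rayleigh quotient (u has unit norm)
r = A @ u - zeta * u               # residual for the Rayleigh quotient

orth = abs(u @ r)                  # should vanish: r is orthogonal to u
# smallest residual norm over a grid of scalars nu around zeta
min_resid = min(np.linalg.norm(A @ u - nu * u)
                for nu in zeta + np.linspace(-1.0, 1.0, 101))
```

The grid contains $\nu = \zeta$ itself, so the minimum over the grid is attained exactly at the Rayleigh quotient.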

An algorithm for the Jacobi-Davidson method based on harmonic Ritz values and vectors, combined with restart and deflation, is given in Algorithm 4.19. The algorithm can be used for the computation of a number of successive eigenvalues immediately to the right of the target value $\tau$.

To apply this algorithm we need to specify a starting vector $v$, a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value $m_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded, a restart takes place with a subspace of specified dimension $m_{\min}$.

On completion, the $k_{\max}$ eigenvalues at the right side nearest to $\tau$ are delivered. The computed eigenpairs $(\widetilde{\lambda}_j, \widetilde{x}_j)$, $\|\widetilde{x}_j\| = 1$, satisfy $\|A\widetilde{x}_j - \widetilde{\lambda}_j\widetilde{x}_j\| \leq j\epsilon$, where $\widetilde{x}_j$ denotes the $j$th column of $\widetilde{X}$.

For exterior eigenvalues a simpler algorithm has been described in §4.7.3. We will now comment on some parts of the algorithm in view of our discussions in previous subsections.

(1)
Initialization phase.

(3)-(7)
The vector $w \equiv (A - \tau I)v$ is made orthogonal with respect to the current test subspace $\mathcal{W}$ by means of modified Gram-Schmidt. This can be replaced, for improved numerical stability, by an adaptation (for the vector $w$) of the template given in Algorithm 4.14.
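A minimal sketch of this expansion step (helper name ours; the template of Algorithm 4.14 adds iterative refinement that we omit here). Each Gram-Schmidt update on $w$ is mirrored on $v$, so that the relation $W = (A - \tau I)V$ between the two bases is preserved:

```python
import numpy as np

def expand_bases(V, W, v, A, tau):
    """One expansion step (sketch): w = (A - tau*I) v is orthonormalized
    against the test basis W by modified Gram-Schmidt, and every update
    is mirrored on v so that the relation W = (A - tau*I) V is preserved."""
    w = A @ v - tau * v
    for i in range(W.shape[1]):
        h = np.vdot(W[:, i], w)        # MGS coefficient against column w_i
        w = w - h * W[:, i]
        v = v - h * V[:, i]            # apply the same combination to v
    nrm = np.linalg.norm(w)
    return np.column_stack([V, v / nrm]), np.column_stack([W, w / nrm])
```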

(8)-(10)
The values $M_{i,j}$ represent elements of the square $m$ by $m$ matrix $M \equiv W^\ast V$, where $V$ denotes the $n$ by $m$ matrix with columns $v_j$, and likewise $W$. Because $M$ is Hermitian, only the upper triangular part of this matrix is computed.
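Since only the last row and column of $M$ change when the bases grow by one vector, the update can be sketched as follows (helper name ours, not the book's template; for real data the conjugations are no-ops):

```python
import numpy as np

def extend_M(M, W, V, v_new, w_new):
    """Grow M = W^* V by one row and column for a new basis pair.

    Only the new last column (the upper triangular part) is computed
    with inner products; the last row follows from M being Hermitian.
    """
    m = M.shape[0]
    Mn = np.zeros((m + 1, m + 1), dtype=M.dtype)
    Mn[:m, :m] = M
    Mn[:m, m] = W.conj().T @ v_new     # new entries M[i, m] = w_i^* v_new
    Mn[m, m] = np.vdot(w_new, v_new)   # new diagonal entry
    Mn[m, :m] = Mn[:m, m].conj()       # fill the last row by symmetry
    return Mn
```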

(11)-(13)
At this point the eigenpairs $(\theta, s)$ for the problem $M s = \theta s$ should be computed. This can be done with a suitable routine for Hermitian dense matrices from LAPACK. Note that the harmonic Ritz values are the inverses of the eigenvalues $\theta$ of $M$. We have to compute the Rayleigh quotient $\zeta$ for $\widetilde{u} \equiv V s$, and next normalize $\widetilde{u}$, in order to compute a proper residual $r$. We have used that $(A - \tau I)V = W$.

The vectors $s_j$ are the columns of the $m$ by $m$ matrix $S$, and $M s_j = \theta_j s_j$.
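Steps (11)-(13) can be sketched as follows, assuming the projected problem is solved densely with `numpy.linalg.eigh` (a LAPACK Hermitian driver underneath); the matrix sizes and the target value are our own choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, tau = 20, 5, 0.5
B = rng.standard_normal((n, n))
A = B + B.T                                       # Hermitian test matrix

V = rng.standard_normal((n, m))                   # search basis, not yet transformed
W, R = np.linalg.qr((A - tau * np.eye(n)) @ V)    # orthonormalize W = (A - tau I) V ...
V = V @ np.linalg.inv(R)                          # ... and transform V accordingly

M = W.T @ V                                       # projected matrix, Hermitian
theta, S = np.linalg.eigh(0.5 * (M + M.T))        # eigenpairs of M (dense solver)
harmonic = tau + 1.0 / theta                      # harmonic Ritz values of A
j = int(np.argmin(np.abs(harmonic - tau)))        # pair with value nearest the target

u = V @ S[:, j]
u /= np.linalg.norm(u)                            # normalize the approximation
zeta = u @ (A @ u)                                # Rayleigh quotient for A
r = A @ u - zeta * u                              # minimal-norm residual
```

As in the text, the residual is built from the Rayleigh quotient rather than from the harmonic Ritz value, because that choice minimizes the residual norm and makes $r \perp u$.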

(14)
The stopping criterion is to accept an eigenvector approximation as soon as the norm of the residual (for the normalized eigenvector approximation) is below $\epsilon$. This means that we accept inaccuracies in the order of $\epsilon$ in the computed eigenvalues, and inaccuracies (in angle) of $\mathcal{O}(\epsilon)$ in the eigenvectors, provided that the associated eigenvalue is simple and well separated from the others; see (4.4).

Detection of all wanted eigenvalues cannot be guaranteed; see note (14) for Algorithm 4.13 and note (13) for Algorithm 4.17.

(17)
This is a restart after acceptance of an approximate eigenpair. The restart is slightly more complicated since two subspaces are involved. Recomputation of the spanning vectors from these subspaces is done in (18)-(21).

(24)
At this point we have a restart when the dimension of the subspace exceeds $m_{\max}$. After a restart, the Jacobi-Davidson iterations are resumed with a subspace of dimension $m_{\min}$. The selection of this subspace is based on the $m_{\min}$ harmonic Ritz values nearest to the target $\tau$.

(31)-(32)
The deflation with computed eigenvectors is represented by the factors $(I - \widetilde{X}\widetilde{X}^\ast)$, with $\widetilde{X}^\ast \widetilde{X} = I$. The matrix $\widetilde{X}$ has the computed eigenvectors as its columns. If a left preconditioner $K$ is available for the operator $A - \theta I$, then with a Krylov solver similar reductions are realizable, as in the situation for exterior eigenvalues. A template for the efficient handling of the left-preconditioned operator is given in Algorithm 4.18.

Susan Blackford 2000-11-20