Algorithm

The Jacobi-Davidson algorithm is given in Algorithm 8.1. This algorithm attempts to compute generalized Schur pairs $(\langle\alpha,\beta\rangle, q)$ for which the ratio $\alpha/\beta$ is closest to a specified target value $\tau$ in the complex plane. The algorithm includes restarts in order to limit the dimension of the search space, and deflation with already converged left and right Schur vectors.

To apply this algorithm we need to specify a starting vector $v_0$, a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value $m_{\max}$ specifies the maximum dimension of the search subspace. If it is exceeded, then a restart takes place with a subspace of dimension $m_{\min}$.

On completion, the $k_{\max}$ generalized eigenvalues close to $\tau$ are delivered, together with the corresponding reduced Schur form $AQ = Z S^A$, $BQ = Z S^B$, where $Q$ and $Z$ are $n$ by $k_{\max}$ orthogonal matrices and $S^A$, $S^B$ are $k_{\max}$ by $k_{\max}$ upper triangular matrices. The generalized eigenvalues $\langle\alpha_i,\beta_i\rangle$ are the pairs of diagonal entries of $S^A$ and $S^B$. The computed form satisfies $\|A q_j - Z S^A e_j\|_2 = O(\epsilon)$ and $\|B q_j - Z S^B e_j\|_2 = O(\epsilon)$, where $q_j$ is the $j$th column of $Q$.

The accuracy of the computed reduced Schur form depends on the distance between the target value $\tau$ and the eigenvalue $\langle\alpha_j,\beta_j\rangle$. If we neglect terms of order machine precision and of order $\epsilon^2$, then the residuals of the computed Schur pairs are bounded by $c^A_j\,\epsilon$ and $c^B_j\,\epsilon$, where the constants $c^A_j$ and $c^B_j$ depend on $\tau$ and $\langle\alpha_j,\beta_j\rangle$.

With the scalars $\nu_0$ and $\mu_0$ chosen as in step (1) of the algorithm, these constants can be large if $\tau$ is close to $\langle\alpha_j,\beta_j\rangle$. In practice an accuracy of order $\epsilon$ is achieved even when $\tau$ is close to detected eigenvalues. The $\epsilon$-accuracy can be guaranteed when an additional refinement step is performed with suitably adapted values for $\nu_0$ and $\mu_0$.

We will now explain the successive main phases of the algorithm.

**(1)**- The initialization phase.
The choice for the scalars $\nu_0$ and $\mu_0$ is particularly effective
if $\tau$ is in the interior of the spectrum.
This choice causes a breakdown if $\tau$ is an eigenvalue (which can
easily be tested).
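As a sketch, assuming the harmonic choice $\nu_0 = 1/\sqrt{1+|\tau|^2}$, $\mu_0 = -\tau\nu_0$ (the exact convention of Algorithm 8.1 may differ in scaling), the initial test vector $w = \nu_0 Av + \mu_0 Bv$ is a multiple of $(A-\tau B)v$, which makes the breakdown for an eigenvalue $\tau$ visible:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
v = rng.standard_normal(n)

tau = 0.3
nu0 = 1.0 / np.sqrt(1.0 + abs(tau) ** 2)   # assumed harmonic scaling
mu0 = -tau * nu0

# the test vector is a multiple of (A - tau*B) v ...
w = nu0 * (A @ v) + mu0 * (B @ v)
assert np.allclose(w, nu0 * ((A - tau * B) @ v))

# ... so it (numerically) vanishes when tau is an eigenvalue and v the
# corresponding eigenvector, computed densely here for illustration:
lam, X = eig(A, B)
tau_e, v_e = lam[0], X[:, 0]
w_e = (1.0 / np.sqrt(1.0 + abs(tau_e) ** 2)) * ((A - tau_e * B) @ v_e)
assert np.linalg.norm(w_e) < 1e-8
```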
**(4)-(5)**- The new vector $v$ is made orthogonal with respect
to the current search subspace by means of modified Gram-Schmidt.
Likewise, the vector $w$ is made orthogonal with
respect to the current test subspace $W$. The two orthogonalization
processes can be replaced, for improved numerical stability, by a
template as in Algorithm 4.14 (p. ).
We expand the matrices $V$, $W$, $V^A$, and $V^B$. Here $V$ denotes the matrix with the current basis vectors for the search subspace as its columns; the other matrices are defined analogously.
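The orthogonal expansion of the search basis can be sketched as follows (a plain modified Gram-Schmidt pass for the real case; Algorithm 4.14 adds iterative refinement for stability):

```python
import numpy as np

def mgs_expand(V, v):
    """Orthogonalize v against the orthonormal columns of V by modified
    Gram-Schmidt, normalize it, and append it as a new column.
    Assumes v is not (numerically) contained in span(V)."""
    v = np.array(v, dtype=float)
    for j in range(V.shape[1]):
        v = v - (V[:, j] @ v) * V[:, j]
    return np.column_stack([V, v / np.linalg.norm(v)])

rng = np.random.default_rng(2)
V = np.linalg.qr(rng.standard_normal((8, 3)))[0]   # current orthonormal basis
V = mgs_expand(V, rng.standard_normal(8))
assert np.allclose(V.T @ V, np.eye(4))             # basis stays orthonormal
```

The same routine, applied with $W$ in place of $V$, expands the test subspace.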

**(12)-(13)**- The last row and column of the projected matrices
$M^A = W^* A V$ and $M^B = W^* B V$
are computed.
Note that some of these scalars can also be computed from scalars that are already available, together with the orthogonalization constants of $w$ from step (10).
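The incremental computation can be illustrated as follows, assuming the projected matrices have the form $M^A = W^* A V$ (a reconstruction from context): when $V$ and $W$ each gain one column, only the new last row and column of the projected matrix must be formed, not the whole product.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 7, 3
A = rng.standard_normal((n, n))
V = np.linalg.qr(rng.standard_normal((n, m + 1)))[0]
W = np.linalg.qr(rng.standard_normal((n, m + 1)))[0]
V_old, v_new = V[:, :m], V[:, m]
W_old, w_new = W[:, :m], W[:, m]

M_old = W_old.T @ A @ V_old            # m x m block from earlier iterations
col = W_old.T @ (A @ v_new)            # new last column (first m entries)
row = (A.T @ w_new) @ V_old            # new last row (first m entries)
corner = w_new @ A @ v_new             # new diagonal entry

M_new = np.block([[M_old, col[:, None]],
                  [row[None, :], np.array([[corner]])]])
assert np.allclose(M_new, W.T @ A @ V)  # matches a full recomputation
```

The update costs two matrix-vector products with $A$ plus inner products, instead of a full $O(nm^2)$ recomputation.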

**(15)**- The QZ decomposition for the pair of $m$ by $m$ matrices $(M^A, M^B)$ can be computed with a suitable routine for dense matrix pencils from LAPACK.

We have chosen to compute the generalized Petrov pairs, which makes the algorithm suitable for computing interior generalized eigenvalues of $Ax = \lambda Bx$, for which $\lambda$ is close to a specified target value $\tau$.

For algorithms for reordering the generalized Schur form, see [448,449,171].
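SciPy exposes LAPACK's reordering of the generalized Schur form through `scipy.linalg.ordqz`. As a hedged sketch, the following moves all eigenvalues within a chosen radius of a target $\tau$ to the leading diagonal positions (the radius-based selection is an illustrative choice, not the book's criterion):

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
tau, radius = 0.5, 2.0

def near_tau(alpha, beta):
    # select (finite) eigenvalues alpha/beta within `radius` of tau
    with np.errstate(divide='ignore', invalid='ignore'):
        lam = alpha / beta
    return np.abs(lam - tau) < radius

SA, SB, alpha, beta, Q, Z = ordqz(A, B, sort=near_tau, output='complex')
assert np.allclose(Q @ SA @ Z.conj().T, A)      # decomposition still holds

lam = alpha / beta
k = np.count_nonzero(near_tau(alpha, beta))
assert np.all(np.abs(lam[:k] - tau) < radius)   # selected ones come first
```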

**(18)**- The stopping criterion is to accept a generalized eigenpair approximation
as soon as the norm of the residual (for the normalized right Schur
vector approximation) is below $\epsilon$. This means that we accept
inaccuracies of the order of $\epsilon$ in the computed generalized
eigenvalues, and inaccuracies (in angle) of the order of $\epsilon$ in the Schur vectors
(provided that the eigenvalue concerned is
simple and well separated from the others).
Detection of all wanted eigenvalues cannot be guaranteed; see note (13) for Algorithm 4.17 (p. ).
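The stopping test can be sketched as follows. For illustration, an exact eigenpair (computed densely) plays the role of the approximation, with $\langle\alpha,\beta\rangle$ normalized so that $|\alpha|^2 + |\beta|^2 = 1$ (an assumed scaling):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
eps = 1e-8

# for illustration, an exact generalized eigenpair is the "approximation"
lam, X = eig(A, B)
q = X[:, 0] / np.linalg.norm(X[:, 0])   # normalized right vector
s = np.sqrt(1.0 + abs(lam[0]) ** 2)
alpha, beta = lam[0] / s, 1.0 / s       # |alpha|^2 + |beta|^2 = 1

r = beta * (A @ q) - alpha * (B @ q)    # residual of the normalized pair
accepted = np.linalg.norm(r) <= eps     # the acceptance test of step (18)
assert accepted
```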

**(22)**- After acceptance of a Petrov pair, we continue the search for
the next pair, with the remaining Petrov vectors as the
basis for the initial search space.
**(30)**- We restart when the dimension of the search space for the
current eigenvector exceeds $m_{\max}$. The process is restarted with
the subspaces spanned by the $m_{\min}$ left and right Ritz vectors
corresponding to the generalized Ritz pairs closest to the target value
$\tau$.
**(36)**- We have collected the locked (computed) right Schur vectors in $Q$,
and this matrix is expanded with the current right Schur vector
approximation $\tilde q$. Likewise, the converged left Schur vectors have been
collected in $Z$, and this matrix is expanded with $\tilde z$.
This is done in order to obtain a more compact formulation; the
correction equation in step (37) of Algorithm 8.1 is equivalent
to the one in equation (8.13) for the deflated pair
in (8.15). The new correction $t$ has
to be orthogonal to the columns of $Q$ as well as to $\tilde q$.
Of course, the correction equation can be solved by any suitable process, for instance, a preconditioned Krylov subspace method designed for unsymmetric systems. However, because of the projections involved, we always need a preconditioner (which may be the identity operator if nothing else is available) that is deflated by the same skew projections, so that we obtain a mapping of the subspace orthogonal to the detected Schur vectors into itself. Because of the occurrence of these projections, one has to be careful with the use of preconditioners for the matrix $A - \tau B$. The inclusion of preconditioners can be done as in Algorithm 8.2. Make sure that the starting vector for the iterative solver satisfies the orthogonality constraints.

Note that significant savings per step can be made in Algorithm 8.2 if the preconditioner is kept the same for a few Jacobi-Davidson iterations. In that case columns of the preconditioned matrix can be saved from previous steps. Also the projected matrix can be updated from previous steps, as well as its decomposition.
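A minimal sketch of such a deflated solve, assuming the correction equation has the projected form $(I - ZZ^*)(\beta A - \alpha B)(I - QQ^*)\,t = -r$ with $t \perp Q$ (names and scaling here are illustrative), using unpreconditioned GMRES:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(6)
n, k = 20, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
Q = np.linalg.qr(rng.standard_normal((n, k)))[0]  # detected right Schur vectors
Z = np.linalg.qr(rng.standard_normal((n, k)))[0]  # detected left Schur vectors
alpha, beta = 0.7, 1.0                            # current eigenvalue estimate

def apply_op(t):
    """Deflated operator (I - Z Z^T)(beta*A - alpha*B)(I - Q Q^T)."""
    t = t - Q @ (Q.T @ t)
    y = beta * (A @ t) - alpha * (B @ t)
    return y - Z @ (Z.T @ y)

op = LinearOperator((n, n), matvec=apply_op)

r = rng.standard_normal(n)
r -= Z @ (Z.T @ r)                     # right-hand side in the projected space
t, info = gmres(op, -r, atol=1e-10)    # zero start vector: constraints hold
t -= Q @ (Q.T @ t)                     # enforce t orthogonal to columns of Q

assert info == 0
assert np.linalg.norm(apply_op(t) + r) <= 1e-4 * np.linalg.norm(r)
```

A real preconditioner would replace the identity here and would itself be wrapped in the same two projections, as in Algorithm 8.2.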

It is not necessary to solve the correction equation very accurately. A strategy often used for inexact Newton methods [113] also works well here: increase the accuracy with the Jacobi-Davidson iteration step; for instance, solve the correction equation with a residual reduction of $2^{-j}$ in the $j$th Jacobi-Davidson iteration ($j$ is reset to 0 when a Schur vector is detected).

In particular, in the first few initial steps, the approximate eigenvalue may be very inaccurate, and then it does not make sense to solve the correction equation accurately. In this stage it can be more effective to temporarily replace the approximate eigenvalue by the target value $\tau$ in the correction equation, or to simply take the residual vector for the expansion of the search subspace [335,172].

For the full theoretical background of this method, as well as details on the deflation technique with Schur vectors, see [172].