For restart purposes, especially if one is heading for interior eigenvalues, the harmonic Ritz vectors have been advocated for symmetric matrices [331]; see also §4.7.4. The concept of harmonic Ritz values [349] is easily generalized to unsymmetric matrices [411]. As we have seen, the Jacobi-Davidson methods generate basis vectors $v_1, \ldots, v_m$ for a subspace $\mathcal{V}_m$. For the projection of $A$ onto this subspace we have to compute the vectors $w_j \equiv A v_j$. The inverses of the Ritz values of $A^{-1}$, with respect to the subspace spanned by the $w_j$, are called the harmonic Ritz values. The harmonic Ritz values can be computed without inverting $A$, since a harmonic Ritz pair $(\theta_j^{(m)}, \widetilde{u}_j^{(m)})$ satisfies

$\widetilde{u}_j^{(m)} \in \mathcal{W}_m \equiv \operatorname{span}\{w_1, \ldots, w_m\}$

and

$A^{-1}\widetilde{u}_j^{(m)} - \bigl(\theta_j^{(m)}\bigr)^{-1}\widetilde{u}_j^{(m)} \perp \mathcal{W}_m .$

This implies that the harmonic Ritz values are the eigenvalues of the pencil $(W_m^* A V_m, W_m^* V_m)$, where $V_m$ (resp. $W_m$) denotes the matrix with columns $v_1, \ldots, v_m$ (resp. $w_1, \ldots, w_m$):

$W_m^* A V_m\, s - \theta\, W_m^* V_m\, s = 0 .$
 
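As an illustration of this pencil formulation (not part of the original text; the matrix sizes and the use of NumPy/SciPy are assumptions), the following sketch computes harmonic Ritz values from the pencil $(W^* A V, W^* V)$ and cross-checks them against the inverses of the Ritz values of $A^{-1}$:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, m = 50, 5
A = rng.standard_normal((n, n))

# Orthonormal basis V of an m-dimensional search subspace.
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
W = A @ V                      # the vectors w_j = A v_j span the test subspace

# Harmonic Ritz values: eigenvalues of the pencil (W^* A V, W^* V).
theta = eig(W.conj().T @ A @ V, W.conj().T @ V, right=False)

# Cross-check: they are the inverses of the Ritz values of A^{-1}
# with respect to span{w_j} (no inversion is needed in practice).
mu = eig(W.conj().T @ np.linalg.solve(A, W), W.conj().T @ W, right=False)
theta_check = 1.0 / mu
```

Note that $W^* A V = W^* W$ here, so the left matrix of the pencil is Hermitian positive semidefinite; the cross-check solves with $A$ only for demonstration purposes.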
The exterior standard Ritz values usually converge to exterior eigenvalues of $A$. Likewise, the interior harmonic Ritz values for the shifted matrix $A - \tau I$ usually converge towards the eigenvalues closest to the shift $\tau$. Fortunately, the search subspaces $\mathcal{V}_m$ for the shifted matrix and the unshifted matrix coincide, which facilitates the computation of harmonic Ritz pairs. For reasons of stability we construct both $V_m$ and $W_m$ to be orthonormal: $W_m$ is such that

$W_m R_m = (A - \tau I) V_m ,$

where $R_m$ is $m$ by $m$ upper triangular.
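A minimal sketch of this construction (an illustration under the same assumptions as above, not the book's template): $W_m$ and $R_m$ come from a QR factorization of $(A - \tau I)V_m$, and the harmonic Ritz values of the shifted matrix follow from the pencil $(W_m^*(A-\tau I)V_m,\ W_m^* V_m) = (R_m,\ W_m^* V_m)$:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n, m, tau = 60, 6, 0.5
A = rng.standard_normal((n, n))

V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # orthonormal search basis
W, R = np.linalg.qr((A - tau * np.eye(n)) @ V)    # W R = (A - tau I) V

# Harmonic Ritz values of A - tau I; note that W^*(A - tau I)V = R.
theta = eig(R, W.conj().T @ V, right=False)

# Approximations to eigenvalues of A near the target tau:
harmonic_near_tau = tau + theta
```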
In the resulting scheme we compute a (partial) Schur decomposition rather than a partial eigendecomposition. That is, we wish to compute vectors $q_1, \ldots, q_k$, such that

$A Q_k = Q_k R_k , \quad \text{with} \quad Q_k^* Q_k = I_k ,$

and $R_k$ is a $k$ by $k$ upper triangular matrix. The diagonal elements of $R_k$ represent eigenvalues of $A$, and the corresponding eigenvectors of $A$ can be computed from $Q_k$ and $R_k$.
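To make the last remark concrete (an illustrative sketch, not from the text: here the partial Schur pair is taken from a full complex Schur form rather than from the Jacobi-Davidson iteration), each eigenvalue in the partial form is a diagonal entry of $R_k$, and an eigenvector of $A$ is $Q_k y$ with $y$ an eigenvector of the small matrix $R_k$:

```python
import numpy as np
from scipy.linalg import schur, eig

rng = np.random.default_rng(2)
n, k = 8, 3
A = rng.standard_normal((n, n))

# Partial Schur form A Q_k = Q_k R_k (leading k columns of a full form).
T, Z = schur(A, output='complex')
Qk, Rk = Z[:, :k], T[:k, :k]

# Eigenvalues are diag(R_k); eigenvectors of A are Q_k y with R_k y = lam y.
lams, Y = eig(Rk)
X = Qk @ Y                      # columns approximate eigenvectors of A
```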
An algorithm for Jacobi-Davidson, with partial reduction to Schur form and based on harmonic Ritz values and vectors, is given in Algorithm 7.19. The algorithm includes restart and deflation techniques. It can be used for the computation of a number of eigenvalues close to a target value $\tau$.
To apply this algorithm we need to specify a starting vector $v$, a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value of $m_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded, then a restart takes place with a subspace of specified dimension $m_{\min}$.
On completion the $k_{\max}$ eigenvalues close to $\tau$ are delivered, together with the corresponding reduced Schur form $AQ = QR$, where $Q$ is $n$ by $k_{\max}$ orthogonal and $R$ is $k_{\max}$ by $k_{\max}$ upper triangular. The eigenvalues are on the diagonal of $R$. The computed Schur form satisfies

$\| A q_j - Q R e_j \|_2 \le j \epsilon ,$

where $q_j$ is the $j$th column of $Q$.
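This bound can be checked column by column; a small sketch (illustrative assumptions: an exact Schur pair stands in for the algorithm's computed one, so the residuals are at rounding level):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
n, k, eps = 30, 4, 1e-8
A = rng.standard_normal((n, n))

T, Z = schur(A, output='complex')
Q, R = Z[:, :k], T[:k, :k]          # stand-in for the computed Schur form

# Column-wise residual norms ||A q_j - Q R e_j||_2, j = 1, ..., k.
res = np.linalg.norm(A @ Q - Q @ R, axis=0)
accepted = all(res[j] <= (j + 1) * eps for j in range(k))
```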
We will comment on some parts of the algorithm in view of our discussions in previous subsections.
Initialization phase.
The newly computed correction is made orthogonal with respect to the current search subspace by means of modified Gram-Schmidt. We have chosen to store also the matrix $W \equiv (A - \tau I)V$; $w$ is the expansion vector for this matrix. The expansion vector for $W$ is obtained by making $(A - \tau I)v$ orthogonal with respect to the space of detected Schur vectors and with respect to the current test subspace by means of modified Gram-Schmidt. The Gram-Schmidt steps can be replaced, for improved numerical stability, by the template given in Algorithm 4.14.
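Orthogonalization of an expansion vector against the current basis by modified Gram-Schmidt can be sketched as follows (an illustration with the hypothetical helper name mgs_expand; the book's more stable alternative is its Algorithm 4.14):

```python
import numpy as np

def mgs_expand(V, t, eps=1e-12):
    """Orthonormalize t against the orthonormal columns of V by modified
    Gram-Schmidt and return the normalized expansion vector."""
    v = t.astype(complex)
    for j in range(V.shape[1]):
        v -= np.vdot(V[:, j], v) * V[:, j]   # subtract projections one by one
    nrm = np.linalg.norm(v)
    if nrm < eps:
        raise ValueError("expansion vector lies in the current subspace")
    return v / nrm

rng = np.random.default_rng(9)
n, m = 20, 4
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
V = V.astype(complex)
v_new = mgs_expand(V, rng.standard_normal(n))
```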
$V$ denotes the $n$ by $m$ matrix with columns $v_1, \ldots, v_m$; likewise $W$ and $Q$. The values $M_{i,j}$ represent elements of the square $m$ by $m$ matrix $M \equiv W^* V$. The values $M^A_{i,j}$ represent elements of the $m$ by $m$ upper triangular matrix $M^A \equiv W^* (A - \tau I) V$.
(Note that $M^A = R$, if $W$ and $R$ are constructed as above, since $M^A = W^*(A - \tau I)V = W^* W R = R$. Therefore, $(A - \tau I)V$ can be reconstructed from $W$, $M^A$, $V$, and $\tau$. In particular, $AV$ can be computed from these quantities. Instead of storing the $n$ by $m$ matrix $AV$, it suffices to store the $m$ by $m$ matrix $M^A$ (of elements $M^A_{i,j}$, computed in (3)-(4)). This approach saves memory space. However, to avoid instabilities, the deflation procedure needs special attention.)
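A small numerical check of these identities (illustrative; same assumed setup as before):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, tau = 40, 5, 0.3
A = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
W, R = np.linalg.qr((A - tau * np.eye(n)) @ V)   # W R = (A - tau I) V

M = W.conj().T @ V                               # interaction matrix M
MA = W.conj().T @ (A - tau * np.eye(n)) @ V      # interaction matrix M^A

# M^A equals the triangular QR factor R, and A V can be rebuilt from
# W, M^A, V, and tau without storing the n-by-m product A V itself.
AV = W @ MA + tau * V
```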
At this point the QZ decomposition (generalized Schur form) of the matrix pencil $(M^A, M)$ has to be computed:

$M^A U^R = U^L S^A \quad \text{and} \quad M U^R = U^L S ,$

where $U^R$ and $U^L$ are unitary and $S^A$ and $S$ are upper triangular. This can be done with a suitable routine for dense matrix pencils from LAPACK.
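For instance (an illustration; the names MA and M and the use of SciPy's LAPACK-backed qz routine are assumptions), the complex QZ form of a small pencil:

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(5)
m = 5
MA = rng.standard_normal((m, m))
M = rng.standard_normal((m, m))

# Complex generalized Schur form: MA = UL SA UR^*, M = UL S UR^*,
# i.e., MA UR = UL SA and M UR = UL S, with SA and S upper triangular.
SA, S, UL, UR = qz(MA, M, output='complex')

# The generalized eigenvalues of the pencil (the harmonic Ritz values
# in the algorithm) are the ratios of the diagonal entries.
theta = np.diag(SA) / np.diag(S)
```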
In each step, the generalized Schur form has to be sorted such that $|S^A_{1,1}/S_{1,1}|$ is smallest. Only if $m \ge m_{\max}$ does the sorting of the Schur form have to be such that all of the $m_{\min}$ leading diagonal elements of $S^A$ and $S$ represent the harmonic Ritz values closest to the target. For ease of presentation we sorted all diagonal elements here. For an algorithm for the sorting of a generalized Schur form, see [33,448,449] and [171, Chap. 6B].
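SciPy's ordqz can perform such a reordering; an illustrative sketch (not the references' algorithm) that moves the eigenvalues of smallest modulus, i.e., those closest to the shifted target 0, to the leading positions:

```python
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(6)
m = 6
MA = rng.standard_normal((m, m))
M = rng.standard_normal((m, m))

def smallest_first(alpha, beta):
    # Select the eigenvalues alpha/beta of smallest modulus; the selected
    # ones are moved to the leading positions of the Schur form.
    vals = np.abs(alpha / beta)
    return vals <= np.sort(vals)[m // 2 - 1]

SA, S, alpha, beta, UL, UR = ordqz(MA, M, sort=smallest_first,
                                   output='complex')
theta = np.diag(SA) / np.diag(S)    # sorted generalized eigenvalues
```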
The matrix $U^R$ is $m$ by $m$ with columns $u^R_j$; likewise $U^L$.
The value of $\theta$ needs some attention. We have chosen to compute the Rayleigh quotient $\theta$ (instead of the harmonic Ritz value) corresponding to the harmonic Ritz vector $u$ (see [412]). The Rayleigh quotient follows from the requirement that

$Au - \theta u \perp u$

instead of $Au - \theta u \perp \mathcal{W}_m$; then $\theta = u^* A u / (u^* u)$. $\bar{\theta}$ denotes the complex conjugate of $\theta$.
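The Rayleigh quotient and its orthogonality property, in a two-line sketch (illustrative; dense NumPy arithmetic is assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
A = rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # a (harmonic Ritz) vector

theta = np.vdot(u, A @ u) / np.vdot(u, u)   # Rayleigh quotient u^* A u / u^* u
r = A @ u - theta * u                       # this residual is orthogonal to u
```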
The stopping criterion is to accept a Schur vector approximation as soon as the norm of the residual (for the normalized Schur vector approximation) is below $\epsilon$. This means that we accept inaccuracies in the order of $\epsilon$ in the computed eigenvalues, and inaccuracies (in angle) in the eigenvectors of $A$ (provided that the concerned eigenvalue is simple and well separated from the others and the left and right eigenvectors have a small angle).
Detection of all wanted eigenvalues cannot be guaranteed; see note (13) for Algorithm 4.17.
This is a restart after acceptance of an approximate eigenpair.
At this point we have a restart when the dimension of the subspace exceeds $m_{\max}$. After a restart the Jacobi-Davidson iterations are resumed with a subspace of dimension $m_{\min}$.
The deflation with computed eigenvectors is represented by the factors $I - QQ^*$. The matrix $Q$ has the converged eigenvectors as its columns. If a left preconditioner $K$ is available for the operator $A - \theta I$, then with a Krylov solver similar reductions are realizable as in the situation for exterior eigenvalues. A template for the efficient handling of the left-preconditioned operator is given in Algorithm 4.18.
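The deflated operator appearing in the correction equation can be sketched as follows (illustrative; the function name and the dense matrix-vector products are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 40, 3
A = rng.standard_normal((n, n))
theta = 0.7
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # converged Schur vectors

def deflated_op(x):
    """Apply (I - Q Q^*)(A - theta I)(I - Q Q^*) to x."""
    y = x - Q @ (Q.conj().T @ x)     # project out the detected subspace
    y = A @ y - theta * y            # shifted operator
    return y - Q @ (Q.conj().T @ y)  # project again

v = rng.standard_normal(n)
w = deflated_op(v)
```

In a Krylov solver, only this operator (and, if available, a similarly projected preconditioner $K$) needs to be applied per iteration.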