 
 
 
 
 
 
 
 
 
 
We include a numerical example for testing purposes, so that potential users of the Jacobi-Davidson algorithms can verify and compare their results.
The symmetric matrix $A$ is of dimension $n = 1000$. The diagonal entries
are $a_{i,i} = i$, the codiagonal entries are
$a_{i,i+1} = a_{i+1,i} = 0.5$, and furthermore,
$a_{1,1000} = a_{1000,1} = 0.5$.
All other entries are zero. This example has been
taken from [88] and is discussed, in the context of the
Jacobi-Davidson algorithm, in [411, p. 410].
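For verification purposes, the matrix and a set of reference eigenvalues can be generated with an off-the-shelf sparse eigensolver. The sketch below (in Python with SciPy; the entries $a_{i,i} = i$, codiagonals $0.5$, and corner entries $0.5$ follow the description above) computes the ten largest eigenvalues with the Lanczos-based routine `eigsh`:

```python
# Build the test matrix described above and compute reference values
# for the ten largest eigenvalues with SciPy's Lanczos-based eigsh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 1000
diag = np.arange(1, n + 1, dtype=float)   # diagonal entries a_{i,i} = i
off = 0.5 * np.ones(n - 1)                # codiagonal entries 0.5
A = sp.diags([off, diag, off], [-1, 0, 1], format="lil")
A[0, n - 1] = 0.5                         # corner entries a_{1,1000} = a_{1000,1}
A[n - 1, 0] = 0.5
A = A.tocsr()

# Ten largest eigenvalues, for comparison with Jacobi-Davidson output.
vals = eigsh(A, k=10, which="LA", return_eigenvectors=False)
```

A Jacobi-Davidson code applied to this matrix should reproduce these eigenvalues to within the chosen tolerance.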
We use Algorithm 4.17 for the computation of the $k_{\max} = 10$ largest eigenvalues. The input parameters have been chosen as follows.
The starting vector is $v_0 = (0.01, 0.01, \ldots, 0.01, 1)^T$. The tolerance is
$\epsilon = 10^{-8}$. The subspace dimension parameters are
$m_{\min} = 10$ and $m_{\max} = 15$, and the target value is $\tau = 1000$.
We show graphically the norm of the residual vector as a function of
the iteration number in Figure 4.5. Every time the norm
is less than $\epsilon$, we have determined an eigenvalue within
this precision, and the iteration is continued with deflation for the
next eigenvalue. The four pictures represent, lexicographically, the
following different situations:
- The approximate solution $t$ of the correction equation, in item (32) of
Algorithm 4.17, is simply taken as $t = -r$. In exact arithmetic,
this should deliver the same Ritz values as the Arnoldi algorithm
(assuming for Arnoldi a similar restart strategy as in
Algorithm 4.17).
- The correction equation is solved with one application of the
preconditioner only, as in Algorithm 4.18, without further
subspace acceleration; that is, we stop after step (d).
This is equivalent to the method currently in use among
quantum chemists [344].
- The correction equation is solved approximately with 5 steps of
GMRES (without preconditioning).
- The correction equation is solved with preconditioning, as in
Algorithm 4.18, and 5 steps of GMRES
(note that it would have been more efficient to use MINRES, but this
requires two-sided preconditioning, for which we did not supply the
algorithm).
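To make the role of the correction equation concrete, here is a schematic Jacobi-Davidson iteration for the largest eigenvalue of a symmetric matrix. This is our own illustrative sketch, not the book's template: the name `jd_largest` is ours, the correction equation in item (32) is solved exactly by a minimum-norm least-squares solve rather than by a few GMRES or MINRES steps, and restarts and deflation are omitted.

```python
import numpy as np

def jd_largest(A, v0, tol=1e-8, maxit=30):
    """Schematic Jacobi-Davidson iteration for the largest eigenvalue
    of a symmetric matrix A (illustrative sketch: no restarts/deflation)."""
    n = A.shape[0]
    V = np.zeros((n, 0))          # orthonormal basis of the search space
    t = np.asarray(v0, dtype=float)
    for _ in range(maxit):
        # Orthogonalize the expansion vector against V (repeated MGS).
        for _ in range(2):
            t = t - V @ (V.T @ t)
        V = np.column_stack([V, t / np.linalg.norm(t)])
        # Rayleigh-Ritz extraction: largest Ritz pair of the projected matrix.
        H = V.T @ (A @ V)
        w, S = np.linalg.eigh(H)
        theta, u = w[-1], V @ S[:, -1]
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:
            break
        # Correction equation (I - uu^T)(A - theta*I)(I - uu^T) t = -r.
        # The min-norm least-squares solution is automatically orthogonal
        # to u; in practice a few GMRES/MINRES steps are used instead.
        P = np.eye(n) - np.outer(u, u)
        t = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r,
                            rcond=None)[0]
    return theta, u
```

With the starting vector of the form $(0.01, \ldots, 0.01, 1)^T$, the iteration homes in on the largest eigenpair in a handful of steps.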
In Figure 4.6, we give the convergence history for interior
eigenvalues, as obtained with Algorithm 4.17 (top parts) and with
Algorithm 4.19 (bottom parts). The input specifications are the
starting vector $v_0$, the tolerance $\epsilon$, the subspace dimension
parameters $m_{\min}$ and $m_{\max}$, and a target value $\tau$ in the
interior of the spectrum.
Again, every time the curve gets below $\epsilon$, this
indicates convergence of an approximated eigenvalue to within that
tolerance. For all figures, we used 5 steps of GMRES to solve the
correction equation in (32). For the left figures, we did not use
preconditioning. For the right figures, we preconditioned GMRES with
$M$, as in Algorithm 4.18.
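The preconditioned inner iteration can be sketched as follows. This is a hedged illustration, not Algorithm 4.18 itself: the function name `solve_correction` is ours, and the diagonal choice $M = \mathrm{diag}(A) - \theta I$ stands in for whatever preconditioner is available. The operator and the preconditioner are both restricted to the orthogonal complement of the Ritz vector $u$, and only a fixed small number of GMRES steps is taken.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_correction(A, u, theta, r, steps=5, precondition=True):
    """Approximately solve the Jacobi-Davidson correction equation
        (I - uu^T)(A - theta*I)(I - uu^T) t = -r,   t orthogonal to u,
    with a fixed number of GMRES steps, optionally preconditioned by the
    projected diagonal M = diag(A) - theta*I (illustrative sketch)."""
    n = A.shape[0]

    def matvec(x):
        x = x - u * (u @ x)              # project onto the complement of u
        y = A @ x - theta * x            # apply the shifted operator
        return y - u * (u @ y)           # project the result as well

    op = LinearOperator((n, n), matvec=matvec, dtype=float)

    M = None
    if precondition:
        d = np.asarray(A.diagonal(), dtype=float) - theta
        d[np.abs(d) < 1e-12] = 1.0       # guard against tiny pivots
        mu = u / d                       # M^{-1} u, reused in the projection

        def prec(x):
            # Projected preconditioner:
            # y = M^{-1}x - (u^T M^{-1}x / u^T M^{-1}u) * M^{-1}u
            y = x / d
            return y - mu * ((u @ y) / (u @ mu))

        M = LinearOperator((n, n), matvec=prec, dtype=float)

    # One restart cycle of `steps` inner GMRES iterations.
    t, _ = gmres(op, -r, M=M, restart=steps, maxiter=1)
    return t - u * (u @ t)               # enforce t orthogonal to u
```

Embedded in the outer loop, the returned `t` is the expansion vector for the search space; the projections keep all iterates orthogonal to the current Ritz vector.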
 
 
 
 
 
 
 
 
