A band Lanczos method for Hermitian matrices was first 
proposed in [375].
The Hermitian band Lanczos Algorithm 4.12 described
here, however, is somewhat different in that it uses both the
first $j$ Lanczos vectors (4.28) and the
candidate vectors (4.30) for the following $p_c$ Lanczos
vectors.
This makes it possible to build only the $j \times j$ matrix
$T^{(\mathrm{pr})}_j$.
The algorithm in [375] constructs the first $j + p_c$
Lanczos vectors, and as a result, it needs to build a somewhat
larger $(j+p_c) \times j$ matrix, instead of
$T^{(\mathrm{pr})}_j$.
The version of the band Lanczos method described here was
first derived in [177] as a special case of the general
nonsymmetric band Lanczos method proposed in [166].
The version of the band Lanczos method stated as
Algorithm 4.12
performs all multiplications with $A$ as matrix-vector products
with single vectors.
However, at any stage of the algorithm, it is also possible to
precompute the next $p_c$ matrix-vector products, which will
be needed in the next $p_c$ iterations, as a single matrix-matrix
product $AV$ with a block $V$ of $p_c$ vectors.
The procedure is as follows.
Instead of performing step (7) in Algorithm 4.12, we
compute the matrix-matrix product
\begin{displaymath}
A \left[ \begin{array}{ccccc}
v_j & \hat{v}_{j+1} & \hat{v}_{j+2} &
\cdots & \hat{v}_{j+p_c-1}
\end{array} \right]. \eqno (4.43)
\end{displaymath}
This product gives us the vector $A v_j$, which is used in
the remaining steps of iteration $j$, as well as the vectors
$A \hat{v}_{j+1}, A \hat{v}_{j+2}, \ldots, A \hat{v}_{j+p_c-1}$.
In the following $p_c - 1$ iterations, we will need the vectors

\begin{displaymath}
A v_{j+1}, \quad A v_{j+2}, \quad \ldots, \quad A v_{j+p'_c-1},
\eqno (4.44)
\end{displaymath}

where $p'_c$ is defined as $p_c$ minus the
number of deflations that will have occurred during these
following $p_c - 1$ iterations.
In particular, $p'_c = p_c$ if no deflations will occur.
The matrix-vector products (4.44) we need are not quite
the ones we computed in (4.43).
However, the vectors (4.43) and (4.44) are
connected by the relation

\begin{displaymath}
\left[ \begin{array}{cccc}
\hat{v}_{j+1} & \hat{v}_{j+2} & \cdots & \hat{v}_{j+p_c-1}
\end{array} \right]
= \left[ \begin{array}{cccc}
v_{j+1} & v_{j+2} & \cdots & v_{j+p_c-1}
\end{array} \right] U_j, \eqno (4.45)
\end{displaymath}

where $U_j$ is the submatrix of $T^{(\mathrm{pr})}_{j+p_c-1}$
that consists of the entries in rows
$j+1, j+2, \ldots, j+p_c-1$ and columns
$j-p_c+1, j-p_c+2, \ldots, j-1$ of
$T^{(\mathrm{pr})}_{j+p_c-1}$.
If $p'_c < p_c$, some of the $\hat{v}_{j+i}$'s
in (4.45) will lead to deflated vectors.
Let $\hat{v}_{k_1}, \hat{v}_{k_2}, \ldots, \hat{v}_{k_{p'_c-1}}$
be the vectors that are not deflated.
Furthermore, let $U'_j$ be the matrix obtained from $U_j$
by deleting the columns that
correspond to the deflated $\hat{v}_{j+i}$ vectors in (4.45).
With these notations, by (4.45), we have

\begin{displaymath}
\left[ \begin{array}{cccc}
\hat{v}_{k_1} & \hat{v}_{k_2} & \cdots & \hat{v}_{k_{p'_c-1}}
\end{array} \right]
= \left[ \begin{array}{cccc}
v_{j+1} & v_{j+2} & \cdots & v_{j+p'_c-1}
\end{array} \right] U'_j, \eqno (4.46)
\end{displaymath}

where $U'_j$ is nonsingular and upper triangular.
Therefore, using (4.46), we can obtain each of the
vectors $A v_{j+k}$, $k = 1, 2, \ldots, p'_c - 1$, in (4.44)
as a suitable linear combination of $k$ of the precomputed
vectors (4.43).
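
As an illustration of this bookkeeping, the following NumPy sketch (ours, not part of Algorithm 4.12; the names precompute_block_products, recover_needed_products, and U_prime are illustrative) precomputes the block product (4.43) and then recovers the vectors (4.44) through the triangular relation (4.46):

    import numpy as np

    def precompute_block_products(A, v_j, Vhat):
        # One matrix-matrix product in place of p_c separate
        # matrix-vector products, as in (4.43).
        W = A @ np.column_stack([v_j, Vhat])
        return W[:, 0], W[:, 1:]          # A v_j, and the products A vhat_k

    def recover_needed_products(AVhat, U_prime):
        # By (4.46), [vhat ...] = [v ...] U', so [A vhat ...] = [A v ...] U'.
        # Solving X U' = (A Vhat) for X therefore yields the vectors (4.44);
        # in the deflated case, AVhat holds only the non-deflated columns.
        return np.linalg.solve(U_prime.T, AVhat.T).T

Because $U'_j$ is upper triangular, the $k$th recovered column involves only the first $k$ precomputed products, in line with the linear-combination count noted above.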
For Hermitian positive definite matrices $A$,
yet another variant of Algorithm 4.12, based on coupled
recurrences, was proposed in [35].
The motivation for this variant was the observation that for
ill-conditioned
Hermitian positive definite matrices $A$, the smallest eigenvalue
obtained from the Lanczos matrix $T^{(\mathrm{pr})}_j$ may become negative;
see [34,35] for examples.
In exact arithmetic, this would be impossible, since the positive
definiteness of $A$ implies that $T^{(\mathrm{pr})}_j$ is also
positive definite.
However, in finite precision arithmetic, roundoff may cause the
matrix $T^{(\mathrm{pr})}_j$ generated by Algorithm 4.12 to be
slightly indefinite.
This can be avoided easily by using coupled recurrences to generate
a Cholesky factor of $T^{(\mathrm{pr})}_j$, instead of
$T^{(\mathrm{pr})}_j$ itself.
More precisely, the recurrences summarized in (4.32)
are replaced by coupled recurrences of the form

\begin{displaymath}
V_j = P_j U_j, \qquad
A P_j = V_j L_j \Delta_j + \left[ \begin{array}{cccccc}
0 & \cdots & 0 & \hat{v}_{j+1} & \cdots & \hat{v}_{j+p_c}
\end{array} \right], \eqno (4.47)
\end{displaymath}
where $P_j$ is a second basis matrix whose columns span the same
subspace as the columns of $V_j$, and it is constructed to be
$A$-orthogonal:

\begin{displaymath}
P_j^{\ast} A P_j = \Delta_j, \eqno (4.48)
\end{displaymath}

where $\Delta_j$ is a positive definite diagonal matrix.
Furthermore, in (4.47), the matrix $L_j$ is unit lower
triangular, and the matrix $U_j$ is unit upper triangular.
After $j$ iterations, the approximate eigenpairs of $A$ are
then obtained by computing the eigensolutions of the Hermitian
positive definite matrix

\begin{displaymath}
T^{(\mathrm{pr})}_j = U_j^{\ast} \Delta_j U_j, \eqno (4.49)
\end{displaymath}

where $U_j$ and $\Delta_j$ are the matrices from (4.47)
and (4.48).
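
To see concretely why forming $T^{(\mathrm{pr})}_j$ from its factors avoids indefiniteness, consider the following small NumPy sketch (ours, not from [35]; the random matrices are stand-ins for the quantities produced by the coupled recurrences). Any matrix assembled as $U^{\ast} \Delta U$ with a positive diagonal $\Delta$ and nonsingular $U$ is positive definite by construction, up to the roundoff of the final products:

    import numpy as np

    def projected_matrix_from_factors(U, delta):
        # T = U^* diag(delta) U is Hermitian positive definite whenever
        # the entries of delta are positive and U is nonsingular.
        return U.conj().T @ np.diag(delta) @ U

    rng = np.random.default_rng(0)
    j = 5
    U = np.triu(rng.standard_normal((j, j)), k=1) + np.eye(j)  # unit upper triangular
    delta = rng.uniform(1e-8, 1.0, size=j)    # positive, possibly widely varying
    T = projected_matrix_from_factors(U, delta)
    print(np.linalg.eigvalsh(T).min())        # smallest eigenvalue remains positive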
Finally, we note that the block Lanczos method
is the more traditional way of extending the
standard Lanczos algorithm for a single starting vector to
multiple starting vectors.
When the block Lanczos method is combined with deflation (as
described, e.g., in [89]), the resulting algorithm is
mathematically equivalent to the band Lanczos method described
here.
However, the band Lanczos method has two distinct advantages
over the block Lanczos method.
First, deflation is extremely simple, since it requires
only checking the norm of a single vector.
In contrast, in the block Lanczos method, one needs to check
whether a whole block of $p_c$ vectors has full rank and, if not,
find the linearly dependent columns.
Typically, this requires the computation of a QR factorization
of each block.
Second, the band Lanczos method actually performs slightly fewer
explicit orthogonalizations, resulting in a Lanczos
matrix $T^{(\mathrm{pr})}_j$ that is slightly sparser than the one
produced by the block Lanczos method.
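
The contrast between the two deflation tests can be summarized in a short sketch (ours; the function names and the tolerance handling are illustrative, and unpivoted QR is used only as a rough rank indicator):

    import numpy as np

    def band_deflation_check(v_hat, tol):
        # Band Lanczos: deflate a candidate vector iff its norm is tiny.
        return np.linalg.norm(v_hat) <= tol

    def block_deflation_check(V_block, tol):
        # Block Lanczos: the whole block must be tested for (near) rank
        # deficiency, typically via a QR factorization of the block.
        R = np.linalg.qr(V_block, mode='r')
        return np.abs(np.diag(R)) <= tol      # flags nearly dependent columns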