next up previous contents index
Next: Jacobi-Davidson Methods   G. Sleijpen Up: Band Lanczos Method   Previous: Algorithm   Contents   Index


A band Lanczos method for Hermitian matrices was first proposed in [375]. The Hermitian band Lanczos Algorithm 4.12 described here, however, is somewhat different in that it uses both the first $j$ Lanczos vectors (4.28) and the candidate vectors (4.30) for the following $p_c$ Lanczos vectors. This makes it possible to build only the $j\times j$ matrix $T_j^{\rm {(pr)}}$. The algorithm in [375] constructs the first $j+p_c$ Lanczos vectors, and as a result, it needs to build a somewhat larger $(j+p_c)\times j$ matrix, instead of  $T_j^{\rm {(pr)}}$. The version of the band Lanczos method described here was first derived in [177] as a special case of the general nonsymmetric band Lanczos method proposed in [166].

The version of the band Lanczos method stated as Algorithm 4.12 performs all multiplications with $A$ as matrix-vector products with single vectors. However, at any stage of the algorithm, it is also possible to precompute the next $p_c$ matrix-vector products, which will be needed in the next $p_c$ iterations, as a single matrix-matrix product $AV$ with a block $V$ of $p_c$ vectors. The procedure is as follows. Instead of performing step (7) in Algorithm 4.12, we compute the matrix-matrix product

\begin{displaymath}
A \left[ \begin{array}{ccccc}
v_j & \hat{v}_{j+1} & \hat{v}_{j+2} &
\cdots & \hat{v}_{j+p_c-1}
\end{array} \right].
\end{displaymath}

This gives us the vector $\hat{v}_{j+p_c} = A v_j$, which is used in the remaining steps of iteration $j$, as well as the vectors
\begin{displaymath}
A \hat{v}_{j+1},\ A \hat{v}_{j+2},\ \ldots,\ A \hat{v}_{j+p_c-1}.
\end{displaymath} (52)
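As a sketch of this blocking idea (not from the source; the dense test matrix and the NumPy names are purely illustrative), the single matrix-matrix product returns exactly the $p_c$ matrix-vector products that the current and following iterations consume:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_c = 8, 3

# Illustrative Hermitian (real symmetric) test matrix.
A = rng.standard_normal((n, n))
A = A + A.T

v_j = rng.standard_normal(n)                # current Lanczos vector v_j
V_hat = rng.standard_normal((n, p_c - 1))   # candidates v_hat_{j+1}, ..., v_hat_{j+p_c-1}

# One matrix-matrix product in place of p_c matrix-vector products.
block = A @ np.column_stack([v_j, V_hat])

# First column is v_hat_{j+p_c} = A v_j; the remaining columns are
# A v_hat_{j+1}, ..., A v_hat_{j+p_c-1}.
assert np.allclose(block[:, 0], A @ v_j)
assert np.allclose(block[:, 1:], A @ V_hat)
```

On cache-based machines one product with a block of $p_c$ vectors typically runs considerably faster than $p_c$ separate matrix-vector products, which is the motivation for this variant.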

In the following $p_c-1$ iterations, we will need the vectors
\begin{displaymath}
A v_{j+1},\ A v_{j+2},\ \ldots,\ A v_{j+p_c^{\prime}-1}.
\end{displaymath} (53)

Here, $p_c^{\prime}$ is defined as $p_c$ minus the number of deflations that will have occurred during these following $p_c-1$ iterations. In particular, $p_c^{\prime}=p_c$ if no deflations will occur. The matrix-vector products (4.44) we need are not quite the ones we computed in (4.43). However, the vectors (4.43) and (4.44) are connected by the relation
\begin{displaymath}
\begin{array}{rl}
&\!\!\!\!\! \left[ \begin{array}{cccc}
A \hat{v}_{j+1} & A \hat{v}_{j+2} & \cdots & A \hat{v}_{j+p_c-1}
\end{array} \right] \\ [4pt]
&\quad = \left[ \begin{array}{cccc}
A v_{j+1} & A v_{j+2} & \cdots & A v_{j+p_c^{\prime}-1}
\end{array} \right] T_{j+1:j+p_c^{\prime}-1,\,j-p_c+1:j-1}.
\end{array}\end{displaymath} (54)

Here, $T_{j+1:j+p_c^{\prime}-1,j-p_c+1:j-1}$ is the submatrix of $T_{j+p_c^{\prime}-1}$ that consists of the entries in rows $j+1,\ldots,j+p_c^{\prime}-1$ and columns $j-p_c+1,\ldots,j-1$ of $T_{j+p_c^{\prime}-1}$. If $p_c^{\prime}<p_c$, some of the $\hat{v}_k$'s in (4.45) will lead to deflated vectors. Let $\hat{v}_{j_1}, \hat{v}_{j_2},\ldots, \hat{v}_{j_{p_c^{\prime}-1}}$ be the vectors that are not deflated. Furthermore, let $\tilde{T}$ be the matrix obtained from $T_{j+1:j+p_c^{\prime}-1,j-p_c+1:j-1}$ by deleting the columns that correspond to the deflated $\hat{v}_k$ vectors in (4.45). With these notations, by (4.45), we have
\begin{displaymath}
\left[ \begin{array}{cccc}
A \hat{v}_{j_1} & A \hat{v}_{j_2} & \cdots & A \hat{v}_{j_{p_c^{\prime}-1}}
\end{array} \right]
= \left[ \begin{array}{cccc}
A v_{j+1} & A v_{j+2} & \cdots & A v_{j+p_c^{\prime}-1}
\end{array} \right] \tilde{T}.
\end{displaymath} (55)

The matrix $\tilde{T}$ is nonsingular and upper triangular. Therefore, using (4.46), we can obtain each of the vectors $A v_{j+k}$, $k=1,2,\ldots,p_c^{\prime}-1$, in (4.44) as a suitable linear combination of $k$ of the precomputed vectors (4.43).
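Since $\tilde{T}$ is upper triangular and nonsingular, recovering the vectors (4.44) from the precomputed products amounts to one small triangular solve. A minimal NumPy sketch with synthetic stand-ins (none of these variable names come from the source):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3   # m plays the role of p_c' - 1 (number of undeflated candidates)

# Stand-in for the products A v_{j+1}, ..., A v_{j+p_c'-1} that we want.
AV = rng.standard_normal((n, m))

# Nonsingular upper triangular T-tilde (unit diagonal keeps it well conditioned).
T_tilde = np.triu(rng.standard_normal((m, m)), 1) + np.eye(m)

# The precomputed products A v_hat_{j_1}, ..., A v_hat_{j_{p_c'-1}} satisfy (4.46):
AV_hat = AV @ T_tilde

# Invert the relation: AV = AV_hat @ inv(T_tilde), computed as the
# triangular system T_tilde^T X^T = AV_hat^T rather than an explicit inverse.
AV_rec = np.linalg.solve(T_tilde.T, AV_hat.T).T
assert np.allclose(AV_rec, AV)

# Upper triangularity means the k-th recovered vector needs only the
# first k precomputed columns; for k = 1 it is just a scalar multiple.
assert np.allclose(AV_rec[:, 0], AV_hat[:, 0] / T_tilde[0, 0])
```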

For Hermitian positive definite matrices $A$, yet another variant of Algorithm 4.12, based on coupled recurrences, was proposed in [35]. The motivation for this variant was the observation that for ill-conditioned Hermitian positive definite matrices $A$, the smallest eigenvalue obtained from the Lanczos matrix $T_j^{\rm {(pr)}}$ may become negative; see [34,35] for examples. In exact arithmetic, this would be impossible, since the positive definiteness of $A$ implies that $T_j^{\rm {(pr)}}=V_j^{\ast} A V_j$ is also positive definite. However, in finite precision arithmetic, roundoff may cause the matrix $T_j^{\rm {(pr)}}$ generated by Algorithm 4.12 to be slightly indefinite. This can be avoided easily by using coupled recurrences to generate a Cholesky factor of $T_j^{\rm {(pr)}}$, instead of $T_j^{\rm {(pr)}}$ itself. More precisely, the recurrences summarized in (4.32) are replaced by coupled recurrences of the form

\begin{displaymath}
\begin{array}{rl}
A P_j & \!\!\! = V_j L_j D_j + \left[ \begin{array}{cccccc}
0 & \cdots & 0 & \hat{v}_{j+1} & \cdots & \hat{v}_{j+p_c}
\end{array} \right] + \hat{V}_j^{\rm {(dl)}},\\ [4pt]
V_j & \!\!\! = P_j U_j.
\end{array}\end{displaymath} (56)

Here, $P_j$ is a second basis matrix whose columns span the same subspace as $V_j$, and it is constructed to be $A$-orthogonal:

\begin{displaymath}
P_j^{\ast} A P_j = D_j,
\end{displaymath}

where $D_j$ is a positive definite diagonal matrix. Furthermore, in (4.47), the matrix $L_j$ is unit lower triangular, and the matrix $U_j$ is unit upper triangular. After $j$ iterations, the approximate eigenpairs of $A$ are then obtained by computing the eigensolutions of the Hermitian positive definite matrix

\begin{displaymath}
V_j^{\ast} A V_j = U_j^{\ast} D_j U_j,
\end{displaymath}

which is given in terms of $D_j$ and $U_j$.
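The benefit of the factored form can be checked directly: any matrix assembled as $U^{\ast} D U$ with unit triangular $U$ and positive diagonal $D$ is positive definite by construction, so its computed eigenvalues cannot drift below zero the way those of a directly accumulated $T_j^{\rm {(pr)}}$ can. A small illustration with made-up factors (these are not the algorithm's actual recurrences, just random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
j = 6

# Made-up coupled-recurrence factors: unit upper triangular U_j and a
# positive definite diagonal D_j.
U = np.triu(rng.standard_normal((j, j)), 1) + np.eye(j)
D = np.diag(np.abs(rng.standard_normal(j)) + 0.1)

# T_j^{(pr)} assembled from its factors; for any x != 0,
# x* T x = (Ux)* D (Ux) > 0, since U is nonsingular and D is positive.
T = U.T @ D @ U

# Eigenvalues computed from the factored form stay positive.
eigs = np.linalg.eigvalsh(T)
assert np.all(eigs > 0)
```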

Finally, we note that the block Lanczos method is the more traditional way of extending the standard Lanczos algorithm for a single starting vector to multiple starting vectors. When the block Lanczos method is combined with deflation (as described, e.g., in [89]), the resulting algorithm is mathematically equivalent to the band Lanczos method described here. However, the band Lanczos method has two distinct advantages over the block Lanczos method. First, deflation is extremely simple, since it only requires checking the norm of a single vector. In contrast, in the block Lanczos method, one needs to check whether a whole block of $p_c$ vectors has full rank and, if not, find the linearly dependent columns. Typically, this requires the computation of a QR factorization of each block. Second, the band Lanczos method actually performs slightly fewer explicit orthogonalizations, resulting in a Lanczos matrix $T_j^{\rm {(pr)}}$ that is slightly sparser than the one produced by the block Lanczos method.
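To make the deflation contrast concrete, here is a hedged sketch (the function names and tolerance are invented for illustration): the band method tests one vector norm, while the block method must rank-check an entire block, for instance via the diagonal of a QR factor:

```python
import numpy as np

def band_deflation_check(v_hat, tol=1e-8):
    # Band Lanczos: deflation decision from the norm of a single candidate.
    return np.linalg.norm(v_hat) < tol

def block_deflation_flags(block, tol=1e-8):
    # Block Lanczos: a QR factorization of the whole block is needed to
    # locate (nearly) linearly dependent columns.
    R = np.linalg.qr(block, mode='r')
    return np.abs(np.diag(R)) < tol

rng = np.random.default_rng(3)
v = rng.standard_normal(8)
w = rng.standard_normal(8)
B = np.column_stack([v, 2.0 * v, w])   # second column is linearly dependent

assert not band_deflation_check(v)     # a healthy single vector is kept
flags = block_deflation_flags(B)
assert list(flags) == [False, True, False]   # dependent column is flagged
```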

Susan Blackford 2000-11-20