

Algorithm

In order to use a Lanczos procedure to solve the generalized eigenproblem (8.21), we first convert it to an equivalent standard eigenvalue problem. Several transformations are possible; the most common ones are listed below.

(a)
If one wants a few of the eigenvalues of $\{A,B\}$ which are largest in magnitude, and if the matrix $B$ is nonsingular, we may multiply by $B^{-1}$ to obtain

\begin{displaymath}B^{-1} A x = \lambda x.\end{displaymath}

The matrix $B^{-1}A$ is symmetric with respect to $A$ or $B$.
(b)
If one wants a few of the eigenvalues closest to a given value $\sigma$, we apply the shift-and-invert (SI) spectral transformation to obtain

\begin{displaymath}\left( (A - \sigma B)^{-1} B \right) x = (\lambda - \sigma)^{-1} x.\end{displaymath}

In particular, if the smallest eigenvalues in magnitude are desired, one could choose the shift value $\sigma=0$. Note that the matrix $(A - \sigma B)^{-1} B$ is symmetric with respect to $B$ or $(A-\sigma B)$.

We will focus on case (b) in the rest of this section and write the transformed problem as

\begin{displaymath}
H^{-1} B x = \mu x,
\end{displaymath}

where

\begin{displaymath}
H = A - \sigma B\quad \mbox{and} \quad
\mu = \frac{1}{\lambda-\sigma}.
\end{displaymath}

Note that the eigenvalues of $\{A,B\}$ may be complex even when $A$ and $B$ are real, so it may be necessary to use a value of $\sigma$ which is complex. The Lanczos recursion may still be implemented using real arithmetic even if a complex shift is used; see [362].
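For concreteness, the following is a minimal Python/SciPy sketch (not part of the template; all names are illustrative) of applying the operator $H^{-1}B = (A-\sigma B)^{-1}B$ to a vector. A sparse LU factorization of $H$ stands in for the symmetric LDL$^{\rm T}$ factorization discussed in note (7) below.

\begin{verbatim}
# Hedged sketch: y = (A - sigma*B)^{-1} (B x) for sparse, real A and B.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_shift_invert(A, B, sigma):
    """Factor H = A - sigma*B once; return a function x -> H^{-1} B x."""
    H = spla.splu(sp.csc_matrix(A - sigma * B))   # stand-in for an LDL^T factorization
    return lambda x: H.solve(B @ x)

# usage: apply_op = make_shift_invert(A, B, sigma); y = apply_op(x)
\end{verbatim}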

A symmetric indefinite Lanczos algorithm template is presented in Algorithm 8.4. The generated Lanczos vectors $\{q_j\}$ and scalars $\{ \alpha_j, \beta_j, \omega_j\}$ satisfy the following governing equations:

\begin{displaymath}
(H^{-1} B ) Q_j = Q_j (\Omega^{-1}_j T_j ) +
\frac{\beta_{j+1}}{\omega_{j+1}} q_{j+1} e^T_j,
\end{displaymath} (8.22)

where

\begin{displaymath}
T_j = \left[ \begin{array}{cccc}
\alpha_1 & \beta_2 & & \\
\beta_2 & \alpha_2 & \ddots & \\
& \ddots & \ddots & \beta_j \\
& & \beta_j & \alpha_j
\end{array} \right]
\quad \mbox{and} \quad
\Omega_j = \mbox{diag}(\omega_1,\omega_2, \ldots, \omega_j).
\end{displaymath}

The Lanczos vectors $Q_j = \left[ \begin{array}{cccc}
q_1 & q_2 & \cdots & q_j \\
\end{array} \right]$ are $B$ orthogonal, i.e.,

\begin{displaymath}
Q^T_j B Q_j = \Omega_j, \quad\quad
Q^T_j B q_{j+1} = 0,
\end{displaymath}

and are normalized to have unit Euclidean norm, i.e., $\Vert q_j\Vert _2 = 1$.
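As a quick numerical sanity check (a sketch assuming arrays $Q$, $\omega$ and the matrix $B$ from a Lanczos run are at hand), both properties can be verified directly:

\begin{verbatim}
import numpy as np

# Holds only up to rounding and the quality of reorthogonalization; see note (13).
assert np.allclose(Q.T @ (B @ Q), np.diag(omega), atol=1e-8)  # Q_j^T B Q_j = Omega_j
assert np.allclose(np.linalg.norm(Q, axis=0), 1.0)            # ||q_i||_2 = 1
\end{verbatim}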

The reduced eigenvalue problem is

\begin{displaymath}
T_j u^{(j)}_i = \theta^{(j)}_i \Omega_j u^{(j)}_i.
\end{displaymath}

It inherits the numerical difficulties of the original problem. For example, a Ritz value $\theta^{(j)}_i$ could be complex and even defective; in other words, it may belong to a nondiagonal Jordan block of $\Omega^{-1}_j T_j$.

Once $\{\theta^{(j)}_i,u^{(j)}_i\}$ are computed, the right and left Ritz vectors are defined as

\begin{displaymath}
{x}^{(j)}_i := Q_j u^{(j)}_i \quad \mbox{and} \quad
{y}^{(j)}_i := B \bar{Q}_j \bar{u}^{(j)}_i = B \bar{x}^{(j)}_i,
\end{displaymath}

respectively. The quantities $\{\theta^{(j)}_i, x^{(j)}_i, y^{(j)}_i\}$ are referred to as Ritz triplets of the matrix $H^{-1} B$.
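In code (a sketch assuming the Lanczos basis $Q$ and a matrix $U$ whose columns are the reduced-problem eigenvectors $u^{(j)}_i$ are available), the Ritz vectors are formed as:

\begin{verbatim}
import numpy as np

X = Q @ U                            # right Ritz vectors x_i^{(j)} = Q_j u_i^{(j)}
Y = B @ (np.conj(Q) @ np.conj(U))    # left Ritz vectors  y_i^{(j)} = B conj(x_i^{(j)})
\end{verbatim}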

The (right) residual vectors $r^{(j)}_i$ corresponding to the Ritz pairs $\{\theta^{(j)}_i, x^{(j)}_i\}$ are

\begin{displaymath}
r^{(j)}_i = (H^{-1}B)x^{(j)}_i - \theta^{(j)}_ix^{(j)}_i
= \gamma^{(j)}_i q_{j+1},
\end{displaymath} (8.23)

where

\begin{displaymath}
\gamma^{(j)}_i = \frac{\beta_{j+1}}{\omega_{j+1}} (e_j^Tu^{(j)}_i).
\end{displaymath}

Note that the last equality in (8.23) is obtained by multiplying equation (8.22) on the right by $u^{(j)}_i$ and using the definition of $x^{(j)}_i$. The left residual vectors $(s^{(j)}_i)^{\ast}$ are related to the right residual vectors $r^{(j)}_i$ by $(s^{(j)}_i)^{\ast} = (r^{(j)}_i)^T B.$ It follows that
\begin{displaymath}
\Vert r^{(j)}_i\Vert _2 = \vert\gamma^{(j)}_i\vert
\end{displaymath} (8.24)

and
\begin{displaymath}
\Vert s^{(j)}_i\Vert _2 = \vert\gamma^{(j)}_i\vert\, \Vert B q_{j+1}\Vert _2.
\end{displaymath} (8.25)

Note that the vector $Bq_{j+1}$ is directly available in every iteration of the symmetric indefinite Lanczos algorithm. No extra call on $B$ is necessary. Therefore, one of the attractive features of the Lanczos algorithm is that the residual vectors can be calculated without explicitly computing Ritz vectors ${x}^{(j)}_i$ and ${y}^{(j)}_i$.
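The following sketch computes the residual norms (8.24) and (8.25) directly from the last components of the reduced-problem eigenvectors; the names are illustrative, with \verb|U| holding the eigenvectors $u^{(j)}_i$ as columns, \verb|beta_next| $=\beta_{j+1}$, \verb|omega_next| $=\omega_{j+1}$, and \verb|nrm_Bq| $=\Vert Bq_{j+1}\Vert_2$, all assumed available from the Lanczos run. No Ritz vectors are formed.

\begin{verbatim}
import numpy as np

# gamma_i^{(j)} = (beta_{j+1}/omega_{j+1}) * (last component of u_i^{(j)})
gamma = (beta_next / omega_next) * U[-1, :]
right_res_norms = np.abs(gamma)              # ||r_i^{(j)}||_2, eq. (8.24)
left_res_norms  = np.abs(gamma) * nrm_Bq     # ||s_i^{(j)}||_2, eq. (8.25)
\end{verbatim}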


\begin{algorithm}{Symmetric Indefinite Lanczos Method}
{
\begin{tabbing}
[Template of Algorithm 8.4: the numbered steps generate the Lanczos vectors $q_j$ and the scalars $\alpha_j, \beta_j, \omega_j$ satisfying (8.22), solve the reduced eigenvalue problem $T_j u = \theta \Omega_j u$, and, upon convergence, compute approximate eigenvectors ${x}^{(j)}_i$.]
\end{tabbing}}
\end{algorithm}
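The following is a hedged Python/SciPy sketch of the three-term recurrence behind Algorithm 8.4, as implied by the governing equation (8.22). It is not the template verbatim; it omits reorthogonalization, look-ahead, and convergence testing (see the notes below), and all names are illustrative.

\begin{verbatim}
# Sketch of the symmetric indefinite Lanczos recurrence for H^{-1} B, H = A - sigma*B.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sym_indef_lanczos(A, B, sigma, r0, m):
    """Run m Lanczos steps; return Q (n x m) and the scalars alpha, beta, omega."""
    n = A.shape[0]
    H = spla.splu(sp.csc_matrix(A - sigma * B))   # stand-in for the LDL^T factorization
    Q = np.zeros((n, m))
    alpha, omega = np.zeros(m), np.zeros(m)
    beta = np.zeros(m + 1)                        # beta[j] couples q_{j-1} and q_j

    q = r0 / np.linalg.norm(r0)                   # unit 2-norm starting vector
    q_prev, omega_prev = np.zeros(n), 1.0
    w = B @ q                                     # B q_1
    for j in range(m):
        Q[:, j] = q
        omega[j] = q @ w                          # omega_j = q_j^T B q_j (may be negative)
        if omega[j] == 0.0:
            raise RuntimeError("breakdown: omega_j = 0 (look-ahead would be needed)")
        r = H.solve(w)                            # r = H^{-1} B q_j
        alpha[j] = w @ r                          # alpha_j = q_j^T B H^{-1} B q_j
        r -= q * (alpha[j] / omega[j]) + q_prev * (beta[j] / omega_prev)
        # (in practice, reorthogonalize r against Q in the B-inner product; see note (13))
        nrm = np.linalg.norm(r)
        if nrm == 0.0:                            # invariant subspace found; see note (12)
            return Q[:, : j + 1], alpha[: j + 1], beta[: j + 2], omega[: j + 1]
        q_prev, omega_prev = q, omega[j]
        q = r / nrm                               # q_{j+1}, normalized so ||q_{j+1}||_2 = 1
        w = B @ q                                 # B q_{j+1}, reused in the next pass
        beta[j + 1] = (q @ w) * nrm               # beta_{j+1} = omega_{j+1} * ||r||_2
    return Q, alpha, beta, omega
\end{verbatim}

The reduced pencil $\{T_j,\Omega_j\}$ is then assembled from \verb|alpha|, \verb|beta|, and \verb|omega| and solved as described in note (19) below.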

We now comment on some steps of Algorithm 8.4:

(1)
The ideal initial vector is one which is a linear combination of the eigenvectors corresponding to the desired eigenvalues. If an approximation of such a vector is not available, then a random vector may be used. The initial vector $r$ is often preprocessed by multiplying it by $H^{-1} B$. This preprocessing step was suggested in [161] and is essential when $B$ is singular; see §8.6.4 for details.
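For illustration only, with \verb|solve_H| denoting a routine that solves $Hr=w$ (as sketched under note (7) below) and \verb|r0| the initial (possibly random) vector, this preprocessing is a single application of the operator:

\begin{verbatim}
# Replace the starting vector by H^{-1} B r0 before the Lanczos run; see Sec. 8.6.4.
r = solve_H(B @ r0)
\end{verbatim}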

(4) and (15)
The matrix-vector multiplication $Bq_j$ should be performed by a user-provided subroutine which can take advantage of the data structure of the matrix $B$.
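One convenient way to package such a user-provided routine (a sketch, not prescribed by the template) is SciPy's \verb|LinearOperator|, so the Lanczos driver only ever requests matrix-vector products; \verb|B_sparse| below stands for whatever data structure the user holds $B$ in.

\begin{verbatim}
from scipy.sparse.linalg import LinearOperator

def apply_B(q):
    # exploit the structure of B here (sparse, banded, or matrix-free)
    return B_sparse @ q

B_op = LinearOperator((n, n), matvec=apply_B, dtype=float)
w = B_op.matvec(q_j)                 # used in steps (4) and (15)
\end{verbatim}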

(7)
The linear system of equations $H r = w$ must be solved for $r$. Let

\begin{displaymath}
H = {\cal L}{\cal D}{\cal L}^T
\end{displaymath}

be the sparse LDL$^{\rm T}$ factorization (see §10.3) computed before the for-loop. Then the vector $r$ can be efficiently computed in three steps:

$(7)^{\prime}$ solve ${\cal L} u = w$ for $u$,

$(7)^{\prime\prime}$ solve ${\cal D} v = u$ for $v$,

$(7)^{\prime\prime\prime}$ solve ${\cal L}^T r = v$ for $r$.
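A dense sketch of these three solves using SciPy's Bunch-Kaufman routine \verb|scipy.linalg.ldl| follows; a sparse LDL$^{\rm T}$ code would be used in practice, and the permutation handling reflects SciPy's interface rather than anything in the text.

\begin{verbatim}
import numpy as np
from scipy.linalg import ldl, solve_triangular

lu, d, perm = ldl(H, lower=True)   # H = lu @ d @ lu.T, computed once before the loop
L = lu[perm]                       # row permutation brings the factor to triangular form

def solve_H(w):
    """Solve H r = w by the three steps (7)', (7)'', (7)'''."""
    z = solve_triangular(L, w[perm], lower=True)     # (7)'   solve L u = w
    v = np.linalg.solve(d, z)                        # (7)''  solve D v = u (1x1/2x2 blocks)
    y = solve_triangular(L.T, v, lower=False)        # (7)''' solve L^T r = v
    r = np.empty_like(y)
    r[perm] = y                                      # map back to the original ordering
    return r
\end{verbatim}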

(12)
When $\Vert r\Vert _2 = 0$, all of the eigenvalues of the $j\times j$ pencil $T_j - \lambda \Omega_j$ are also eigenvalues of $H^{-1} B$, and the columns of $Q_j$ span an invariant subspace. One can either exit the algorithm or continue it by setting $\beta_j$ to zero and selecting $r$ to be a random vector which is $B$ orthogonal to the columns of $Q_j$.
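A sketch of this restart option, assuming the current basis $Q$ and the scalars $\omega_i$ are at hand:

\begin{verbatim}
import numpy as np

r = np.random.default_rng().standard_normal(Q.shape[0])
r -= Q @ ((Q.T @ (B @ r)) / omega)   # make r B-orthogonal to the columns of Q_j
\end{verbatim}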

(13)
In finite precision arithmetic, the vectors $\{q_j\}$ are no longer $B$ orthogonal. The schemes for reorthogonalization which are discussed for the standard symmetric Lanczos procedure may be extended to the symmetric indefinite algorithm; see §4.4 for details.

(14)
In the symmetric Lanczos algorithm (see §4.4), the Lanczos vectors are scaled to be $B$ orthonormal, $q_j^TBq_j = 1$. However, in the symmetric indefinite algorithm, the vectors are scaled so that

\begin{displaymath}\Vert q_j\Vert _2 = 1. \end{displaymath}

There are two reasons for the change in scaling, both of which have to do with the indefiniteness of $B$. The first reason is that even when $A$ and $B$ are real, scaling the Lanczos vectors so that $q_j^TBq_j = 1$ may involve complex arithmetic. (By keeping track of the sign of $q_j^TBq_j $ it is possible to avoid complex arithmetic but still use $\vert q_j^TBq_j \vert^{1/2}$ as a norm. However, this algorithm is less stable.) The second reason is that when $B$ is indefinite, not every subspace has a $B$-orthonormal basis, and even if it does, it may be highly ill-conditioned [417,20]. The choice of scaling does not remedy the possibility of an ill-conditioned basis of Lanczos vectors. However, it does provide a convenient means of detecting when the basis is becoming ill-conditioned, specifically, a tiny value of $\vert\omega_{j+1}\vert$ [357].

(17)
If $\omega_{j+1}=0$, the Lanczos procedure suffers a breakdown similar to the breakdown which occurs in the non-Hermitian Lanczos procedure; see §7.8. The same remedies developed for the non-Hermitian Lanczos procedure, such as look-ahead [364,178] and adaptive blocking [29,298], can be employed.

(19)
The approximate eigenvalues of the matrix pencil $A - \lambda B$ are determined by solving the reduced $j\times j$ generalized eigenvalue problem

\begin{displaymath}T_j u = \theta \Omega_j u.\end{displaymath}

There are several algorithms available for solving this problem. If $j$ is small, one could apply the standard QZ algorithm to $\{T_j,\Omega_j\}$ or the QR algorithm to $\Omega_j^{-1}T_j$. However, neither of these algorithms takes advantage of the structure of $\{T_j,\Omega_j\}$. A good survey of algorithms which exploit the symmetry of $\{T_j,\Omega_j\}$ is contained in [357], and some more recent work can be found in [406,407].
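As an illustration of the straightforward, structure-ignoring route, the reduced pencil can be handed to the QZ-based generalized solver in SciPy; in this sketch \verb|alpha|, \verb|beta|, and \verb|omega| are assumed to hold $\alpha_1,\ldots,\alpha_j$, $\beta_2,\ldots,\beta_j$, and $\omega_1,\ldots,\omega_j$, and \verb|sigma| is the shift.

\begin{verbatim}
import numpy as np
import scipy.linalg as sla

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # tridiagonal T_j
Omega = np.diag(omega)                                     # Omega_j
theta, U = sla.eig(T, Omega)       # QZ on the pencil {T_j, Omega_j}; theta may be complex
ritz_values = sigma + 1.0 / theta  # back-transform theta = 1/(lambda - sigma)
\end{verbatim}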

For even moderate values of $j$ (e.g., $j\approx 100$), solving the reduced problem becomes computationally significant. For long Lanczos runs, it is recommended to solve the reduced problem periodically, perhaps once in every five or ten Lanczos iterations.

A proper stopping criterion for the inner loop of the symmetric indefinite Lanczos method is

\begin{displaymath}
\frac{\max\left\{ \Vert r^{(j)}_i\Vert _2, \Vert s^{(j)}_i\Vert _2 \right\}}
     {\max\left\{ 1, \Vert Bq_{j+1}\Vert _2 \right\}} \leq \epsilon,
\end{displaymath}

where $\epsilon$ is a user-given tolerance value. Note that $Bq_{j+1}$ is available at the end of a Lanczos run, and its Euclidean norm can be computed at the cost of one dot product; no extra call on $B$ is necessary for that term. A more elaborate convergence testing procedure is discussed in §8.6.3.
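In code, with \verb|gamma| and \verb|nrm_Bq| as in the residual sketch following (8.25) and \verb|eps| the tolerance, the test reads:

\begin{verbatim}
import numpy as np

# Boolean array, one entry per Ritz pair: max(||r||,||s||)/max(1,||B q_{j+1}||) <= eps
converged = (np.maximum(np.abs(gamma), np.abs(gamma) * nrm_Bq)
             / max(1.0, nrm_Bq)) <= eps
\end{verbatim}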

(21)
Compute the approximate eigenvectors only when the corresponding Ritz values have converged according to the test described below.

