Next: Preconditioned Shifted Power Method Up: Preconditioned Eigensolvers   A. Knyazev Previous: Introduction   Contents   Index


General Framework of Preconditioning

Preconditioned methods are designed to handle the case when the only operation we can perform with the matrices $A$ and $B$ of the pencil is multiplication of a vector by $A$ and by $B$. To accelerate the convergence, we introduce a preconditioner $T$. It is also common to call the inverse $M = T^{-1}$ the preconditioner; see, e.g., the previous section. Applying the preconditioner to a vector $x$, i.e., computing $Tx$, usually involves solving a linear system $Mu = x$.

In many engineering applications, preconditioned iterative solvers for linear systems $Ax=b$ are already available, and efficient preconditioners $T \approx A^{-1}$ have been constructed. We shall show that the same preconditioner $T$ can be used to solve the eigenvalue problems $Ax = \lambda x$ and $Bx = \mu Ax$. Moreover, existing codes for the system $Ax=b$ often need only slight modification to solve the partial eigenvalue problem with $A$.

We will assume that the preconditioner $T$ is symmetric positive definite. As $A$ is also symmetric positive definite, there exist positive constants $\delta_1 \geq \delta_0 >0$ such that

\begin{displaymath}
\delta_0 (M x,x) \leq (Ax,x) \leq \delta_1 (M x,x), \ M = T^{-1} .
\end{displaymath} (279)

The ratio $\delta_1 / \delta_0$ can be viewed as the spectral condition number $\kappa(TA)$ of the preconditioned matrix $TA$ and measures how well $M$ approximates, up to a scaling, the original matrix $A$. A smaller ratio $\delta_1 / \delta_0$ ensures faster convergence.
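As a small numerical illustration in Python (the test matrix and the simple Jacobi preconditioner are our assumptions, chosen only to make the sketch self-contained), the ratio $\delta_1 / \delta_0$ can be read off from the extreme eigenvalues of $TA$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SPD test matrix A and a Jacobi (diagonal) preconditioner T.
n = 50
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)        # symmetric positive definite
T = np.diag(1.0 / np.diag(A))      # T approximates A^{-1} only roughly

# T A is similar to the symmetric matrix T^{1/2} A T^{1/2}, so its
# eigenvalues are real and positive; the ratio of the extreme ones is
# the spectral condition number kappa(T A), i.e., delta_1 / delta_0.
eigs = np.sort(np.linalg.eigvals(T @ A).real)
kappa = eigs[-1] / eigs[0]
```

The better $T$ approximates $A^{-1}$ (up to scaling), the closer this ratio is to $1$.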

Indefinite preconditioners for symmetric eigenproblems are also possible, but not recommended. Iterative methods for nonsymmetric problems, e.g., based on minimization of the residual, should generally be used when the preconditioner is indefinite, which may increase computational costs considerably.

We will not assume that the preconditioner $T$ commutes with $A$, or $A^{-1}B$, despite the fact that such an assumption would greatly simplify the theory of preconditioned methods.

We first define, following [268], a preconditioned single-vector iterative solver for the pencil $B - \mu A$, as a generalized polynomial method of the following kind:

\begin{displaymath}
x^{(k)}=P_{m_k}(TA,TB) x^{(0)},
\end{displaymath} (280)

where $P_{m_k}$ is a polynomial of the $m_k$th degree of two independent variables, $ x^{(0)}$ is an initial guess, and $T$ is a fixed preconditioner.

We only need to choose a polynomial, either a priori or during the process of iterations, and use a recursive formula which leads to an iterative scheme. For an approximation $\mu^{(i)}$ to an eigenvalue of the pencil $B - \mu A$ for a given eigenvector approximation $x^{(i)}$, the Rayleigh quotient $\mu (x)$ (11.8) is typically used:

\begin{displaymath}
\mu(x^{(i)}) = \frac {(x^{(i)},Bx^{(i)})}{(x^{(i)},A x^{(i)})}.
\end{displaymath}
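As a minimal Python sketch (the helper name is ours, not from the text), the Rayleigh quotient for the pencil $B - \mu A$ reproduces an eigenvalue exactly when $x$ is the corresponding eigenvector:

```python
import numpy as np
from scipy.linalg import eigh

def rayleigh_quotient(x, A, B):
    # mu(x) = (x, B x) / (x, A x) for the pencil B - mu A
    return float(x @ (B @ x)) / float(x @ (A @ x))

# Hypothetical small SPD pencil; eigh(B, A) solves B v = mu A v.
rng = np.random.default_rng(1)
n = 8
GA, GB = rng.standard_normal((2, n, n))
A = GA @ GA.T + n * np.eye(n)
B = GB @ GB.T + n * np.eye(n)
mus, V = eigh(B, A)                # ascending eigenvalues, A-orthonormal V

# At an exact eigenvector the Rayleigh quotient equals the eigenvalue.
mu_top = rayleigh_quotient(V[:, -1], A, B)
```

For any nonzero $x$, this quotient lies between the smallest and the largest eigenvalue of the pencil.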

Thus, we have a complete description of a general preconditioned eigensolver for the pencil $B - \mu A$, as shown below:

THE PRECONDITIONED EIGENSOLVER FOR THE PENCIL $B - \mu A$
  1. Start: select $x^{(0)}$.
  2. Iterate $m_k$ steps to compute $x^{(k)}=P_{m_k}(TA,TB) x^{(0)}$.
  3. Compute $\mu^{(k)} = {(x^{(k)},Bx^{(k)})}/{(x^{(k)},A x^{(k)})}$.
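A minimal Python sketch of these steps follows. It uses the simplest meaningful polynomial, a single preconditioned-residual step per iteration; this choice is our assumption, since the text leaves $P_{m_k}$ open:

```python
import numpy as np

def preconditioned_eigensolver(A, B, T, x0, num_iters=300):
    """Sketch of the scheme above for the pencil B - mu A, seeking the
    largest eigenvalue mu.  Each iteration applies a degree-1 polynomial
    in TA and TB (an assumed choice of P_{m_k})."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(num_iters):
        mu = (x @ (B @ x)) / (x @ (A @ x))   # Rayleigh quotient
        x = x + T @ (B @ x - mu * (A @ x))   # preconditioned residual step
        x = x / np.linalg.norm(x)            # normalize to avoid overflow
    mu = (x @ (B @ x)) / (x @ (A @ x))
    return mu, x
```

With the exact inverse $T = A^{-1}$, each iteration costs one solve with $A$; in practice $T$ is only an approximate inverse, trading convergence speed for a cheaper application.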

With $B=I$ and $\lambda = 1 / \mu$, we can obtain a similar algorithm for $A - \lambda I$. The polynomials can be chosen in a special way to force convergence of $x^{(k)}$ to an eigenvector other than the one corresponding to an extreme eigenvalue.

Similarly, we define general preconditioned block-iterative methods, where a group of vectors $x^{(k)}_j, \ j=1, \ldots, m$, is computed simultaneously:

\begin{displaymath}
x^{(k)}_j=P_{m_k}(TA,TB) x^{(0)}_j, \qquad j=1, \ldots, m,
\end{displaymath} (281)

where $P_{m_k}$ is a polynomial of the $m_k$th degree of two independent variables, and $ x^{(0)}_j$ are initial guesses. The polynomial $P_{m_k}$ does not have to be (and in practice is usually not) the same for different values of $j$. If it is the same, then (11.11) can be rewritten as the subspace iterations:
\begin{displaymath}
S^{(k)}=P_{m_k}(TA,TB) S^{(0)},
\end{displaymath} (282)

where $S^{(0)}$ and $S^{(k)}$ are $m$-dimensional subspaces. Such a method is easier to analyze theoretically (see an example of preconditioned subspace iteration, based on the power method with shift, in [148,149]), but it may converge more slowly in practice.

In (11.11), the iterative subspace $S^{(k)}$ is defined as the span

\begin{displaymath}
S^{(k)} = {\rm span}\{ x^{(k)}_1, \ldots, x^{(k)}_m \}
\end{displaymath}

of individual vectors $ x^{(k)}_j$. It is common to use (11.11) recursively by combining it with a procedure for selecting individual vectors in $S^{(k)}$ as initial vectors for the next recursive step. The Rayleigh-Ritz method is the usual choice for such a procedure. One step of the recursion is shown below:

THE BLOCK PRECONDITIONED EIGENSOLVER FOR THE PENCIL $B - \mu A$
  1. Start: select $x^{(0)}_j, \ j=1, \ldots, m $.

  2. Iterate $m_k$ steps to compute $\hat{x}^{(k)}_j=P_{m_k}(TA,TB) x^{(0)}_j, \ j=1, \ldots, m$.

  3. Use the Rayleigh-Ritz method for the pencil $B - \mu A$ in the subspace ${\rm span} \{ \hat{x}^{(k)}_1, \ldots, \hat{x}^{(k)}_m \}$ to compute the Ritz values $\mu^{(k)}_j$ and the corresponding Ritz vectors $x^{(k)}_j$.
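One recursion step of the block scheme might look as follows in Python; the degree-1 polynomial and the helper's name are our assumptions, and SciPy's generalized symmetric solver `eigh` handles the projected pencil in the Rayleigh-Ritz step:

```python
import numpy as np
from scipy.linalg import eigh

def block_preconditioned_step(A, B, T, X):
    """One step: apply an (assumed) degree-1 polynomial in TA, TB to each
    column of X, then Rayleigh-Ritz for the pencil B - mu A."""
    # Step 2: preconditioned-residual update, one Rayleigh quotient per column.
    mu = np.diag(X.T @ B @ X) / np.diag(X.T @ A @ X)
    X_hat = X + T @ (B @ X - (A @ X) * mu)
    # Step 3: Rayleigh-Ritz in span{x_hat_j}.  With Q an orthonormal basis,
    # eigh solves the projected pencil (Q^T B Q) c = mu (Q^T A Q) c.
    Q, _ = np.linalg.qr(X_hat)
    ritz_vals, C = eigh(Q.T @ B @ Q, Q.T @ A @ Q)
    order = np.argsort(ritz_vals)[::-1]          # largest Ritz values first
    return ritz_vals[order], Q @ C[:, order]
```

Iterating this step, with the Ritz vectors serving as the initial block for the next step, reproduces the recursion described above.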

In the following sections, we consider particular examples of preconditioned eigensolvers.


Susan Blackford 2000-11-20