

Preconditioned Conjugate Gradient Methods

In this subsection, we describe our favorite method in terms of practical efficiency: the locally optimal PCG method [266]. It combines the robustness and simplicity of Algorithm 11.6, the three-term recurrence of the preconditioned Lanczos Algorithm 11.7, and, perhaps, the fast convergence of the preconditioned projection Algorithm 11.8 and of Davidson's method.

A simple variant of a PCG method for the pencil $B - \mu A$ can be written as

\begin{displaymath}
\begin{array}{l}
x^{(i+1)} = w^{(i)} + \tau^{(i)} x^{(i)} + \gamma^{(i)} x^{(i-1)}, \\
w^{(i)} = T ( B x^{(i)} - \mu^{(i)} A x^{(i)} ), \\
\mu^{(i)} = \mu ( x^{(i)} ),
\end{array}
\end{displaymath} (287)

with properly chosen scalar iteration parameters $\tau^{(i)}$ and $\gamma^{(i)}$. The easiest and most efficient choice of parameters is based on an idea of local optimality [266]; namely, select $\tau^{(i)}$ and $\gamma^{(i)}$ that maximize the Rayleigh quotient $\mu(x^{(i+1)})$ by using the Rayleigh-Ritz method; see Algorithm 11.10.
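In concrete terms, the Rayleigh-Ritz method determines the parameters implicitly. With the trial basis $S^{(i)} = [\, w^{(i)}, x^{(i)}, x^{(i-1)} \,]$, one solves the small generalized eigenvalue problem

\begin{displaymath}
{S^{(i)}}^{\ast} B S^{(i)} \, y = \theta \, {S^{(i)}}^{\ast} A S^{(i)} \, y
\end{displaymath}

and sets $x^{(i+1)} = S^{(i)} y_{\max}$, where $y_{\max}$ corresponds to the maximal Ritz value $\theta_{\max}$. Scaling $y_{\max}$ so that its first component, the coefficient of $w^{(i)}$, equals one recovers (287), with $\tau^{(i)}$ and $\gamma^{(i)}$ the two remaining components.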


\begin{algorithm}
{PCG Method for GHEP (Algorithm 11.10)}
(1) Select $x^{(0)}$.
(2) for $i = 0, 1, \ldots$
(3) \quad $\mu^{(i)} = ( x^{(i)}, B x^{(i)} ) / ( x^{(i)}, A x^{(i)} )$
(4) \quad $r = B x^{(i)} - \mu^{(i)} A x^{(i)}$
(5) \quad $w^{(i)} = T r$
(6) \quad apply the Rayleigh-Ritz method on the trial subspace
    ${\rm span} \{ w^{(i)}, x^{(i)}, x^{(i-1)} \}$
    (for $i = 0$: ${\rm span} \{ w^{(0)}, x^{(0)} \}$)
(7) \quad $x^{(i+1)} = $ the Ritz vector corresponding to
    the maximal Ritz value
(8) end for
\end{algorithm}
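The following is a minimal dense NumPy sketch of Algorithm 11.10, not a production implementation. The names lopcg_max, tol, and maxit are illustrative; $B$ and $A$ are assumed symmetric with $A$ positive definite, and the preconditioner $T$ is assumed to be given as a callable.

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def lopcg_max(B, A, T, x0, tol=1e-8, maxit=200):
    """Locally optimal PCG sketch for the largest eigenvalue of
    B x = mu A x.  B, A are symmetric with A positive definite;
    T is a callable applying the preconditioner to a vector."""
    x = x0 / np.sqrt(x0 @ (A @ x0))     # normalize: (x, A x) = 1
    x_prev = None
    for _ in range(maxit):
        mu = x @ (B @ x)                # Rayleigh quotient
        r = B @ x - mu * (A @ x)        # residual of the pencil B - mu A
        if np.linalg.norm(r) < tol:
            break
        w = T(r)                        # preconditioned residual
        # Trial subspace: {w, x} at the first step, then {w, x, x_prev}.
        cols = [w, x] if x_prev is None else [w, x, x_prev]
        S = np.column_stack(cols)
        # Rayleigh-Ritz: small generalized eigenproblem in the basis S.
        theta, Y = eigh(S.T @ (B @ S), S.T @ (A @ S))
        y = Y[:, -1]                    # eigenvalues ascend: last = maximal Ritz value
        x_new = S @ y
        x_prev, x = x, x_new / np.sqrt(x_new @ (A @ x_new))
    return mu, x
\end{verbatim}

Note that the Rayleigh-Ritz matrices here are at most $3 \times 3$, so the local optimization step is cheap; the near singularity of ${S}^{\ast} A S$ when $x^{(i)}$ and $x^{(i-1)}$ become nearly collinear is exactly the instability discussed at the end of this subsection.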

For the locally optimal version of the PCG method, convergence rate estimate (11.13) applies trivially, since the method converges no more slowly than preconditioned steepest ascent in terms of maximizing the Rayleigh quotient.

We have collected the dominant costs per iteration for Algorithm 11.10 in terms of storage and floating point operations, respectively, in the following tables.

Item                          Storage
Residual                      $1$ $n$-vector
Approximate eigenvector       $2$ $n$-vectors
Temporary vectors             $3$-$5$ $n$-vectors

Action                        Major cost
Rayleigh-Ritz method          $9$ dot products and $2$ matrix-vector products
Residual $r$                  $2$ matrix-vector products, $1$ update
Preconditioned residual $w$   depends on preconditioner $T$
Approximate eigenvector       $1$ update

Conjugate gradient methods for eigenvalue problems were suggested by Bradbury and Fletcher [61] and have attracted much attention recently; different versions have been suggested, e.g., in [433,394,429,186,464,185,167,48]. They are usually constructed as general conjugate gradient methods applied to the minimization of the Rayleigh quotient, and they do not utilize the specific property of the Rayleigh quotient that local optimization problems can be solved easily using the Rayleigh-Ritz method. They are typically based on two linked two-term recurrences (now standard for linear systems), where one of the scalar parameters is chosen to make the search directions conjugate in some sense. The peculiarity of Algorithm 11.10 is that it uses a three-term recurrence instead, which allows us to choose the scalar parameters by solving a local optimization problem and removes the need to construct conjugate directions.

We finally note that Algorithm 11.10 may be somewhat unstable numerically, since the basis vectors of the trial subspace become almost linearly dependent as the iteration converges. This can be cured by orthogonalizing the basis prior to applying the Rayleigh-Ritz method.
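As a minimal sketch of this cure within the NumPy sketch above (the name orthonormalize and the tolerance drop_tol are illustrative, and the dependence test without column pivoting is only a heuristic):

\begin{verbatim}
import numpy as np

def orthonormalize(*vectors, drop_tol=1e-12):
    """Orthonormal basis for the span of the given vectors; guards the
    Rayleigh-Ritz step against an almost linearly dependent basis."""
    S = np.column_stack(vectors)
    Q, R = np.linalg.qr(S)
    # A tiny diagonal entry of R signals a column (nearly) contained
    # in the span of the previous columns, so we drop that direction.
    keep = np.abs(np.diag(R)) > drop_tol * np.abs(R).max()
    return Q[:, keep]

# Usage in the sketch above: Q = orthonormalize(w, x, x_prev), then
# form Q.T @ (B @ Q) and Q.T @ (A @ Q) in place of S.T @ (B @ S)
# and S.T @ (A @ S) before calling eigh.
\end{verbatim}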

We present a block version of Algorithm 11.10 in the next subsection.

