next up previous contents index
Next: Preconditioned Lanczos Methods Up: Preconditioned Eigensolvers   A. Knyazev Previous: Preconditioned Shifted Power Method   Contents   Index

Preconditioned Steepest Ascent/Descent Methods

These methods are similar to Algorithm 11.5 of the previous subsection, but the shifts are chosen adaptively during the iteration and do not require knowledge of any bounds. Historically, preconditioned steepest descent/ascent methods for eigenproblems (see [225,393,365]) were suggested earlier than the shifted power method with an a priori choice of the shift described in the previous subsection.

Here, we present the single-vector version of the preconditioned steepest ascent for the pencil $B - \mu A$ in Algorithm 11.6. For $B=I$ and $\lambda = 1 / \mu$, the method becomes a steepest descent method for $A - \lambda I$.
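The correspondence between the two formulations is a one-line check (this verification is ours, not the original's):

\[
B x = \mu A x \;\stackrel{B=I}{\Longrightarrow}\; A x = \frac{1}{\mu}\, x = \lambda x ,
\]

so maximizing $\mu = (x,Bx)/(x,Ax)$ is equivalent to minimizing $\lambda$, and steepest ascent in $\mu$ becomes steepest descent in $\lambda$.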

[Algorithm 11.6: Preconditioned Steepest Ascent Method for GHEP. The algorithm box was garbled in conversion; only the title and a final normalization involving $\Vert x\Vert _A$ are recoverable here.]
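The single-vector iteration described in the text can be sketched as follows. This is a minimal reconstruction under our own naming, not the book's exact Algorithm 11.6; in particular, the function name, the matrix form of the preconditioner $T$, and the use of `scipy.linalg.eigh` for the two-dimensional Rayleigh-Ritz problem are our choices:

```python
import numpy as np
from scipy.linalg import eigh

def preconditioned_steepest_ascent(A, B, T, x, maxit=200, tol=1e-8):
    """Single-vector preconditioned steepest ascent for the pencil B - mu*A.

    A, B : symmetric matrices, A positive definite.
    T    : preconditioner, here applied as a matrix approximating A^{-1}.
    A sketch reconstructed from the surrounding text, not the book's
    exact Algorithm 11.6.
    """
    x = x / np.sqrt(x @ (A @ x))            # normalize so that ||x||_A = 1
    for _ in range(maxit):
        mu = x @ (B @ x)                    # Rayleigh quotient; (x, Ax) = 1
        r = B @ x - mu * (A @ x)            # residual of B x = mu A x
        if np.linalg.norm(r) < tol:
            break
        w = T @ r                           # preconditioned residual
        # Rayleigh-Ritz on the two-dimensional trial subspace span{x, w}
        S = np.column_stack([x, w])
        Bs, As = S.T @ B @ S, S.T @ A @ S
        vals, vecs = eigh(Bs, As)           # 2x2 generalized eigenproblem
        y = vecs[:, np.argmax(vals)]        # Ritz vector for the largest mu
        x = S @ y
        x = x / np.sqrt(x @ (A @ x))        # re-normalize in the A-norm
    return x @ (B @ x), x
```

For example, with $A = T = I$ and $B = {\rm diag}(1,2,3)$, the iteration converges to the largest eigenvalue $\mu = 3$ of the pencil.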

We note that the shift can be determined with the Rayleigh-Ritz method for the pencil $B - \mu A$ on the trial subspace ${\rm span}\{ x^{(i)}, w \}$. As the trial subspace is two-dimensional, the projected problem reduces to a $2 \times 2$ generalized eigenproblem whose characteristic equation is quadratic. Thus, one can derive explicit formulas for $\tau $ as roots of a quadratic polynomial.
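One possible derivation of that quadratic (the notation $a_i, b_i$ is ours, not the original's): write the next iterate as $x + \tau w$ and require the Rayleigh quotient to be stationary in $\tau$. With

\[
b_1=(x,Bx),\quad b_2=(x,Bw),\quad b_3=(w,Bw),\qquad
a_1=(x,Ax),\quad a_2=(x,Aw),\quad a_3=(w,Aw),
\]

setting $\frac{d}{d\tau}\,\frac{(x+\tau w,\, B(x+\tau w))}{(x+\tau w,\, A(x+\tau w))} = 0$ and clearing denominators yields

\[
(b_3 a_2 - b_2 a_3)\,\tau^2 + (b_3 a_1 - b_1 a_3)\,\tau + (b_2 a_1 - b_1 a_2) = 0 ,
\]

whose roots are the stationary step sizes; the ascent method takes the root giving the larger Rayleigh quotient.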

By construction, the steepest ascent (descent) method maximizes (minimizes) the Rayleigh quotient on every iteration at least as well as the corresponding iterative Algorithm 11.5. Therefore, the convergence results of the previous subsection apply without any changes, and similar observations can be made.

We have collected the dominant costs per iteration of Algorithm 11.6, in terms of storage and floating point operations, in the following two tables:

Item                       Storage
Residual                   $1$ $n$-vector
Approximate eigenvector    $1$ $n$-vector
Temporary vectors          $3$-$5$ $n$-vectors

Action                          Major cost
Rayleigh-Ritz method            $6$ dots and $2$ matrix-vector products
Residual $r$                    $2$ matrix-vector products, $1$ update
Preconditioned residual $w$     Depends on preconditioner $T$
Approximate eigenvector         $1$ update

The main advantages of the preconditioned steepest descent/ascent method are its relative simplicity and the minimal cost of each iteration compared to the other preconditioned eigensolvers considered below. Using the Rayleigh-Ritz method instead of an a priori chosen shift removes the need to know any bounds. On the negative side, every iteration may cost about twice as much as one of Algorithm 11.5. And though convergence is usually better than that of Algorithm 11.5, it is still slower than the linear convergence of the other preconditioned eigensolvers we consider below.


Susan Blackford 2000-11-20