The pseudocode for the Preconditioned Quasi-Minimal Residual Method, with preconditioner M = M1M2, is given in the accompanying figure. This algorithm follows the two-term recurrence version without look-ahead, presented by Freund and Nachtigal [103] as Algorithm 7.1. This version of QMR is simpler to implement than the full QMR method with look-ahead, but it is susceptible to breakdown of the underlying Lanczos process. (Other implementation choices include whether or not to scale the Lanczos vectors, and whether to use three-term recurrences instead of coupled two-term recurrences; such decisions usually affect both the stability and the efficiency of the algorithm.) A professional implementation of QMR with look-ahead is given in Freund and Nachtigal's QMRPACK, which is available through netlib; see the appendix on obtaining software.

We have modified Algorithm 7.1 in [103] to include a relatively inexpensive recurrence relation for the computation of the residual vector. This requires a few extra vectors of storage and a few extra vector updates per iteration, but it avoids spending a matrix-vector product on the residual calculation. Also, the algorithm has been modified so that only two full preconditioning steps are required per iteration instead of three.
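As an illustrative sketch (not the book's figure itself), the unpreconditioned special case M1 = M2 = I of this coupled two-term recurrence iteration can be written in Python/NumPy as follows. Variable names such as v_t and p_t are our own shorthand for the tilde quantities in the pseudocode; the residual is maintained by the recurrence on s described above, so no extra matrix-vector product is spent on the convergence test:

```python
import numpy as np

def qmr(A, b, x0=None, tol=1e-10, maxiter=None):
    """QMR without look-ahead, coupled two-term recurrences,
    no preconditioning (M1 = M2 = I).  Returns (x, converged).
    The residual r is updated by a recurrence rather than by
    recomputing b - A @ x each iteration."""
    n = len(b)
    if maxiter is None:
        maxiter = 10 * n
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    bnrm = np.linalg.norm(b) or 1.0
    v_t = r.copy()                       # \tilde v
    w_t = r.copy()                       # \tilde w
    rho = np.linalg.norm(v_t)
    xi = np.linalg.norm(w_t)
    gamma, eta, theta = 1.0, -1.0, 0.0
    for i in range(1, maxiter + 1):
        if rho == 0.0 or xi == 0.0:
            return x, False              # Lanczos breakdown
        v = v_t / rho                    # normalized Lanczos vectors
        w = w_t / xi
        delta = w @ v
        if delta == 0.0:
            return x, False              # breakdown: w^T v = 0
        if i == 1:
            p, q = v.copy(), w.copy()
        else:
            p = v - (xi * delta / eps) * p
            q = w - (rho * delta / eps) * q
        p_t = A @ p                      # \tilde p = A p
        eps = q @ p_t
        if eps == 0.0:
            return x, False
        beta = eps / delta
        if beta == 0.0:
            return x, False
        v_t = p_t - beta * v
        rho_old, rho = rho, np.linalg.norm(v_t)
        w_t = A.T @ q - beta * w
        xi = np.linalg.norm(w_t)
        gamma_old, theta_old = gamma, theta
        theta = rho / (gamma_old * abs(beta))
        gamma = 1.0 / np.sqrt(1.0 + theta ** 2)
        eta = -eta * rho_old * gamma ** 2 / (beta * gamma_old ** 2)
        if i == 1:
            d, s = eta * p, eta * p_t
        else:
            c = (theta_old * gamma) ** 2
            d, s = eta * p + c * d, eta * p_t + c * s
        x = x + d
        r = r - s                        # residual recurrence: no extra matvec
        if np.linalg.norm(r) <= tol * bnrm:
            return x, True
    return x, False
```

With preconditioning, the auxiliary vectors of the full pseudocode (the M1- and M2-solves and their transposes) reappear, but the structure of the loop is unchanged.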

Computation of the residual is done for the convergence test. If one uses right (or post) preconditioning, that is M1 = I, then a cheap upper bound for ||r^(i)|| can be computed in each iteration, avoiding the recursions for r^(i). For details, see Freund and Nachtigal [103, Proposition 4.1]. This upper bound may be pessimistic by a factor of at most sqrt(i+1).
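The sqrt(i+1) factor can be seen from the quasi-minimization itself. Schematically (our notation, assuming unit-norm Lanczos basis vectors collected as the columns of V_{i+1}):

```latex
% The residual lies in the range of the first i+1 Lanczos vectors:
r^{(i)} \;=\; V_{i+1}\left(\beta e_1 - H_{i+1,i}\,z^{(i)}\right),
\qquad
\tau_i \;=\; \min_{z}\,\bigl\|\beta e_1 - H_{i+1,i}\,z\bigr\| .
% QMR minimizes the quasi-residual norm \tau_i rather than \|r^{(i)}\|.
% Since the columns of V_{i+1} have unit norm,
% \|V_{i+1}\|_2 \le \|V_{i+1}\|_F = \sqrt{i+1}, and therefore
\|r^{(i)}\| \;\le\; \|V_{i+1}\|_2\,\tau_i \;\le\; \sqrt{i+1}\,\tau_i .
```

So the computable quantity sqrt(i+1) tau_i bounds the true residual norm, and the bound can overestimate it by at most the factor sqrt(i+1).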

QMR has roughly the same problems with respect to vector and parallel implementation as BiCG. The scalar overhead per iteration is slightly higher than for BiCG. In cases where the slightly cheaper BiCG method converges irregularly (but fast enough), QMR may be preferred for stability reasons.

Jack Dongarra
Mon Nov 20 08:52:54 EST 1995