**Quasi-Minimal Residual (QMR)**

The pseudocode for the preconditioned Quasi-Minimal Residual method, with preconditioner $M = M_1 M_2$, is given in the accompanying figure. This algorithm follows the two-term recurrence version without look-ahead, presented by Freund and Nachtigal [101] as Algorithm 7.1. This version of QMR is simpler to implement than the full QMR method with look-ahead, but it is susceptible to breakdown of the underlying Lanczos process. (Other implementation variations are whether to scale the Lanczos vectors or not, or to use three-term recurrences instead of coupled two-term recurrences. Such decisions usually have implications for the stability and the efficiency of the algorithm.) A professional implementation of QMR with look-ahead is given in Freund and Nachtigal's QMRPACK, which is available through netlib; see the appendix.

We have modified the algorithm to include a relatively inexpensive recurrence relation for the computation of the residual vector. This requires a few extra vectors of storage and vector update operations per iteration, but it avoids expending a matrix-vector product on the residual calculation. Also, the algorithm has been modified so that only two full preconditioning steps are required instead of three.
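The principle behind such a residual recurrence can be sketched generically (this illustrates the idea, not the QMR-specific update): whenever the iterate is advanced by $x \leftarrow x + \alpha p$, the residual satisfies $r \leftarrow r - \alpha Ap$, and $Ap$ is already available from the iteration itself, so no second matrix-vector product is needed. The matrix and step-length choice below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)  # hypothetical test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x        # true residual, computed explicitly once up front
for _ in range(5):
    p = r                          # illustrative search direction
    Ap = A @ p                     # the one matvec the iteration needs anyway
    alpha = (r @ Ap) / (Ap @ Ap)   # minimal-residual step length
    x = x + alpha * p
    r = r - alpha * Ap             # recurrence: reuses Ap, no extra matvec

# the recurrence reproduces the directly computed residual
print(np.linalg.norm(r - (b - A @ x)))
```

The recurrence holds for any step length $\alpha$, which is why it can be grafted onto an existing iteration at the cost of only a few extra vector updates.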

Computation of the residual is done for the convergence test. If one uses right (or post) preconditioning, that is $M_1 = I$, then a cheap upper bound for $\|r^{(i)}\|$ can be computed in each iteration, avoiding the recursions for $r^{(i)}$. For details, see Freund and Nachtigal [101, Proposition 4.1]. This upper bound may be pessimistic by a factor of at most $\sqrt{i+1}$.

QMR has roughly the same problems with respect to vector and parallel implementation as BiCG. The scalar overhead per iteration is slightly more than for BiCG. In all cases where the slightly cheaper BiCG method converges irregularly (but fast enough), there seems little reason to avoid QMR.
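The comparison with BiCG is easy to try directly. A sketch using SciPy's `bicg` and `qmr` on the same hypothetical test system, recording the true residual norm at each iteration so the two convergence histories can be compared:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg, qmr

n = 200
# hypothetical nonsymmetric test system shared by both solvers
A = diags([-1.0, 3.0, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

def run(method):
    norms = []
    x, info = method(
        A, b, callback=lambda xk: norms.append(np.linalg.norm(b - A @ xk))
    )
    return info, norms

info_b, hist_b = run(bicg)
info_q, hist_q = run(qmr)
print("BiCG:", info_b, len(hist_b), "final residual", hist_b[-1])
print("QMR :", info_q, len(hist_q), "final residual", hist_q[-1])
```

On a well-behaved system like this one the histories look similar; the difference the text describes shows up on harder problems, where BiCG's residual history oscillates while QMR's decreases smoothly.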