Many iterative methods have been developed, and it is impossible to cover them all. We chose the methods below either because they illustrate the historical development of iterative methods or because they represent the current state of the art for solving large sparse linear systems. The methods we discuss are:

- Jacobi
- Gauss-Seidel
- Successive Over-Relaxation (SOR)
- Symmetric Successive Over-Relaxation (SSOR)
- Conjugate Gradient (CG)
- Minimal Residual (MINRES) and Symmetric LQ (SYMMLQ)
- Conjugate Gradients on the Normal Equations (CGNE and CGNR)
- Generalized Minimal Residual (GMRES)
- Biconjugate Gradient (BiCG)
- Quasi-Minimal Residual (QMR)
- Conjugate Gradient Squared (CGS)
- Biconjugate Gradient Stabilized (Bi-CGSTAB)
- Chebyshev Iteration
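
To give a concrete feel for what these methods look like, here is a minimal sketch of the first and simplest of them, the Jacobi method, written in Python with NumPy. The function name, tolerance, and iteration limit are illustrative choices of ours rather than part of any standard template, and the sketch assumes a matrix for which Jacobi converges (for example, one that is strictly diagonally dominant).

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Solve Ax = b by Jacobi iteration (illustrative sketch).

    Assumes A is square with a nonzero diagonal; convergence is
    guaranteed when, e.g., A is strictly diagonally dominant.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    for _ in range(max_iter):
        # Jacobi update: x_{k+1} = D^{-1} (b - (L + U) x_k)
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example on a strictly diagonally dominant system:
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))   # agrees with np.linalg.solve(A, b)
```

The more sophisticated Krylov methods in the list above (CG, GMRES, and so on) replace this simple fixed-point update with updates drawn from a Krylov subspace, but the overall shape, an update loop with a residual-based stopping test, is the same.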

We do not intend to write a ``cookbook'', and have deliberately avoided the words ``numerical recipes'', because these phrases imply that our algorithms can be used blindly, without knowledge of the system of equations. The state of the art in iterative methods does not permit this: some knowledge of the linear system is needed to guarantee convergence of these algorithms, and generally the more that is known, the better the algorithm can be tuned. Thus, we have chosen to present an algorithmic outline, with guidelines for choosing a method and implementing it on particular kinds of high-performance machines. We also discuss the use of preconditioners and relevant data storage issues.
