
Summary of the Methods

An effective iterative method for a linear system cannot be implemented without some knowledge of that system's properties. If good performance is important, consideration must also be given to the computational kernels of the method and how efficiently they can be executed on the target architecture; this point is of particular importance on parallel architectures.

In this section we summarize the main features of each of the following methods; a minimal sketch of the Jacobi iteration, the simplest of them, follows the list.

The accompanying table lists the storage required by each method (without preconditioning). Note that storage for the original system is not included, and scalar storage is ignored.

  1. Jacobi Method
  2. Gauss-Seidel Method
  3. Successive Over-Relaxation (SOR)
  4. Conjugate Gradient (CG)
  5. Generalized Minimal Residual (GMRES)
  6. Biconjugate Gradient (BiCG)
  7. Quasi-Minimal Residual (QMR)
  8. Conjugate Gradient Squared (CGS)
  9. Biconjugate Gradient Stabilized (Bi-CGSTAB)
  10. Chebyshev Iteration
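
As promised above, here is a minimal sketch of the Jacobi iteration for a dense NumPy matrix. The function name, stopping criterion, and iteration cap are our own choices rather than the book's template, and a practical implementation would exploit sparsity.

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
        # Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
        # where D is the diagonal part of A (assumed nonzero).
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        d = np.diag(A)               # diagonal part D of A
        R = A - np.diagflat(d)       # off-diagonal remainder A - D
        for _ in range(max_iter):
            x_new = (b - R @ x) / d
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

Convergence is guaranteed, for example, when A is strictly diagonally dominant; on a general matrix the iteration may diverge.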

Selecting the "best" method for a given class of problems is largely a matter of trial and error. It also depends on how much storage one has available (GMRES), on the availability of products with the transpose A^T (BiCG and QMR), and on how expensive the matrix-vector products (and solve steps with the preconditioner M) are in comparison to SAXPYs and inner products. If these matrix-vector products are relatively expensive, and if sufficient storage is available, then it may be attractive to use GMRES and delay restarting as much as possible.
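
For instance, SciPy's GMRES exposes this storage/restart trade-off through its restart parameter. The sketch below illustrates the idea; the tridiagonal test matrix and the parameter values are placeholders, not recommendations.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    # Placeholder test system: a nonsymmetric tridiagonal matrix.
    n = 1000
    A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # `restart` bounds the number of stored basis vectors: a larger value
    # costs more storage but avoids convergence-slowing restarts.
    x, info = gmres(A, b, restart=50, maxiter=1000)
    print("converged" if info == 0 else "gmres returned info = %d" % info)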

Another table shows the types of operations performed per iteration. Depending on the particular problem or data structure, the user may observe that a particular operation can be performed more efficiently.
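
To make the operation types concrete, the sketch below writes out a textbook unpreconditioned conjugate gradient iteration with each statement labeled by its kernel type: matrix-vector product, inner product, or SAXPY-style vector update. This is our own illustration, not the book's template verbatim.

    import numpy as np

    def cg(A, b, tol=1e-8, max_iter=1000):
        # Unpreconditioned conjugate gradients; one kernel per statement.
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x                    # matrix-vector product + vector update
        p = r.copy()
        rho = r @ r                      # inner product
        for _ in range(max_iter):
            q = A @ p                    # matrix-vector product
            alpha = rho / (p @ q)        # inner product
            x = x + alpha * p            # SAXPY
            r = r - alpha * q            # SAXPY
            rho_new = r @ r              # inner product
            if np.sqrt(rho_new) < tol:
                break
            p = r + (rho_new / rho) * p  # SAXPY-style update
            rho = rho_new
        return x

Counting these statements per iteration reproduces the kind of tally the table records: one matrix-vector product, two inner products (plus a norm), and three vector updates.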


