Computational Aspects of the Methods


Efficient solution of a linear system is largely a function of the proper choice of iterative method. To obtain good performance, however, consideration must also be given to the computational kernels of the method and how efficiently they can be executed on the target architecture. This point is of particular importance on parallel architectures; see the discussion of parallelism elsewhere in this book.

Iterative methods are very different from direct methods in this respect. The performance of direct methods, both for dense and sparse systems, is dominated by the factorization of the matrix. This operation is absent in iterative methods (although preconditioners may require a setup phase), and with it iterative methods lack the dense matrix suboperations that can be executed at very high efficiency on most current computer architectures. We therefore expect a lower flop rate for iterative than for direct methods. (Dongarra and Van der Vorst [74] give some experimental results about this, and provide a benchmark code for iterative solvers.) Furthermore, the basic operations in iterative methods often use indirect addressing, depending on the data structure; such operations also execute at relatively low efficiency.
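To make the indirect-addressing point concrete, here is a minimal sketch (not from the book) of a sparse matrix-vector product in compressed sparse row (CSR) format; the array names are illustrative. The inner loop gathers entries of x through the column-index array, which is the kind of memory access pattern that typically runs well below the flop rate of dense kernels.

```python
import numpy as np

def csr_matvec(rowptr, col, val, x):
    """y = A*x for A stored in CSR format (illustrative sketch)."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        # entries of row i live in val[rowptr[i]:rowptr[i+1]]
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += val[k] * x[col[k]]   # x[col[k]]: indirect addressing
    return y

# 3x3 example: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
rowptr = np.array([0, 2, 3, 5])
col    = np.array([0, 2, 1, 0, 2])
val    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
x      = np.array([1.0, 1.0, 1.0])
print(csr_matvec(rowptr, col, val, x))  # [3. 3. 9.]
```

In a dense matrix-vector product the inner loop streams through contiguous memory; here each access to x depends on the run-time value of col[k], which defeats many compiler and hardware optimizations.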

However, this lower efficiency of execution does not imply anything about the total solution time for a given system. Furthermore, iterative methods are usually simpler to implement than direct methods, and since no full factorization has to be stored, they can handle much larger systems than direct methods.

In this section we summarize, for each method, the operations performed per iteration and the storage required.

Table: Summary of Operations per Iteration. ``a/b'' means ``a'' multiplications with the matrix and ``b'' with its transpose.

The table below lists the storage required for each method (without preconditioning). Note that we do not include the storage for the original system, and we ignore scalar storage.

Table: Storage Requirements for the Methods: n denotes the order of the matrix.

  1. Jacobi Method
  2. Gauss-Seidel Method
  3. Successive Over-Relaxation (SOR)
  4. Conjugate Gradient (CG)
  5. Generalized Minimal Residual (GMRES)
  6. Biconjugate Gradient (BiCG)
  7. Quasi-Minimal Residual (QMR)
  8. Conjugate Gradient Squared (CGS)
  9. Biconjugate Gradient Stabilized (Bi-CGSTAB)
  10. Chebyshev Iteration
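As an illustration of the kernel counts such a table records, consider the conjugate gradient method from the list above: each iteration of unpreconditioned CG uses one matrix-vector product, two inner products, and three SAXPY-type vector updates, with four vectors of storage besides the system itself. The following is a sketch for a small dense example, not the book's implementation.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=100):
    """Unpreconditioned conjugate gradients, kernels labeled."""
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rr = r @ r
    for _ in range(maxit):
        q = A @ p                       # 1 matrix-vector product
        alpha = rr / (p @ q)            # inner product (1 of 2)
        x += alpha * p                  # SAXPY
        r -= alpha * q                  # SAXPY
        rr_new = r @ r                  # inner product (2 of 2)
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p       # SAXPY-like update
        rr = rr_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD example
b = np.array([1.0, 2.0])
x = cg(A, b)
print(np.allclose(A @ x, b))  # True
```

Counting kernels this way is what lets one compare methods by cost per iteration rather than by flop rate alone.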

Selecting the ``best'' method for a given class of problems is largely a matter of trial and error. It also depends on how much storage one has available (GMRES), on the availability of products with the transpose of the matrix (BiCG and QMR), and on how expensive the matrix-vector products (and Solve steps with the preconditioner) are in comparison to SAXPYs and inner products. If these matrix-vector products are relatively expensive, and if sufficient storage is available, then it may be attractive to use GMRES and delay restarting as much as possible.
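A back-of-envelope estimate shows why the GMRES storage question matters: the Krylov basis of restarted GMRES(m) alone needs roughly m + 1 vectors of length n, so delaying the restart trades memory for (often) fewer total iterations. The figures below are illustrative, not from the book's tables.

```python
def gmres_basis_bytes(n, m, bytes_per_float=8):
    """Memory for the (m+1) Krylov basis vectors of GMRES(m)."""
    return (m + 1) * n * bytes_per_float

n = 1_000_000                 # hypothetical problem size
for m in (10, 50, 200):
    gb = gmres_basis_bytes(n, m) / 1e9
    print(f"GMRES({m}): about {gb:.3f} GB for the basis")
```

For a million unknowns, moving the restart length from 10 to 200 grows the basis from under 0.1 GB to about 1.6 GB, which is exactly the storage-versus-convergence trade-off mentioned above.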

The table of operations above shows the type of operations performed per iteration. Based on the particular problem or data structure, the user may observe that a particular operation could be performed more efficiently.
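One common example of such an observation: many iterations follow a vector update (SAXPY) immediately with an inner product on the updated vector, and these two kernels can be fused into a single pass over the data, roughly halving the memory traffic for that step. This is an illustrative sketch of the idea, not a kernel from the book.

```python
import numpy as np

def fused_update_dot(r, q, alpha):
    """In one sweep: r <- r - alpha*q, and return the new r'r."""
    rr = 0.0
    for i in range(len(r)):      # one pass instead of two
        r[i] -= alpha * q[i]
        rr += r[i] * r[i]
    return rr

r = np.array([1.0, 2.0, 3.0])
q = np.array([1.0, 1.0, 1.0])
print(fused_update_dot(r, q, 0.5))  # 8.75; r is now [0.5 1.5 2.5]
```

In a low-level implementation such fusion keeps each element of r in a register between the update and the accumulation; in high-level NumPy one would instead rely on the library's vectorized kernels.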


Jack Dongarra
Mon Nov 20 08:52:54 EST 1995