
At first it may appear that the sequential time of solving a
factorization is of the order of the number of variables, but things
are not quite that bad. Consider the special case of central
differences on a regular domain of $n_1\times n_2$ points. The variables
on any diagonal, that is, in locations $(i,j)$ with $i+j=k$, depend
only on those on the previous diagonal, that is, with $i+j=k-1$.
Therefore it is possible to have a vector computer *pipeline*
the operations on each diagonal, and a parallel computer can process
the elements of a diagonal simultaneously;
see Van der Vorst [198].
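As an illustration, here is a minimal sketch of such a diagonal-wise sweep, assuming a unit-diagonal lower triangular factor from a 5-point stencil with hypothetical constant west/south couplings `a_w` and `a_s` (names and constant coefficients are assumptions, not from the text):

```python
import numpy as np

def wavefront_lower_solve(b, a_w, a_s):
    """Solve a unit-lower-triangular system on an n1 x n2 grid by
    sweeping the anti-diagonals i + j = k.  Points on one diagonal
    depend only on the previous diagonal, so each update below is a
    single vector operation over the whole diagonal."""
    n1, n2 = b.shape
    x = np.array(b, dtype=float)             # diagonal k = 0 is x = b
    for k in range(1, n1 + n2 - 1):          # remaining diagonals, in order
        i = np.arange(max(0, k - n2 + 1), min(n1, k + 1))
        j = k - i
        w = i > 0                            # points with a west neighbour
        x[i[w], j[w]] -= a_w * x[i[w] - 1, j[w]]
        s = j > 0                            # points with a south neighbour
        x[i[s], j[s]] -= a_s * x[i[s], j[s] - 1]
    return x
```

Each pass over a diagonal touches independent elements, which is exactly what a vector pipeline or a parallel machine can exploit; the number of sequential steps drops from $n_1 n_2$ to $n_1 + n_2 - 1$.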

Another way of vectorizing the solution of the triangular factors is to use some form of series expansion. If the lower triangular factor is normalized to the form $I - L$ (where $L$ is strictly lower triangular), then its inverse can be given as either of the following two series:

$$(I - L)^{-1} = I + L + L^2 + L^3 + \cdots$$

$$(I - L)^{-1} = (I + L)(I + L^2)(I + L^4)\cdots$$

(The first series is called a "Neumann expansion", the second an "Euler expansion". Both series are finite, since a strictly lower triangular $L$ of order $n$ satisfies $L^n = 0$, but their length prohibits practical use of this fact.) Parallel or vectorizable preconditioners can be derived from an incomplete factorization by taking a small number of terms in either series. Experiments indicate that a small number of terms, while giving high execution rates, yields almost the full precision of the more recursive triangular solution (see Axelsson and Eijkhout [15] and Van der Vorst [196]).
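A hedged sketch of how the truncated expansions replace the recursive solve with matrix-vector products (function names are illustrative; `m` counts the Neumann terms or Euler factors kept):

```python
import numpy as np

def neumann_apply(L, b, m):
    """Approximate (I - L)^{-1} b by b + L b + ... + L^m b.
    Every step is a matrix-vector product, which vectorizes well."""
    x = b.copy()
    t = b.copy()
    for _ in range(m):
        t = L @ t          # t becomes L^k b
        x = x + t
    return x

def euler_apply(L, b, m):
    """Apply (I + L)(I + L^2)(I + L^4)... to b, rightmost factor first;
    m factors reproduce the Neumann terms up to L^(2^m - 1)."""
    x = b.copy()
    P = L.copy()           # P holds L^(2^k)
    for _ in range(m):
        x = x + P @ x
        P = P @ P
    return x
```

Because a strictly lower triangular $L$ of order $n$ is nilpotent, both routines become exact once enough terms are taken; in practice only a few terms are used, trading a little precision for vector speed.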

More radical approaches for increasing the parallelism in incomplete
factorizations are based on a renumbering of the problem variables.
For instance, on rectangular domains one could start numbering the
variables from all four corners simultaneously, thereby creating
four-fold parallelism
(see Dongarra *et al.* [68],
Van der Vorst [197]).
The most extreme case is the red/black ordering
(or, for more general matrices, the multi-color ordering), which gives
the absolute minimum number of sequential steps.

Multi-coloring is also an attractive method for vector computers. Since points of one color are uncoupled, they can be processed as one vector; see Doi [65], Melhem [151], and Poole and Ortega [171].
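A minimal sketch of the two sequential steps that red/black ordering leaves in the forward solve, for a model 5-point stencil with a single hypothetical coupling constant `c` and a unit-diagonal factor (both assumptions made for illustration):

```python
import numpy as np

def redblack_lower_solve(b, c):
    """Forward solve with a unit-lower-triangular factor under red/black
    ordering of a 5-point stencil (model problem, constant coupling c).
    Red points (i + j even) are numbered first, so their block is the
    identity; each black point then depends only on already-computed red
    neighbours.  The whole solve takes exactly two vector steps."""
    n1, n2 = b.shape
    x = np.array(b, dtype=float)          # step 1: x_red = b_red, all at once
    I, J = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    black = (I + J) % 2 == 1
    nb = np.zeros_like(x)                 # sum of the four grid neighbours
    nb[1:, :] += x[:-1, :]
    nb[:-1, :] += x[1:, :]
    nb[:, 1:] += x[:, :-1]
    nb[:, :-1] += x[:, 1:]
    x[black] = b[black] - c * nb[black]   # step 2: all black points at once
    return x
```

With $p$ colors the same idea gives $p$ sequential steps, each a full vector operation over the points of one color.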

However, for such ordering strategies there is usually a trade-off between the degree of parallelism and the resulting number of iterations. The reason is that a different ordering may give rise to a different error matrix; in particular, the norm of the error matrix may vary considerably between orderings. See experimental results by Duff and Meurant [76] and a partial explanation of them by Eijkhout [82].