One reason that block methods are of interest is that they are potentially more suitable for vector computers and parallel architectures. Consider the block factorization
\[
A = (D + L)D^{-1}(D + U),
\]
where $D$ is the block diagonal matrix of pivot blocks, and $L$, $U$ are the block lower and upper triangle of the factorization; they coincide with $L_A$, $U_A$ in the case of a block tridiagonal matrix.
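For concreteness, in the block tridiagonal case the pivot blocks of the exact factorization are generated by the standard recursion (a worked instance added here for illustration, not part of the original text):
\[
D_1 = A_{11}, \qquad D_i = A_{ii} - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}, \quad i = 2, \dots, n,
\]
so that each pivot block absorbs the fill produced by eliminating the previous block row.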
We can turn this into an incomplete factorization by replacing the block diagonal matrix of pivots $D$ by the block diagonal matrix of incomplete factorization pivots $X = \mathrm{diag}(X_i)$, giving
\[
M = (X + L)X^{-1}(X + U).
\]
For factorizations of this type (which covers all methods in Concus, Golub and Meurant [57] and Kolotilina and Yeremin [141]) solving a linear system means solving smaller systems with the $X_i$ matrices.
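As an illustration of this first type, the following sketch applies $M^{-1}$ for a block tridiagonal matrix stored as lists of dense blocks. It is a minimal sketch, not taken from the references: the function and variable names (apply_block_ilu_solve, X_blocks, and so on) are illustrative, and the small systems with the $X_i$ are solved here by dense LU.
\begin{verbatim}
import numpy as np

def apply_block_ilu_solve(X_blocks, lower_blocks, upper_blocks, r_blocks):
    """Apply M^{-1} r for M = (X + L) X^{-1} (X + U) on a block
    tridiagonal matrix.  X_blocks[i] is the incomplete pivot X_i,
    lower_blocks[i] is A_{i+1,i}, upper_blocks[i] is A_{i,i+1},
    and r_blocks[i] is the i-th block of the right-hand side."""
    n = len(X_blocks)

    # Forward block substitution: solve (X + L) w = r,
    # one small system with X_i per block row.
    w = [None] * n
    w[0] = np.linalg.solve(X_blocks[0], r_blocks[0])
    for i in range(1, n):
        rhs = r_blocks[i] - lower_blocks[i - 1] @ w[i - 1]
        w[i] = np.linalg.solve(X_blocks[i], rhs)

    # Backward block substitution: solve (I + X^{-1} U) z = w,
    # again one small system with X_i per block row.
    z = [None] * n
    z[n - 1] = w[n - 1]
    for i in range(n - 2, -1, -1):
        z[i] = w[i] - np.linalg.solve(X_blocks[i],
                                      upper_blocks[i] @ z[i + 1])
    return z
\end{verbatim}
Each block row requires a solve with its pivot block $X_i$, which is the recursive, less vectorizable part of this variant.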
Alternatively, we can replace $D$ by $Y^{-1}$, the inverse of the block diagonal matrix of approximations to the inverses of the pivots, $Y = \mathrm{diag}(Y_i)$, giving
\[
M = (Y^{-1} + L)Y(Y^{-1} + U).
\]
For this second type (which was discussed by Meurant [155], Axelsson and Polman [21] and Axelsson and Eijkhout [15]) solving a system with $M$ entails multiplying by the $Y_i$ blocks (a sketch of this application is given below).
Therefore, the second type has a much higher potential for
vectorizability. Unfortunately, such a factorization is theoretically
more troublesome; see the above references or Eijkhout and
Vassilevski [90].
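A companion sketch of the second type, under the same block tridiagonal storage assumptions and again with illustrative names (apply_block_inverse_precon, Y_blocks), shows that applying $M^{-1}$ uses only block matrix--vector products with the $Y_i$, with no small solves:
\begin{verbatim}
import numpy as np

def apply_block_inverse_precon(Y_blocks, lower_blocks, upper_blocks, r_blocks):
    """Apply M^{-1} r for M = (Y^{-1} + L) Y (Y^{-1} + U) on a block
    tridiagonal matrix.  Y_blocks[i] approximates the inverse of the
    i-th pivot block; only multiplications by the Y_i are needed."""
    n = len(Y_blocks)

    # Forward sweep: solve (Y^{-1} + L) v = r, i.e.
    # v_i = Y_i (r_i - A_{i,i-1} v_{i-1}); multiplications only.
    v = [None] * n
    v[0] = Y_blocks[0] @ r_blocks[0]
    for i in range(1, n):
        v[i] = Y_blocks[i] @ (r_blocks[i] - lower_blocks[i - 1] @ v[i - 1])

    # Backward sweep: solve (Y^{-1} + U) z = Y^{-1} v, i.e.
    # z_i = v_i - Y_i A_{i,i+1} z_{i+1}; again multiplications only.
    z = [None] * n
    z[n - 1] = v[n - 1]
    for i in range(n - 2, -1, -1):
        z[i] = v[i] - Y_blocks[i] @ (upper_blocks[i] @ z[i + 1])
    return z
\end{verbatim}
Neither sweep contains a triangular or dense solve with a pivot block, which is the source of the improved vectorization potential noted above.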