It gives eigenvalues close to a chosen shift point \(\sigma\), and we get convergence after a much smaller number of steps. Even if the systems with the shifted matrix \(A-\sigma B\) are more cumbersome to solve than those with \(B\) needed in the direct variant, the smaller number of steps needed will most often compensate for this.

The basic recursion (5.9) is replaced by
\[
(A-\sigma B)^{-1}BV_j=V_jT_j+\beta_j v_{j+1}e_j^T,\qquad V_j^TBV_j=I_j.
\]
The matrix \(T_j\) is symmetric tridiagonal,
\[
T_j=\begin{pmatrix}\alpha_1&\beta_1&&\\ \beta_1&\alpha_2&\ddots&\\ &\ddots&\ddots&\beta_{j-1}\\ &&\beta_{j-1}&\alpha_j\end{pmatrix}.
\]
An eigensolution of \(T_j\),
\[
T_jy_i^{(j)}=y_i^{(j)}\theta_i^{(j)},
\]
is used to get an approximate eigenvalue
\[
\lambda_i^{(j)}=\sigma+\frac{1}{\theta_i^{(j)}}
\]
and an approximate eigenvector
\[
x_i^{(j)}=V_jy_i^{(j)}.\tag{5.20}
\]
Its residual is
\[
r_i^{(j)}=Ax_i^{(j)}-\lambda_i^{(j)}Bx_i^{(j)},
\]
and its norm is small whenever the quantity
\[
\beta_j\,\frac{|e_j^Ty_i^{(j)}|}{|\theta_i^{(j)}|}
\]
is small. This approximation is not a standard Ritz value but a harmonic Ritz value of the original eigenproblem (5.1); see the discussion in §3.2.
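To make the spectral transformation concrete, here is a minimal dense-matrix sketch (the pencil \((A,B)\), the shift \(\sigma\), and all sizes are illustrative, not taken from the text): the eigenvalues \(\theta\) of \((A-\sigma B)^{-1}B\) are mapped back by \(\lambda=\sigma+1/\theta\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric test pencil (A, B) with B positive definite.
n = 6
Q = rng.standard_normal((n, n))
A = (Q + Q.T) / 2
B = np.diag(rng.uniform(1.0, 2.0, n))

sigma = 0.5  # chosen shift point (illustrative)

# Eigenvalues theta of the shift-and-invert operator C = (A - sigma B)^{-1} B.
C = np.linalg.solve(A - sigma * B, B)
theta = np.linalg.eigvals(C).real

# Back transformation: lambda = sigma + 1/theta recovers the original spectrum.
lam = np.sort(sigma + 1.0 / theta)

# Compare with a direct solution of A x = lambda B x.
lam_direct = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(np.allclose(lam, lam_direct))
```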

As in the standard case, we can monitor this quantity and flag the eigenvalue \(\lambda_i^{(j)}\) as converged when it gets small, without actually performing the matrix-vector multiplication (5.20). We save this operation until the step when the estimate indicates convergence.

We get the following algorithm.
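As a hedged illustration of how the pieces fit together, the following is a simplified dense sketch of a shift-and-invert Lanczos iteration with full reorthogonalization in the \(B\) inner product; it is not the algorithm's listing itself, and all function and variable names are ours.

```python
import numpy as np

def shift_invert_lanczos(A, B, sigma, m):
    """m steps of B-orthogonal Lanczos on C = (A - sigma*B)^{-1} B.

    Returns back-transformed Ritz values lambda = sigma + 1/theta and
    approximate eigenvectors x = V y.  Dense, illustrative sketch only.
    """
    n = A.shape[0]
    K = A - sigma * B              # in practice: factor once, e.g. K = L D L^T
    V = np.zeros((n, m))           # B-orthonormal Lanczos basis
    alpha, beta = np.zeros(m), np.zeros(m)

    r = np.random.default_rng(1).standard_normal(n)
    V[:, 0] = r / np.sqrt(r @ (B @ r))
    for j in range(m):
        w = np.linalg.solve(K, B @ V[:, j])        # apply the operator C
        alpha[j] = w @ (B @ V[:, j])
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        # Full reorthogonalization against the whole basis, B inner product.
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ (B @ w))
        if j + 1 < m:
            beta[j] = np.sqrt(w @ (B @ w))
            V[:, j + 1] = w / beta[j]

    # Eigensolution of the symmetric tridiagonal T_m ...
    T = np.diag(alpha) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    theta, Y = np.linalg.eigh(T)
    lam = sigma + 1.0 / theta      # ... gives the back-transformed Ritz values
    X = V @ Y                      # ... and the approximate eigenvectors
    return lam, X

# Illustrative pencil; running m = n steps makes the process exact.
rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
A = (Q + Q.T) / 2
B = np.diag(rng.uniform(1.0, 2.0, n))
sigma = 0.3

lam, X = shift_invert_lanczos(A, B, sigma, m=n)
i = int(np.argmin(np.abs(lam - sigma)))          # eigenvalue closest to the shift
res = np.linalg.norm(A @ X[:, i] - lam[i] * (B @ X[:, i]))
print(res)                                       # small residual norm
```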

We note that only a few steps differ from the previous direct iteration algorithm. In actual implementations one can use the same program for both computations. Let us comment on those steps that differ:

- (8)
- Now we expect convergence for a rather small
number of vectors \(j\), and it is advisable to use full reorthogonalization,
even if selective reorthogonalization gives results that are accurate enough.
Apply the Gram-Schmidt orthogonalization process,
\[
r:=r-V_j\left(V_j^TBr\right),
\]
until \(r\) is \(B\)-orthogonal to the basis \(V_j\). We need one extra matrix-vector multiplication with \(B\) for each reorthogonalization, and we have to use the classical Gram-Schmidt orthogonalization process. We may save and use the auxiliary basis \(W_j=BV_j\) to save these extra operations,
\[
r:=r-V_j\left(W_j^Tr\right).
\]

- (14)
- Now the basis \(V_j\) is used to get the
eigenvector (5.20),
\[
x_i^{(j)}=V_jy_i^{(j)},
\]
for each \(\lambda_i^{(j)}\) that is flagged as converged.
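Step (8) can be sketched as follows, assuming a dense setting and illustrative names: the saved auxiliary basis \(W=BV\) lets each Gram-Schmidt sweep form the coefficients \(V^TBr=W^Tr\) without a fresh multiplication by \(B\).

```python
import numpy as np

def b_reorthogonalize(r, V, W, tol=1e-10, maxiter=3):
    """Classical Gram-Schmidt in the B inner product (illustrative sketch).

    V holds the B-orthonormal basis and W = B V the saved auxiliary basis,
    so the coefficients h = V^T (B r) = W^T r cost no extra product with B.
    Sweeps are repeated until r is B-orthogonal to the span of V.
    """
    for _ in range(maxiter):
        h = W.T @ r                # h = V^T B r, via the saved W = B V
        r = r - V @ h
        if np.linalg.norm(h) <= tol * np.linalg.norm(r):
            break                  # nothing significant left to remove
    return r

# Illustrative setup: a B-orthonormal V (via QR in the B^{1/2} metric).
rng = np.random.default_rng(0)
n = 5
B = np.diag(rng.uniform(1.0, 2.0, n))
S = np.sqrt(B)                                   # B = S^T S for diagonal B
Qf, _ = np.linalg.qr(S @ rng.standard_normal((n, 2)))
V = np.linalg.solve(S, Qf)                       # V^T B V = I
W = B @ V

r = b_reorthogonalize(rng.standard_normal(n), V, W)
print(np.abs(V.T @ (B @ r)).max())               # B-orthogonality of r to V
```

In step (14), recovering a converged eigenvector is then just the matrix product \(x=Vy\) with the columns of the same basis.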

When we use a shift-and-invert operator (5.16),
we factor
\[
A-\sigma B=LDL^T.
\]
During the actual iteration, we use the factors \(L\) and \(D\),
computing
\[
w=L^{-T}D^{-1}L^{-1}\left(Bv_j\right).
\]
If there are eigenvalues on both sides of the shift \(\sigma\), the matrix \(A-\sigma B\) is indefinite, and we cannot use a simple Cholesky factorization but have to use a symmetric indefinite factorization, e.g., MA47 of Duff and Reid [141]; see §10.3. This type of algorithm makes \(D\) block diagonal with one-by-one and two-by-two blocks. It is easy to determine the inertia of such a matrix, and this tells us exactly how many eigenvalues there are on each side of the shift \(\sigma\). We may determine the number of eigenvalues in an interval by performing two factorizations, one with \(\sigma\) at either end point. This is used in the software based on [162] and [206] to make sure that all eigenvalues in the interval have been computed.
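Both the factor-once/solve-many pattern and the inertia count can be sketched with `scipy.linalg.ldl` standing in for a sparse symmetric indefinite code such as MA47; the test pencil and all names are illustrative, and the eigenvalue count below a shift assumes \(B\) positive definite (Sylvester's law of inertia).

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular, eigh

rng = np.random.default_rng(0)

# Illustrative symmetric pencil (A, B), B positive definite.
n = 8
Q = rng.standard_normal((n, n))
A = (Q + Q.T) / 2
B = np.diag(rng.uniform(1.0, 2.0, n))
sigma = 0.4

# Factor A - sigma*B = L D L^T once, before the iteration starts.
lu, d, perm = ldl(A - sigma * B)
L = lu[perm]                        # row-permuted factor is lower triangular

def apply_operator(v):
    """w = (A - sigma B)^{-1} (B v), reusing the factors L and D."""
    b = (B @ v)[perm]
    z = solve_triangular(L, b, lower=True)
    y = np.linalg.solve(d, z)       # D is block diagonal (1x1 and 2x2 blocks)
    u = solve_triangular(L.T, y, lower=False)
    w = np.empty_like(u)
    w[perm] = u
    return w

v = rng.standard_normal(n)
w = apply_operator(v)
print(np.allclose((A - sigma * B) @ w, B @ v))   # factors solve correctly

# Inertia: D is congruent to A - sigma*B, and for B > 0 the number of its
# negative eigenvalues equals the number of pencil eigenvalues below sigma.
def count_eigs_below(shift):
    _, dblk, _ = ldl(A - shift * B)
    return int(np.sum(np.linalg.eigvalsh(dblk) < 0))

lo_end, hi_end = -0.5, 0.5          # two factorizations, one per end point
k = count_eigs_below(hi_end) - count_eigs_below(lo_end)
lam = eigh(A, B, eigvals_only=True)
print(k == int(np.sum((lam > lo_end) & (lam < hi_end))))
```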

We note that Algorithm 5.5 does not need any factorization of the matrix \(B\) and can be applied also when \(B\) is singular. This is not an uncommon case in practical situations, but special precautions have to be taken to prevent components of the null space of \(B\) from entering the computation; see [161] or [340].