Next: What Operations Can One Up: Singular Value Decomposition Previous: Direct Methods   Contents   Index

# Iterative Algorithms   J. Demmel

Now we discuss the pros and cons of the methods for Hermitian eigenproblems from Chapter 4 as applied to A^\*A, AA^\*, or H(A) = [0, A; A^\*, 0]. We note that the special structure of H(A) may be exploited to make Lanczos more efficient, as described in §6.3.3 below.

For simplicity, we let B denote any one of the above three Hermitian matrices in the discussion below. The basic tradeoffs from Table 4.1 remain true; the choice of method depends on the following:

• What operations can one afford to perform with the matrix and vectors? We measure cost in both time and space. Can one afford to solve (B - σI)x = b for x, where σ is a shift near an eigenvalue of B (called SI in Table 4.1, for shift-and-invert)? Or can one only solve it approximately with a preconditioned iterative method (called Prec)? Or can one only afford to multiply vectors by A and A^\* (called Dir)? Regarding storage and manipulation of vectors, the cheapest and least accurate option is local orthogonalization (called local); next is selective orthogonalization (called SO); and the most expensive and accurate option is full orthogonalization (called FO).

• Which singular values and vectors are desired? Does one want a few of the largest or smallest singular values of A, which are far away (isolated) from the others (called IE); or more of the largest or smallest singular values, even though they are tightly clustered together (called CE); or some interior singular values, in the middle of the spectrum (called M)?
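To make the SI option concrete, here is a minimal sketch (not from the original text) of shift-and-invert iteration applied to the Hermitian matrix H(A) = [0, A; A^\*, 0], whose eigenvalues are plus and minus the singular values of A. Repeatedly solving the shifted system pulls a random starting vector toward the eigenvector whose eigenvalue is nearest the shift τ, which is how interior singular values (the "M" case) can be reached. The function name is hypothetical, and a dense factorization is used purely for illustration; in practice one would factor a sparse matrix once and reuse it.

```python
import numpy as np

def shift_invert_singular_value(A, tau, steps=50):
    """Sketch of shift-and-invert ("SI") iteration on
    H(A) = [[0, A], [A^*, 0]], whose eigenvalues are +/- the
    singular values of A.  Solving (H(A) - tau*I) x = b repeatedly
    amplifies the eigenvector whose eigenvalue lies closest to the
    shift tau, so the iteration converges to the singular value
    nearest tau -- including interior ones."""
    m, n = A.shape
    H = np.block([[np.zeros((m, m)), A],
                  [A.conj().T, np.zeros((n, n))]])
    shifted = H - tau * np.eye(m + n)          # factored once in practice
    x = np.random.default_rng(0).standard_normal(m + n)
    for _ in range(steps):
        x = np.linalg.solve(shifted, x)        # one "solve" per step
        x /= np.linalg.norm(x)
    # Rayleigh quotient recovers the converged eigenvalue of H(A);
    # its absolute value is the singular value nearest tau.
    return abs(x @ (H @ x))
```

For example, with A = diag(1, 2, 5) and shift τ = 1.8, the iteration converges to the singular value 2, since 2 is the eigenvalue of H(A) closest to the shift.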

What differs from Chapter 4 is how the above decisions depend on the choice of B.
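The cheapest option above (Dir, with full orthogonalization FO) can be sketched as Golub-Kahan-Lanczos bidiagonalization, which touches A only through products with A and A^\*. This is an illustrative sketch, not the book's implementation; the function and parameter names are assumptions.

```python
import numpy as np

def lanczos_singular_values(matvec, rmatvec, m, n, k, rng=None):
    """Approximate the largest singular values of an m-by-n matrix A
    using k steps of Golub-Kahan-Lanczos bidiagonalization.  Only
    products with A (matvec) and A^* (rmatvec) are needed -- the
    "Dir" option.  Full reorthogonalization ("FO") is used, the most
    expensive but most accurate choice."""
    rng = np.random.default_rng(rng)
    V = np.zeros((n, k))                 # right Lanczos vectors
    U = np.zeros((m, k))                 # left Lanczos vectors
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    V[:, 0] = v
    u = matvec(v)
    alpha[0] = np.linalg.norm(u)
    u /= alpha[0]
    U[:, 0] = u
    for j in range(1, k):
        w = rmatvec(u) - alpha[j - 1] * v
        w -= V[:, :j] @ (V[:, :j].T @ w)   # full reorthogonalization
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[:, j] = v
        w = matvec(v) - beta[j - 1] * u
        w -= U[:, :j] @ (U[:, :j].T @ w)   # full reorthogonalization
        alpha[j] = np.linalg.norm(w)
        u = w / alpha[j]
        U[:, j] = u
    # Singular values of the k-by-k bidiagonal matrix B_k approximate
    # the largest singular values of A (exact when k = n).
    B = np.diag(alpha) + np.diag(beta, 1)
    return np.linalg.svd(B, compute_uv=False)
```

Replacing the full reorthogonalization lines with nothing (local) or with orthogonalization against selected converged vectors (SO) yields the cheaper, less accurate variants from Table 4.1.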


Susan Blackford 2000-11-20