

Numerical Stability and Conditioning

Here we elaborate on how condition numbers are used to obtain error bounds. For simplicity, suppose we are just trying to evaluate a scalar function $f(x)$, using an algorithm $alg(x)$, and we want to bound the error $alg(x)-f(x)$. Further, suppose that we can show that $alg(x) = f(x+e)$ for some ``small'' $e$. If the algorithm has this property, namely that it computes exactly the right answer $f(x+e)$ for a slightly perturbed argument $x+e$, then the algorithm is called numerically stable, or just stable for short. In this case we can estimate the error as follows:

\begin{displaymath}
{\rm error} = alg(x) - f(x) = f(x+e) - f(x) \approx f'(x) \cdot e,
\end{displaymath}

where we have assumed that $f(x)$ is differentiable. In this simple case we can therefore bound, to first order, $\vert{\rm error}\vert \leq \vert f'(x)\vert \cdot \vert e\vert$, where $\vert f'(x)\vert$ is the condition number and $e$ is the so-called backward error. Note that $f'(x)$ depends only on the function $f()$ and its argument $x$, whereas $e$ depends on the algorithm as well.
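The scalar bound above can be checked numerically. The following minimal sketch (the choice of $f(x)=e^x$, the argument $x=2$, and the backward error $e=10^{-9}$ are illustrative, not from the text) evaluates $f$ at a perturbed argument, as a stable algorithm would, and verifies that the resulting error is captured by the first-order bound $\vert f'(x)\vert \cdot \vert e\vert$:

```python
import math

def f(x):
    return math.exp(x)      # for this f, the condition factor is f'(x) = exp(x)

x = 2.0
e = 1e-9                    # hypothetical small backward error
alg_result = f(x + e)       # a stable algorithm: exact answer for perturbed input

error = alg_result - f(x)
bound = abs(math.exp(x)) * abs(e)   # |f'(x)| * |e|

# The first-order bound holds up to O(e^2) and rounding terms,
# hence the small slack factor.
assert abs(error) <= 1.1 * bound
```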

The same approach applies to getting error bounds for eigenproblems. In this case $x$ would be a matrix, $x+e$ would be a perturbed matrix, and $f(x+e)$ could be an eigenvalue, a set of eigenvalues, a set of eigenvalue/eigenvector pairs, or other quantities. Furthermore $\vert e\vert$ would be replaced by a norm of the matrix $e$.

So to get an error bound, we need both a condition number and a bound on the backward error $e$. There are two ways to get a bound on $e$. The first way is to prove that the algorithm $alg(x)$ always has a small backward error $e$. For the transformation methods discussed in this book, this is the case, with the norm of the backward error matrix $e$ roughly bounded by machine precision $\epsilon_M$ times the norm of the input matrix $x$. This kind of guaranteed stability does not hold for iterative methods.
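The guaranteed backward stability of a transformation method can be observed directly: after computing an eigendecomposition of a Hermitian matrix, reconstructing the matrix from the computed eigenvalues and eigenvectors recovers a nearby matrix $A+E$ with $\Vert E\Vert$ a modest multiple of $\epsilon_M \Vert A\Vert$. The sketch below uses NumPy's dense symmetric eigensolver; the matrix size, random seed, and slack factor $10n$ are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # make A symmetric (Hermitian)

w, Q = np.linalg.eigh(A)                 # dense (transformation-method) eigensolver

# A + E has Q diag(w) Q^T as its exact eigendecomposition, so the backward
# error norm is ||A - Q diag(w) Q^T||, expected to be O(eps_M * ||A||).
E_norm = np.linalg.norm(A - Q @ np.diag(w) @ Q.T, 2)
eps_M = np.finfo(float).eps
bound = 10 * n * eps_M * np.linalg.norm(A, 2)
```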

For iterative methods, the story is more complicated. Here is a sketch, with the details left to Chapters 4 through 9. Suppose $\mu$ and unit vector $x$ form an approximate eigenvalue and eigenvector pair, that is, the residual vector $Ax - \mu x \equiv r$ is small. Then it is easy to show that a smallest matrix $E$ such that $(A+E)x = \mu x$, i.e., such that $\mu$ and $x$ are an exact eigenvalue/eigenvector pair of $A+E$, is $E = -r x^*$, with norm $\Vert E\Vert _2 = \Vert r\Vert _2$. Thus, to bound the backward error, all we have to do is compute the residual 2-norm $\Vert r\Vert _2 = \Vert Ax - \mu x\Vert _2$; bounding the backward error for a single eigenvalue and eigenvector pair is therefore very easy. The story is more complicated for a set of eigenvalue/eigenvector pairs $A x_i \approx \mu_i x_i$, $i=1,\ldots,m$. Even if each $A x_i - \mu_i x_i$ is small, that does not mean that the $\mu_i,x_i$ are approximations of $m$ different eigenpairs (some may approximate the same true eigenpair), or that the eigenvectors are orthogonal when they are supposed to be (when $A$ is Hermitian), or any other desirable property. Some iterative methods are more effective than others at guaranteeing such properties (for example, see the implicitly restarted Lanczos and Arnoldi methods described in §4.5 and §7.6).
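The construction $E = -r x^*$ is easy to verify numerically: with $x$ a unit vector, $(A+E)x = Ax - r(x^*x) = Ax - r = \mu x$ exactly, and $\Vert E\Vert_2 = \Vert r\Vert_2$. The sketch below builds a hypothetical approximate eigenpair by perturbing a true one (the matrix size, seed, perturbation, and tolerances are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # symmetric test matrix

# Hypothetical approximate eigenpair: a true pair with the eigenvalue perturbed.
w, V = np.linalg.eigh(A)
mu = w[0] + 1e-8
x = V[:, 0] / np.linalg.norm(V[:, 0])    # unit eigenvector

r = A @ x - mu * x                       # residual vector
E = -np.outer(r, x)                      # smallest backward perturbation E = -r x^*

# (A+E)x = mu*x holds exactly (up to rounding), and ||E||_2 = ||r||_2.
exact_residual = np.linalg.norm((A + E) @ x - mu * x)
norm_gap = abs(np.linalg.norm(E, 2) - np.linalg.norm(r))
```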

The reader is referred to the textbooks by Golub and Van Loan [198] and Demmel [114] for an introduction to numerical stability. The books by Higham [228] and Stewart and Sun [425] are excellent sources for the advanced study of numerical stability and conditioning for solving linear systems of equations and eigenvalue problems, respectively.


Susan Blackford 2000-11-20