Here we elaborate on how condition numbers are used
to get error bounds. For simplicity, suppose we
are just trying to evaluate a scalar function $f(x)$,
using an algorithm $\mathrm{alg}(x)$, and we want to bound
the error $\mathrm{alg}(x) - f(x)$. Further, suppose that we can show
that $\mathrm{alg}(x) = f(x + \delta)$ for some ``small'' $\delta$.
If the algorithm enjoys this property, that it
gets exactly the right answer $f(x + \delta)$ for a
slightly perturbed argument $x + \delta$, then the
algorithm is called numerically stable,
or just stable for short. In this case
we can estimate the error as follows:
$$\mathrm{alg}(x) - f(x) = f(x + \delta) - f(x) \approx f'(x)\,\delta ,$$
assuming $f$ is differentiable.
In this simple case we can therefore bound
$$|\mathrm{alg}(x) - f(x)| \lesssim |f'(x)| \cdot |\delta| ,$$
where $|f'(x)|$ is the condition number
and $|\delta|$ is the so-called backward error.
Note that $f'(x)$ depends
only on the function $f$ and its argument $x$, whereas $\delta$ depends on
the algorithm as well.
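To make the bound concrete, here is a minimal numerical sketch (our
illustration, not from the text; the choice $f(x) = e^x$ and the
perturbation size are arbitrary). It simulates a stable algorithm that
returns the exact value of $f$ at a perturbed argument $x + \delta$ and
compares the actual error with the first-order bound
$|f'(x)| \cdot |\delta|$:

\begin{verbatim}
# Hypothetical example: f(x) = exp(x), so the condition number is
# |f'(x)| = exp(x).  The "algorithm" is exact for a perturbed input.
import numpy as np

x = 2.0
delta = 1e-8                          # the "small" backward error

f = np.exp

def alg(t):
    return np.exp(t + delta)          # stable: exact answer, perturbed input

actual = abs(alg(x) - f(x))
bound = abs(np.exp(x)) * abs(delta)   # condition number times backward error

print(f"actual error      = {actual:.3e}")   # ~7.39e-08
print(f"first-order bound = {bound:.3e}")    # ~7.39e-08
\end{verbatim}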
The same approach applies to getting error bounds for eigenproblems.
In this case $x$ would be a matrix $A$, $x + \delta$ would be a perturbed
matrix $A + E$, and $f$ could be an eigenvalue, a set of eigenvalues,
a set of eigenvalue/eigenvector pairs, or other quantities.
Furthermore $|\delta|$ would be replaced by a norm of the matrix $E$.
So to get an error bound, we need both a condition number and
a bound on the backward error $E$. There are two ways to get
a bound on $E$. The first way is to prove that the algorithm
always has a small backward error $E$. For the transformation
methods discussed in this book, this is the case, with the norm of the
backward error matrix $E$ roughly bounded by machine precision
$\epsilon$ times the norm of the input matrix $A$. This kind
of guaranteed stability does not hold for iterative methods.
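As a hedged illustration of this guarantee (our own sketch, not from the
book), one can check the backward error of a dense eigensolver
numerically; numpy.linalg.eigh calls a dense LAPACK symmetric
eigensolver, a transformation method in the sense above, so the norm of
the reconstruction residual $V \Lambda V^* - A$ should be a modest
multiple of machine precision times $\|A\|$:

\begin{verbatim}
# Treat E = V diag(lam) V^T - A as the backward error matrix: A + E has
# the computed (lam, V) as an exact eigendecomposition (up to the tiny
# departure of V from orthogonality).
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # random real symmetric matrix

lam, V = np.linalg.eigh(A)             # transformation-method eigensolver
E = V @ np.diag(lam) @ V.T - A

print(np.linalg.norm(E, 2) / np.linalg.norm(A, 2))  # small multiple of eps
print(np.finfo(float).eps)                          # 2.22e-16, for reference
\end{verbatim}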
For iterative methods, the story is more complicated. Here is
a sketch, with the details left to Chapters 4 through
9. Suppose $\tilde{\lambda}$ and unit vector $\tilde{x}$ form an approximate
eigenvalue and eigenvector pair, that is, that the residual vector
$r = A\tilde{x} - \tilde{\lambda}\tilde{x}$ is small.
Then it is easy to show that a smallest matrix $E$
such that $(A + E)\tilde{x} = \tilde{\lambda}\tilde{x}$, i.e., where
$\tilde{\lambda}$ and $\tilde{x}$ are an
exact eigenvalue/eigenvector pair of $A + E$, is
$E = -r\tilde{x}^*$, with norm $\|E\|_2 = \|r\|_2$.
So to bound the backward error all we have to do
is compute the residual 2-norm
$\|r\|_2 = \|A\tilde{x} - \tilde{\lambda}\tilde{x}\|_2$. So bounding
the backward error for a single eigenvalue and eigenvector
pair is very easy, as the sketch below illustrates.
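Here is a minimal sketch of that computation (our illustration, using a
random test matrix and a deliberately perturbed eigenpair): it builds
the rank-one perturbation $E = -r\tilde{x}^*$ and verifies both
$(A + E)\tilde{x} = \tilde{\lambda}\tilde{x}$ and $\|E\|_2 = \|r\|_2$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # real symmetric test matrix

# a crude approximate eigenpair: perturb an exact eigenvector slightly
w, V = np.linalg.eigh(A)
x = V[:, 0] + 1e-6 * rng.standard_normal(n)
x /= np.linalg.norm(x)                 # unit approximate eigenvector
lam = x @ A @ x                        # Rayleigh quotient as lambda tilde

r = A @ x - lam * x                    # residual vector
E = -np.outer(r, x)                    # smallest backward perturbation

print(np.linalg.norm((A + E) @ x - lam * x))   # ~0: exact pair of A + E
print(np.linalg.norm(E, 2), np.linalg.norm(r)) # equal: ||E||_2 = ||r||_2
\end{verbatim}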
The story is more complicated for a set of
eigenvalue/eigenvector pairs $(\tilde{\lambda}_i, \tilde{x}_i)$,
$i = 1, \ldots, m$.
Even if each residual norm
$\|A\tilde{x}_i - \tilde{\lambda}_i\tilde{x}_i\|_2$ is small, that does not mean
that the $(\tilde{\lambda}_i, \tilde{x}_i)$ are approximations of $m$ different eigenpairs
(some may approximate the same true eigenpair), or that the
eigenvectors are orthogonal when they are supposed to be (when
$A$ is Hermitian), or any other desirable property. Some
iterative methods are more effective than others at
guaranteeing such properties (for examples, see the implicitly
restarted Lanczos and Arnoldi methods described in §4.5 and
§7.6).
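A small cautionary example (ours, with hand-picked vectors) shows the
failure mode just described: two approximate eigenpairs of a Hermitian
matrix, each with a tiny residual and hence a tiny backward error, that
nevertheless approximate the same true eigenpair and are far from
orthogonal:

\begin{verbatim}
import numpy as np

A = np.diag([1.0, 2.0, 3.0])     # Hermitian; e1 is the eigenvector for 1

x1 = np.array([1.0, 1e-6, 0.0]); x1 /= np.linalg.norm(x1)
x2 = np.array([1.0, 0.0, 1e-6]); x2 /= np.linalg.norm(x2)
lam = 1.0                        # both pairs claim the same eigenvalue

print(np.linalg.norm(A @ x1 - lam * x1))  # ~1e-06: small residual
print(np.linalg.norm(A @ x2 - lam * x2))  # ~2e-06: small residual
print(abs(x1 @ x2))                       # ~1: nearly parallel vectors
\end{verbatim}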
The reader is referred to the textbooks by Golub and Van Loan [198] and Demmel [114] for an introductory study of numerical stability. The books by Higham [228] and Stewart and Sun [425] are excellent sources for the advanced study of numerical stability and conditioning for solving linear systems of equations and eigenvalue problems, respectively.