
Standard Error Analysis

 

We illustrate standard error analysis with the simple example of evaluating the scalar function $y=f(z)$. Let the output of the subroutine that implements $f(z)$ be denoted $\mathrm{alg}(z)$; this includes the effects of roundoff. If $\mathrm{alg}(z) = f(z+\delta)$, where $\delta$ is small, we say that $\mathrm{alg}$ is a backward stable algorithm for $f$, or that the backward error $\delta$ is small. In other words, $\mathrm{alg}(z)$ is the exact value of $f$ at a slightly perturbed input $z+\delta$.

Suppose now that $f$ is a smooth function, so that we may approximate it near $z$ by a straight line: $f(z+\delta) \approx f(z) + f'(z)\cdot\delta$. Then we have the simple error estimate
\[ \mathrm{alg}(z) - f(z) \approx f'(z)\cdot\delta . \]
Thus, if $\delta$ is small and the derivative $f'(z)$ is moderate, the error $\mathrm{alg}(z) - f(z)$ will be small. This is often written in the similar form
\[ \frac{|\mathrm{alg}(z)-f(z)|}{|f(z)|} \lesssim \left| \frac{f'(z)\cdot z}{f(z)} \right| \cdot \left| \frac{\delta}{z} \right| = \kappa(f,z) \cdot \left| \frac{\delta}{z} \right| . \]
This approximately bounds the relative error $|\mathrm{alg}(z)-f(z)|/|f(z)|$ by the product of the condition number of $f$ at $z$, $\kappa(f,z) = |f'(z)\cdot z / f(z)|$, and the relative backward error $|\delta/z|$. Thus we get an error bound by multiplying a condition number and a backward error (or bounds for these quantities). We call a problem ill-conditioned if its condition number is large, and ill-posed if its condition number is infinite (or does not exist).
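As a concrete (hypothetical) illustration of multiplying a condition number by a backward error, the Python sketch below evaluates $f(z)=\log z$ at a slightly perturbed input, playing the role of $\mathrm{alg}(z)$, and compares the true relative error with the bound $\kappa(f,z)\cdot|\delta/z|$; the function and the values of $z$ and $\delta$ are chosen arbitrarily for the example and are not taken from the Users' Guide.

\begin{verbatim}
import math

# Hypothetical example: f(z) = log(z), whose condition number is
# kappa(f, z) = |f'(z) * z / f(z)| = |1 / log(z)|.
def f(z):
    return math.log(z)

def kappa(z):
    return abs(1.0 / math.log(z))      # |f'(z) * z / f(z)| with f'(z) = 1/z

z = 1.001                              # near 1, where log is ill-conditioned
delta = 1e-12                          # a small, known backward error
alg = f(z + delta)                     # stand-in for a backward stable alg(z)

true_rel_err = abs(alg - f(z)) / abs(f(z))
bound = kappa(z) * abs(delta / z)      # condition number * relative backward error

print(f"true relative error    = {true_rel_err:.3e}")
print(f"kappa * backward error = {bound:.3e}")
\end{verbatim}

Running this prints two numbers of about the same size ($\approx 10^{-9}$), even though the backward error itself is $10^{-12}$, because $\kappa(f,z)\approx 10^{3}$ at this $z$.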

If $f$ and $z$ are vector quantities, then $f'(z)$ is a matrix (the Jacobian). Hence, instead of using absolute values as before, we now measure $\delta$ by a vector norm $\|\delta\|$ and $f'(z)$ by a matrix norm $\|f'(z)\|$. The conventional (and coarsest) error analysis uses a norm such as the infinity norm. We therefore call this normwise backward stability. For example, a normwise stable method for solving a system of linear equations $Ax=b$ will produce a solution $\hat{x}$ satisfying $(A+E)\hat{x} = b+f$, where $\|E\|_\infty / \|A\|_\infty$ and $\|f\|_\infty / \|b\|_\infty$ are both small (close to machine epsilon). In this case the condition number is $\kappa_\infty(A) = \|A\|_\infty \, \|A^{-1}\|_\infty$ (see section 6.5).
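The same recipe can be checked numerically for $Ax=b$. The sketch below is only an illustration, with dense NumPy routines standing in for a ScaLAPACK solver: it computes the normwise relative backward error of a computed solution from its residual (perturbing $A$ only, so $f=0$) and multiplies it by $\kappa_\infty(A)$ to obtain an error bound.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_hat = np.linalg.solve(A, b)          # computed solution, plays the role of x-hat

# Normwise relative backward error: the smallest ||E||_inf / ||A||_inf such that
# (A + E) x_hat = b (with f = 0), which equals
# ||b - A x_hat||_inf / (||A||_inf * ||x_hat||_inf).
r = b - A @ x_hat
backward_err = np.linalg.norm(r, np.inf) / (
    np.linalg.norm(A, np.inf) * np.linalg.norm(x_hat, np.inf))

# Condition number kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf (computed exactly here).
kappa_inf = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

print("backward error            :", backward_err)   # O(machine epsilon)
print("kappa_inf(A)              :", kappa_inf)
print("error bound (kappa * berr):", kappa_inf * backward_err)
\end{verbatim}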

Almost all of the algorithms in ScaLAPACK (as well as LAPACK) are stable in the sense just described: when applied to a matrix $A$ they produce the exact result for a slightly different matrix $A+E$, where $\|E\|_\infty / \|A\|_\infty$ is of order $\epsilon$.

Condition numbers may be expensive to compute exactly. For example, it costs about $\frac{2}{3} n^3$ operations to solve $Ax=b$ for a general matrix $A$, and computing $\kappa_\infty(A)$ exactly costs an additional $\frac{4}{3} n^3$ operations, or twice as much. But $\kappa_\infty(A)$ can be estimated in only $O(n^2)$ operations beyond those $\frac{2}{3} n^3$ necessary for the solution, a tiny extra cost. Therefore, most of ScaLAPACK's condition numbers and error bounds are based on estimated condition numbers, using the method of [72, 80, 81].
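To show the idea behind such cheap estimates, here is a simplified 1-norm estimator in the spirit of Hager's algorithm (an illustrative sketch, not ScaLAPACK's actual routine): given an LU factorization that has already been computed for the solve, it estimates $\|A^{-1}\|_1$ using only a handful of $O(n^2)$ triangular solves.

\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def estimate_inv_norm1(lu_piv, n, max_iter=5):
    """Estimate ||A^{-1}||_1 from an LU factorization of A (Hager-style sketch).

    Each iteration costs two triangular solves, i.e. O(n^2) work.
    """
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(max_iter):
        y = lu_solve(lu_piv, x)              # y = A^{-1} x
        est = np.linalg.norm(y, 1)
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = lu_solve(lu_piv, xi, trans=1)    # z = A^{-T} xi
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:            # local maximum reached: stop
            break
        x = np.zeros(n)
        x[j] = 1.0                           # move to the most promising unit vector
    return est

rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n))

lu_piv = lu_factor(A)                        # O(n^3), already paid for by the solve
kappa1_est = np.linalg.norm(A, 1) * estimate_inv_norm1(lu_piv, n)
kappa1_true = np.linalg.norm(A, 1) * np.linalg.norm(np.linalg.inv(A), 1)

print("estimated kappa_1(A):", kappa1_est)
print("exact     kappa_1(A):", kappa1_true)
\end{verbatim}

The estimator returns a lower bound on $\|A^{-1}\|_1$ that is usually within a small factor of the true value, at negligible cost compared with the factorization itself.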

The price one pays for using an estimated rather than an exact condition number is occasional (but very rare) underestimates of the true error; years of experience attest to the reliability of our estimators, although examples where they badly underestimate the error can be constructed [82]. Note that once a condition estimate is large enough (usually $\geq 1/\epsilon$), it confirms that the computed answer may be completely inaccurate, and so the exact magnitude of the condition estimate conveys little information.


