In this chapter we explain our overall approach to obtaining error bounds and provide enough information to use the software. The comments at the beginning of the individual routines should be consulted for more details. It is beyond the scope of this chapter to justify all the bounds we present. Instead, we give references to the literature. For example, standard material on error analysis can be found in [71, 114, 84, 38].

To make this chapter easy to read, we have labeled parts
not essential for a first reading as **Further Details**. The sections
not labeled as **Further Details** should provide all the information
needed to understand and use the main error bounds computed by
ScaLAPACK. The **Further Details** sections provide mathematical background,
references, and tighter but more expensive error bounds, and may be read later.

Since ScaLAPACK uses the same overall algorithmic approach as LAPACK, its error bounds are essentially the same as those for LAPACK. Therefore, this chapter is largely analogous to Chapter 4 of the LAPACK Users' Guide [3]. Significant differences between LAPACK and ScaLAPACK include the following:

- Section 6.1 discusses how machine constants in a heterogeneous network of machines with differing floating-point arithmetics must be redefined. ScaLAPACK can also exploit arithmetic with ∞ and NaN, which is available in IEEE standard floating-point arithmetic.
- Section 6.2 discusses reliability problems that can arise on heterogeneous networks of machines and how to guarantee reliability on a homogeneous network.
- Section 6.5 discusses some routines that do Gaussian elimination on band matrices with the pivot order chosen for parallelism rather than numerical stability. These routines are numerically stable only when applied to matrices that do not require partial pivoting for stability (such as diagonally dominant and symmetric positive definite matrices).
- Section 6.7 discusses PxSYEVX. In contrast to its LAPACK analogue, xSYEVX, PxSYEVX allows the user to trade off orthogonality of computed eigenvectors and runtime.
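The point about pivot order above can be illustrated in a few lines: Gaussian elimination without any pivoting is stable for diagonally dominant matrices, but element growth can destroy accuracy on a matrix whose leading pivot is tiny. The following is a minimal NumPy sketch of this phenomenon (an illustration only, not the ScaLAPACK band routines themselves; `lu_nopivot` is a hypothetical helper written for this example):

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization with no pivoting at all (for illustration only)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]          # multipliers
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return L, np.triu(A)

# Diagonally dominant matrix: no pivoting is needed, and the
# relative residual ||A - L*U|| / ||A|| is tiny.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
L, U = lu_nopivot(A)
print(np.linalg.norm(A - L @ U) / np.linalg.norm(A))   # near machine epsilon

# Tiny leading pivot: without row interchanges the multipliers are huge,
# and the factorization loses all accuracy (large relative residual).
B = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
L2, U2 = lu_nopivot(B)
print(np.linalg.norm(B - L2 @ U2) / np.linalg.norm(B))  # order 1, not small
```

Partial pivoting would exchange the rows of B before eliminating, keeping the multipliers bounded by 1; the parallel band routines of Section 6.5 skip this step, which is why they are restricted to the matrix classes named above.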

In Section 6.1 we discuss the sources of numerical error, in particular roundoff error. We also briefly discuss IEEE arithmetic. Section 6.2 discusses the new sources of numerical error specific to parallel libraries, and the restrictions they impose on the reliable use of ScaLAPACK. Section 6.3 discusses how to measure errors, as well as some standard notation. Section 6.4 discusses further details of how error bounds are derived. Sections 6.5 through 6.9 present error bounds for linear equations, linear least squares problems, the symmetric eigenproblem, the singular value decomposition, and the generalized symmetric definite eigenproblem, respectively.
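As a concrete preview of the error measures of Section 6.3, the two standard quantities for a linear system Ax = b are the normwise relative error of a computed solution and its normwise backward error. A short NumPy sketch of these textbook definitions (not a ScaLAPACK routine; the helper names here are chosen for this example):

```python
import numpy as np

def relative_error(x_hat, x_true):
    """Normwise relative error ||x_hat - x_true|| / ||x_true||."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

def backward_error(A, x_hat, b):
    """Normwise backward error ||b - A x_hat|| / (||A|| ||x_hat|| + ||b||):
    the size of the smallest perturbation of A and b that x_hat solves exactly."""
    r = b - A @ x_hat
    return np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x_hat)
                                + np.linalg.norm(b))

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])
x_hat = np.linalg.solve(A, b)     # computed solution
x_true = np.array([1.0, 2.0])     # exact solution of this small system

print(relative_error(x_hat, x_true))   # near machine epsilon
print(backward_error(A, x_hat, b))     # near machine epsilon
```

A stable solver guarantees a small backward error; the forward (relative) error is then bounded by roughly the backward error times the condition number of A, which is the pattern behind most of the bounds in Sections 6.5 through 6.9.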

- Sources of Error in Numerical Calculations
- New Sources of Error in Parallel Numerical Computations
- How to Measure Errors
- Further Details: How Error Bounds Are Derived
- Error Bounds for Linear Equation Solving
- Error Bounds for Linear Least Squares Problems
- Error Bounds for the Symmetric Eigenproblem
- Error Bounds for the Singular Value Decomposition
- Error Bounds for the Generalized Symmetric Definite Eigenproblem

Tue May 13 09:21:01 EDT 1997