
Introduction

ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory MIMD machines. It is a continuation of the LAPACK project, which designed and produced an efficient linear algebra library for workstations, vector supercomputers, and shared-memory parallel computers [3]. Both libraries contain routines for the solution of systems of linear equations, linear least squares problems, and eigenvalue problems. The goals of the LAPACK project, which continue into the ScaLAPACK project, are: efficiency, so that the computationally intensive routines execute as fast as possible; scalability, as the problem size and the number of processors grow; reliability, including the return of error bounds; portability across machines; flexibility, so that users may construct new routines from well-designed components; and ease of use. Toward this last goal, the ScaLAPACK software has been designed to look as much like the LAPACK software as possible.

Many of these goals have been attained by developing and promoting standards, especially specifications for basic computational and communication routines. Thus, LAPACK relies on the Basic Linear Algebra Subprograms (BLAS) [29, 16, 15], particularly the Level 2 and Level 3 BLAS, for computational efficiency; ScaLAPACK [5] relies on the Basic Linear Algebra Communication Subprograms (BLACS) [19] for efficiency of communication, and uses a set of parallel BLAS, the PBLAS [9], which themselves call the BLAS and the BLACS. LAPACK and ScaLAPACK will run on any machine for which the BLAS and the BLACS are available. A PVM [22] version of the BLACS has been available for some time, and the portability of the BLACS has recently been further increased by the development of a version that uses MPI [32].
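To make the layering concrete, the sketch below shows what a call to a Level 3 BLAS routine (DGEMM, general matrix-matrix multiply, C &lt;- alpha*A*B + beta*C) looks like. This is an illustration only, not part of LAPACK or ScaLAPACK itself; it assumes SciPy is installed and uses SciPy's bindings to the underlying Fortran BLAS.

```python
import numpy as np
from scipy.linalg.blas import dgemm  # binding to the Level 3 BLAS routine DGEMM

# Small example operands (in practice, Level 3 BLAS pays off for large matrices,
# where the O(n^3) flops amortize the O(n^2) memory traffic).
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# DGEMM computes C = alpha * A * B (+ beta * C when an initial C is supplied).
c = dgemm(alpha=1.0, a=a, b=b)
print(c)
```

LAPACK casts its computations in terms of such Level 3 BLAS calls wherever possible, because a vendor-tuned DGEMM exploits the memory hierarchy far better than unblocked code; the PBLAS extend the same calling style to distributed matrices.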



Jack Dongarra
Sat Feb 1 08:18:10 EST 1997