ScaLAPACK is a library of high performance linear algebra routines for distributed memory MIMD machines. It is a continuation of the LAPACK project, which has designed and produced an efficient linear algebra library for workstations, vector supercomputers and shared memory parallel computers [1]. Both libraries contain routines for the solution of systems of linear equations, linear least squares problems and eigenvalue problems. The goals of the LAPACK project, which continue into the ScaLAPACK project, are: efficiency, so that the computationally intensive routines execute as fast as possible; scalability, as the problem size and number of processors grow; reliability, including the return of error bounds; portability across machines; flexibility, so that users may construct new routines from well designed components; and ease of use. Towards this last goal the ScaLAPACK software has been designed to look as much like the LAPACK software as possible.

Many of these goals have been attained by developing and promoting standards, especially specifications for basic computational and communication routines. Thus LAPACK relies on the BLAS [26, 15, 14], particularly the Level 2 and 3 BLAS, for computational efficiency, and ScaLAPACK [32] relies on the BLACS [18] for efficiency of communication and uses a set of parallel BLAS, the PBLAS [9], which themselves call the BLAS and the BLACS. LAPACK and ScaLAPACK will run on any machine for which the BLAS and the BLACS are available. A PVM [20] version of the BLACS has been available for some time, and the portability of the BLACS has recently been further increased by the development of a version that uses MPI [33].

The first part of this paper presents the design of ScaLAPACK. After a brief discussion of the BLAS and LAPACK, we describe the block cyclic data layout, the BLACS, the PBLAS, and the algorithms used. We also outline the difficulties encountered in producing correct code for networks of heterogeneous processors; difficulties that we believe are little recognized by other practitioners.
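The block cyclic data layout can be illustrated with a small sketch. The helper functions below show a minimal one-dimensional version of the mapping from a global index to an owning process and a local storage position; ScaLAPACK actually applies this scheme independently in both dimensions over a two-dimensional process grid, and the function names here are ours, not part of the library's API.

```python
# Sketch of a 1-D block-cyclic distribution: consecutive blocks of nb
# global indices are dealt out to p processes in round-robin order.
# Illustrative helpers only -- not the ScaLAPACK interface.

def owning_process(i, nb, p):
    """Process that owns global index i, for block size nb over p processes."""
    return (i // nb) % p

def local_index(i, nb, p):
    """Position of global index i within its owning process's local storage."""
    return ((i // nb) // p) * nb + i % nb
```

For example, with block size nb = 2 over p = 2 processes, global indices 0..7 land on processes 0, 0, 1, 1, 0, 0, 1, 1, so each process holds an interleaved subset of the data; this balances both storage and the shifting computational load of algorithms such as LU factorization.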

The paper then discusses the performance of ScaLAPACK, presenting extensive results on various platforms. One of our goals is to model and predict the performance of each routine as a function of a few problem and machine parameters. One interesting result is that for some algorithms, speed is *not* a monotonically increasing function of the number of processors; in other words, it can sometimes be beneficial to let some processors remain idle. Finally, we look at possible future directions and give some concluding remarks.
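The non-monotonic behaviour of speed can be seen in a toy cost model. The sketch below uses the general three-term form common to models of distributed dense factorizations (a floating-point term, a communication-volume term, and a message-latency term), but the coefficients are invented for illustration and the sqrt(p) growth of the latency term is our simplifying assumption, not the model fitted in this paper.

```python
# Toy performance model for a distributed dense matrix computation:
#   T(n, p) = a*n^3/p          (arithmetic, perfectly parallel)
#           + b*n^2/sqrt(p)    (communication volume)
#           + c*n*sqrt(p)      (message latency; assumed to grow with p)
# Coefficients a, b, c are made up purely for illustration.

import math

def predicted_time(n, p, a=1e-9, b=1e-7, c=1e-4):
    return a * n**3 / p + b * n**2 / math.sqrt(p) + c * n * math.sqrt(p)

n = 1000
times = {p: predicted_time(n, p) for p in (1, 4, 16, 64, 256)}
best_p = min(times, key=times.get)
```

For this choice of coefficients the predicted time at n = 1000 decreases from p = 1 to p = 16 but increases again at p = 64 and p = 256: beyond some processor count the growing communication overhead outweighs the shrinking arithmetic share, so leaving processors idle is faster.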

Thu Jul 25 15:38:00 EDT 1996