ScaLAPACK is a software package provided by Univ. of Tennessee; Univ. of California, Berkeley; Univ. of Colorado Denver; and NAG Ltd.

Presentation

ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed memory machines. ScaLAPACK solves dense and banded linear systems, least squares problems, eigenvalue problems, and singular value problems. The key ideas incorporated into ScaLAPACK include the use of

  1. a block cyclic data distribution for dense matrices and a block data distribution for banded matrices, parametrizable at runtime (see the sketch after this list);

  2. block-partitioned algorithms to ensure high levels of data reuse;

  3. well-designed low-level modular components that simplify the task of parallelizing the high-level routines by making their source code the same as in the sequential case.

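To make the block cyclic distribution in item 1 concrete, here is a minimal, illustrative sketch (not a ScaLAPACK routine). It reproduces, in one dimension, the mapping computed by ScaLAPACK's tool routines INDXG2P and INDXG2L: for each global index it reports the owning process and the local index on that process, assuming block size NB, NPROCS processes, and the first block placed on process 0. ScaLAPACK applies this mapping independently to rows and columns over a two-dimensional process grid; the sizes used below are arbitrary.

program block_cyclic_demo
   ! Illustrative sketch: the 1-D block-cyclic mapping that ScaLAPACK
   ! applies independently to the rows and columns of a dense matrix.
   implicit none
   integer, parameter :: n      = 8   ! global size in this dimension
   integer, parameter :: nb     = 2   ! distribution block size
   integer, parameter :: nprocs = 2   ! processes in this grid dimension
   integer :: i, owner, local
   do i = 1, n
      ! Owning process of global index i (same formula as INDXG2P,
      ! with the first block on process 0).
      owner = mod( (i-1)/nb, nprocs )
      ! Local index of i within that process (same formula as INDXG2L).
      local = nb*((i-1)/(nb*nprocs)) + mod(i-1, nb) + 1
      write(*,'(a,i2,a,i2,a,i2)') 'global ', i, ' -> process ', owner, ', local ', local
   end do
end program block_cyclic_demo

With these sizes, global entries 1-2 and 5-6 land on process 0 (as local 1-4) while entries 3-4 and 7-8 land on process 1, which is exactly the interleaving that keeps the work balanced as a factorization proceeds.
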
The goals of the ScaLAPACK project are the same as those of LAPACK, namely:

  • efficiency (to run as fast as possible),

  • scalability (as the problem size and number of processors grow),

  • reliability (including error bounds),

  • portability (across all important parallel machines),

  • flexibility (so users can construct new routines from well-designed parts),

  • and ease of use (by making the interface to LAPACK and ScaLAPACK look as similar as possible).

Many of these goals, particularly portability, are aided by developing and promoting standards, especially for low-level communication and computation routines. We have been successful in attaining these goals, limiting most machine dependencies to three standard libraries: the BLAS (Basic Linear Algebra Subprograms), LAPACK, and the BLACS (Basic Linear Algebra Communication Subprograms). LAPACK will run on any machine where the BLAS are available, and ScaLAPACK will run on any machine where the BLAS, LAPACK, and the BLACS are available.

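As a concrete (and hedged) illustration of how the BLACS enter a ScaLAPACK program, the sketch below sets up and tears down a process grid using the standard BLACS calls; every ScaLAPACK code performs these steps before and after calling the driver routines. The 2 x 2 grid shape and the program name are arbitrary choices for this example.

program blacs_grid_demo
   ! Minimal sketch: initialize a BLACS process grid, the step every
   ! ScaLAPACK program performs before distributing matrices and
   ! calling drivers, then release it again.
   ! Run with at least 4 processes so that a 2 x 2 grid can be formed.
   implicit none
   integer :: iam, nprocs, ictxt, nprow, npcol, myrow, mycol

   call blacs_pinfo( iam, nprocs )            ! my process id and process count
   nprow = 2
   npcol = 2
   call blacs_get( -1, 0, ictxt )             ! obtain the default system context
   call blacs_gridinit( ictxt, 'Row-major', nprow, npcol )
   call blacs_gridinfo( ictxt, nprow, npcol, myrow, mycol )

   if ( myrow >= 0 .and. mycol >= 0 ) then
      ! ... distribute matrices and call ScaLAPACK drivers (e.g. PDGESV) here ...
      call blacs_gridexit( ictxt )            ! release the grid
   end if
   call blacs_exit( 0 )                       ! shut down the BLACS

end program blacs_grid_demo

On many systems such a program is built by linking ScaLAPACK ahead of LAPACK and the BLAS with an MPI compiler wrapper (for example, something like mpif90 demo.f90 -lscalapack -llapack -lblas), though the exact library names and link order depend on the installation.
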
The library is currently written in Fortran (with the exception of a few symmetric eigenproblem auxiliary routines written in C). The name ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK. The most recent version of ScaLAPACK is 2.2.0, released on February 2, 2022.

Acknowledgments

Since 2010, this material has been based upon work supported by the National Science Foundation under Grant No. NSF-OCI-1032861. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF). Until 2006, this material was based upon work supported by the National Science Foundation under Grant Nos. ASC-9313958 and NSF-0444486 and by DOE Grant No. DE-FG03-94ER25219. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF) or the Department of Energy (DOE).

Software

Licensing

ScaLAPACK is a freely available software package. It is available from netlib via anonymous ftp and the World Wide Web at http://www.netlib.org/scalapack. Thus, it can be included in commercial software packages (and has been). We only ask that proper credit be given to the authors.

The license used for the software is the modified BSD license.

Like all software, it is copyrighted. It is not trademarked, but we do ask the following:

  • If you modify the source for these routines, we ask that you change the name of the routine and comment the changes made to the original.

  • We will gladly answer any questions regarding the software. If a modification is made, however, it is the responsibility of the person who modified the routine to provide support.

ScaLAPACK, version 2.2.0

ScaLAPACK, version 2.1.0

ScaLAPACK, version 2.0.2

ScaLAPACK, version 2.0.1

ScaLAPACK, version 2.0.0

Errata

ScaLAPACK Installer [for Linux]

Python-based installer for ScaLAPACK. It downloads, configures, compiles, and installs all libraries needed by ScaLAPACK (reference BLAS, LAPACK, BLACS, and ScaLAPACK).

ScaLAPACK for Windows

GitHub

The ScaLAPACK Git repository is available for obtaining the latest bug fixes or proposing fixes (pull requests):

https://github.com/Reference-ScaLAPACK/scalapack/

Support

Contributors

ScaLAPACK is a community-wide effort that relies on many contributors.

If you wish to contribute, please have a look at the LAPACK Program Style. This document was written to facilitate contributions to LAPACK/ScaLAPACK by documenting their design and implementation guidelines. You can submit code directly on GitHub via pull requests: ScaLAPACK on GitHub.

LAPACK/ScaLAPACK Project Software Grant and Corporate Contributor License Agreement (“Agreement”) [Download]

Contributions are always welcome and can be sent to the ScaLAPACK team.

Documentation

Browse ScaLAPACK routines with the on-line documentation browser.

Explore the ScaLAPACK code: here you will be able to browse through the many ScaLAPACK functions.

Release Notes

The ScaLAPACK Release Notes contain the history of the modifications made to the ScaLAPACK library from one release to the next.

Improvements

ScaLAPACK is a currently active project; we strive to bring new improvements and new algorithms on a regular basis.

If you feel some functionality or algorithms are missing, please contribute to our wishlist by emailing the ScaLAPACK team.

FAQ

Consult the ScaLAPACK Frequently Asked Questions.

If you feel some questions are missing, please contribute to our FAQ by emailing the ScaLAPACK team.

The LAPACK Users' Forum is also a good source to find answers.

LAWNS: LAPACK/ScaLAPACK Working Notes

Release History

PLASMA

The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project aims to address the critical and highly disruptive situation facing the linear algebra and high-performance computing community due to the introduction of multi-core architectures.

PLASMA’s ultimate goal is to create software frameworks that enable programmers to simplify the process of developing applications that can achieve both high performance and portability across a range of new architectures.

Programming models that enforce asynchronous, out-of-order scheduling of operations form the basis for defining a scalable yet highly efficient software framework for computational linear algebra applications.

MAGMA

The MAGMA (Matrix Algebra on GPU and Multicore Architectures) project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current “Multicore+GPU” systems.

The MAGMA research is based on the idea that, to address the complex challenges of the emerging hybrid environments, optimal software solutions will themselves have to hybridize, combining the strengths of different algorithms within a single framework. Building on this idea, we aim to design linear algebra algorithms and frameworks for hybrid manycore and GPU systems that can enable applications to fully exploit the power that each of the hybrid components offers.