LAPACK Frequently Asked Questions (FAQ)





Many thanks to the netlib maintainers (netlib_maintainers@netlib.org), on whose FAQ list this LAPACK FAQ is patterned.

Table of Contents

LAPACK
1.1) What and where is LAPACK?
1.2) Are there legal restrictions on the use of LAPACK software?
1.3) How do I reference LAPACK in a scientific publication?
1.4) What revisions have been made since the last release?
1.5) When is the next scheduled release of LAPACK?
1.6) Where can I find out more information about LAPACK?
1.7) Where can I find Java LAPACK?
1.8) How do I obtain a copy of the LAPACK Users' Guide?
1.9) Why aren't BLAS routines included when I download an LAPACK routine?
1.10) Are prebuilt LAPACK libraries available?
1.11) Is there an LAPACK rpm available for RedHat Linux?
1.12) Is there an LAPACK deb file available for Debian Linux?
1.13) How do I install LAPACK under Windows 98/NT?
1.14) What is the naming scheme for LAPACK routines?
1.15) How do I find a particular routine?
1.16) Are there routines in LAPACK to compute determinants?
1.17) Are there routines in LAPACK for the complex symmetric eigenproblem?
1.18) Why aren't auxiliary routines listed on the index?
1.19) I can't get a program to work. What should I do?
1.20) How can I unpack lapack.tgz?
1.21) Where do I find details of the LAPACK Test Suite and Timing Suite?
1.22) What technical support for LAPACK is available?
1.23) How do I interpret LAPACK testing failures?
1.24) Problems running the BLAS test suite with an optimized BLAS library?
1.25) Problems compiling dlamch.f?

BLAS
2.1) What and where are the BLAS?
2.2) Are there legal restrictions on the use of BLAS reference implementation software?
2.3) Publications/references for the BLAS?
2.4) Is there a Quick Reference Guide to the BLAS available?
2.5) Are optimized BLAS libraries available? Where can I find optimized BLAS libraries?
2.6) Where can I find Java BLAS?
2.7) Is there a C interface to the BLAS?
2.8) Are prebuilt reference implementations of the Fortran77 BLAS available?
2.9) What about shared memory machines? Are there multithreaded versions of the BLAS available?

1) LAPACK

1.1) What and where is LAPACK?

LAPACK provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.

Release 3.0 of LAPACK introduces new routines, as well as extending the functionality of existing routines. For detailed information on the revisions, please refer to revisions.info.

The LAPACK library is available from netlib

LAPACK vendor library list
(Last updated: April 18, 2006)

Vendor libraries:

   AMD             ACML
   Apple           Velocity Engine
   Compaq          CXML
   Cray            libsci (manual, chapter 3)
   HP              MLIB
   IBM             ESSL / Parallel ESSL (overview and documentation)
   Intel           MKL
   NEC             PDLIB/SX
   SGI             SCSL
   SUN             Sun Performance Library

ISV libraries:

   NAG             Numerical Libraries
   Matlab
   Octave

Linux distributions:

   Red Hat         LAPACK RPM
   Debian          LAPACK Debian package
   Cygwin          Pick up 'lapack' from the 'Math' category during installation.
   Ubuntu          LAPACK Ubuntu package
   Gentoo          LAPACK Gentoo package
   Fedora Core 5   LAPACK Fedora Core package

The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors. On these machines, LINPACK and EISPACK are inefficient because their memory access patterns disregard the multi-layered memory hierarchies of the machines, thereby spending too much time moving data instead of doing useful floating-point operations. LAPACK addresses this problem by reorganizing the algorithms to use block matrix operations, such as matrix multiplication, in the innermost loops. These block operations can be optimized for each architecture to account for the memory hierarchy, and so provide a transportable way to achieve high efficiency on diverse modern machines. We use the term "transportable" instead of "portable" because, for fastest possible performance, LAPACK requires that highly optimized block matrix operations be already implemented on each machine.

LAPACK routines are written so that as much as possible of the computation is performed by calls to the Basic Linear Algebra Subprograms (BLAS). While LINPACK and EISPACK are based on the vector operation kernels of the Level 1 BLAS, LAPACK was designed at the outset to exploit the Level 3 BLAS -- a set of specifications for Fortran subprograms that do various types of matrix multiplication and the solution of triangular systems with multiple right-hand sides. Because of the coarse granularity of the Level 3 BLAS operations, their use promotes high efficiency on many high-performance computers, particularly if specially coded implementations are provided by the manufacturer.
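
For illustration only (a minimal sketch, not code taken from LINPACK or LAPACK), the same rank-k update can be written either as a loop of Level 2 rank-1 updates (DGER) or as a single Level 3 DGEMM call; the single blocked call is what gives a tuned BLAS room to exploit the memory hierarchy:

*     Sketch: the same update done column-by-column (DGER) and blocked
*     (DGEMM).  Both produce C = A*B for a 4 x 2 times 2 x 4 product.
      PROGRAM BLKDEM
      INTEGER          N, K
      PARAMETER        ( N = 4, K = 2 )
      DOUBLE PRECISION A( N, K ), B( K, N ), C( N, N ), D( N, N )
      INTEGER          I, J, L
      DO 20 J = 1, N
         DO 10 I = 1, N
            C( I, J ) = 0.0D0
            D( I, J ) = 0.0D0
   10    CONTINUE
   20 CONTINUE
      DO 40 J = 1, K
         DO 30 I = 1, N
            A( I, J ) = 1.0D0
            B( J, I ) = 1.0D0
   30    CONTINUE
   40 CONTINUE
*     Unblocked: one Level 2 rank-1 update per column of A
      DO 50 L = 1, K
         CALL DGER( N, N, 1.0D0, A( 1, L ), 1, B( L, 1 ), K, C, N )
   50 CONTINUE
*     Blocked: the same update as one Level 3 call
      CALL DGEMM( 'N', 'N', N, N, K, 1.0D0, A, N, B, K, 0.0D0, D, N )
      WRITE( *, * ) 'C(1,1) =', C( 1, 1 ), '  D(1,1) =', D( 1, 1 )
      END

Linking such a program against a tuned BLAS rather than the reference implementation is typically all that is needed for the DGEMM call to run much faster; the loop of rank-1 updates cannot be blocked in the same way.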

Highly efficient machine-specific implementations of the BLAS are available for many modern high-performance computers. The BLAS enable LAPACK routines to achieve high performance with transportable software. Although a model Fortran implementation of the BLAS is available from netlib in the BLAS library, it is not expected to perform as well as a specially tuned implementation on most high-performance computers -- on some machines it may give much worse performance -- but it allows users to run LAPACK software on machines that do not offer any other implementation of the BLAS.

1.2) Are there legal restrictions on the use of LAPACK software?

LAPACK is a freely-available software package. It is available from netlib via anonymous ftp and the World Wide Web at http://www.netlib.org/lapack . Thus, it can be included in commercial software packages (and has been). We only ask that proper credit be given to the authors.

Like all software, it is copyrighted. It is not trademarked, but we do ask the following:

If you modify the source for these routines we ask that you change the name of the routine and comment the changes made to the original.

We will gladly answer any questions regarding the software. If a modification is done, however, it is the responsibility of the person who modified the routine to provide support.

1.3) How do I reference LAPACK in a scientific publication?

We ask that you cite the LAPACK Users' Guide, Third Edition.

@BOOK{laug,
      AUTHOR = {Anderson, E. and Bai, Z. and Bischof, C. and
                Blackford, S. and Demmel, J. and Dongarra, J. and
                Du Croz, J. and Greenbaum, A. and Hammarling, S. and
                McKenney, A. and Sorensen, D.},
      TITLE = {{LAPACK} Users' Guide},
      EDITION = {Third},
      PUBLISHER = {Society for Industrial and Applied Mathematics},
      YEAR = {1999},
      ADDRESS = {Philadelphia, PA},
      ISBN = {0-89871-447-8 (paperback)} }

1.4) What revisions have been made since the last release?

For detailed information on the revisions since the previous public release, please refer to release_notes.html.

1.5) When is the next scheduled release of LAPACK?

LAPACK, version 3.0, was announced June 30, 1999. An update to this release, update.tgz, was posted to netlib in November, 1999.
The most significant new routines are:

  1. a faster singular value decomposition (SVD), computed by divide-and-conquer (xGESDD)
  2. faster routines for solving rank-deficient least squares problems, using QR with column pivoting (xGELSY) or the divide-and-conquer SVD (xGELSD)
  3. new driver routines for the generalized symmetric eigenproblem (xSYGVD/xHEGVD and related divide-and-conquer drivers)
  4. faster routines for the symmetric eigenproblem using the "relatively robust eigenvector algorithm" (xSTEGR, xSYEVR/xHEEVR, SSTEVR)
  5. new simple and expert drivers for the generalized nonsymmetric eigenproblem (xGGES, xGGEV, xGGESX, xGGEVX), including error bounds
  6. solver for the generalized Sylvester equation (xTGSYL), used in item 5
  7. computational routines (xTGEXC, xTGSEN, xTGSNA) used in item 5
  8. a blocked version of xTZRQF (xTZRZF), and associated xORMRZ/xUNMRZ

The LAPACK Users' Guide, Third Edition is available from SIAM, as well as in HTML form on netlib.

1.6) Where can I find more information about LAPACK?

A variety of working notes related to the development of the LAPACK library were published as LAPACK Working Notes and are available in postscript or pdf format at:

http://www.netlib.org/lapack/lawns/
http://www.netlib.org/lapack/lawnspdf/

1.7) Where can I find Java LAPACK?

The first public release of the Java LAPACK (version 0.3 beta) is available for download at the following URL:

http://www.netlib.org/java/f2j/
The JavaNumerics webpage provides a focal point for information on numerical computing in Java!

1.8) How do I obtain a copy of the LAPACK Users' Guide?

An html version of the LAPACK Users' Guide is available for viewing on netlib.

The printed version of the LAPACK Users' Guide, Third Edition is available from SIAM (Society for Industrial and Applied Mathematics). The list price is $39.00 and the SIAM Member Price is $31.20. The order code for the book is SE09. Contact SIAM for additional information.

The royalties from the sales of this book are being placed in a fund to help students attend SIAM meetings and other SIAM related activities. This fund is administered by SIAM and qualified individuals are encouraged to write directly to SIAM for guidelines.

1.9) Why aren't BLAS routines included when I download an LAPACK routine?

It is assumed that you have a machine-specific optimized BLAS library already available on the machine on which you are installing LAPACK. If this is not the case, you can download a Fortran77 reference implementation of the BLAS from netlib.

Although a model implementation of the BLAS is available from netlib in the blas directory, it is not expected to perform as well as a specially tuned implementation on most high-performance computers -- on some machines it may give much worse performance -- but it allows users to run LAPACK software on machines that do not offer any other implementation of the BLAS.

Alternatively, you can automatically generate an optimized BLAS library for your machine using ATLAS:

http://www.netlib.org/atlas/

1.10) Are prebuilt LAPACK libraries available?

Yes, prebuilt LAPACK libraries are available for a variety of architectures. Refer to

http://www.netlib.org/lapack/archives/
for a complete list of available prebuilt libraries.

1.11) Is there an LAPACK rpm available for RedHat Linux?

Yes! Refer to the

http://www.netlib.org/lapack/rpms
directory on netlib for the LAPACK rpms for RedHat Linux.

1.12) Is there an LAPACK deb file available for Debian Linux?

Yes! Refer to LAPACK deb file for Debian Linux.

1.13) How do I install LAPACK under Windows 98/NT?

Separate zip files are available for installation using Digital Fortran or Watcom Fortran 77/32 compiler version 11.0. Both zip files use Microsoft nmake. Refer to the lapack-pc-df.zip or lapack-pc-wfc.zip files on the lapack index.

Otherwise, the lapack.tgz distribution file requires unix-style make and /bin/sh commands in order to install on a Windows system. A fairly complete unix-style environment is available free of charge at the cygnus website,

http://www.cygwin.com/

From this website, you can download the package, get installation instructions, etc. You will want to download the "full" version of cygwin, which includes compilers, shells, make, etc. You will need to download the fortran compiler separately.

The installation is quite simple, involving downloading an executable and installing with Windows' usual install procedure (you can remove it from your machine with Windows' ADD/REMOVE if you later decide you don't want it).

IMPORTANT:

Windows 95/98 does a poor job of process load balancing. If you change the focus away from the cygnus window, performance will immediately drop by at least 1/3, and the timings will be inaccurate. When doing timings, it is recommended that you leave the focus on the window throughout the entire timing suite. This is not necessary for Windows NT.

Because people often miss them in the install instructions, I repeat two very important pieces of information about the cygnus install here:

  1. If, after installing cygnus, you get the message:
        Out of environment space
    add the line 
        shell=C:\command.com /e:4096 /p
    to your c:\config.sys
    
  2. For installation, LAPACK needs to find /bin/sh, so you should create
    the /bin directory if it does not already exist:
       mkdir -p /bin
    Then, you should copy sh.exe from the cygwin bin directory to this one.
    The location of the cygwin bin directory changes depending on where you
    did the install, what type of machine you have, and the version of cygnus.
    Here is an example:
        /cygnus/cygwin-b20/H-i586-cygwin32/bin
    Here cygwin-b20 is a version number, so you might see cygwin-b21 if you
    have a newer release, for instance.  The i586 refers to your processor;
    you might instead see i386, i486, or i686.
    

1.14) What is the naming scheme for LAPACK routines?

The name of each LAPACK routine is a coded specification of its function (within the very tight limits of standard Fortran 77 6-character names).

All driver and computational routines have names of the form XYYZZZ, where for some driver routines the 6th character is blank.

The first letter, X, indicates the data type as follows:

 
    S  REAL
    D  DOUBLE PRECISION
    C  COMPLEX
    Z  COMPLEX*16  or DOUBLE COMPLEX

The next two letters, YY, indicate the type of matrix (or of the most significant matrix). Most of these two-letter codes apply to both real and complex matrices; a few apply specifically to one or the other.

The last three letters ZZZ indicate the computation performed. For example, SGEBRD is a single precision routine that performs a bidiagonal reduction (BRD) of a real general matrix.
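
As a small, assumed example (not part of the original indexes), the simple driver name DGESV decodes as D (double precision) + GE (general matrix) + SV (solve a system of linear equations):

*     DGESV = D (double precision) + GE (general) + SV (solve A*X = B)
      PROGRAM NAMDEM
      INTEGER          N, NRHS, INFO
      PARAMETER        ( N = 2, NRHS = 1 )
      DOUBLE PRECISION A( N, N ), B( N, NRHS )
      INTEGER          IPIV( N )
      DATA             A / 4.0D0, 1.0D0, 2.0D0, 3.0D0 /
      DATA             B / 6.0D0, 4.0D0 /
*     Solve A*X = B; the solution overwrites B
      CALL DGESV( N, NRHS, A, N, IPIV, B, N, INFO )
      WRITE( *, * ) 'INFO =', INFO, '  X =', B( 1, 1 ), B( 2, 1 )
      END

By the same scheme, SGESV and CGESV are the single precision real and complex versions of this driver, and ZGESV is the double precision complex version.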

1.15) How do I find a particular routine?

Indexes of individual LAPACK driver and computational routines are available. These indexes contain brief descriptions of each routine.

LAPACK routines are available in four data types: single precision real, double precision real, single precision complex, and double precision complex.

NOTE: For brevity, LAPACK auxiliary routines are NOT listed on these indexes of routines.

1.16) Are there routines in LAPACK to compute determinants?

No. There are no routines in LAPACK to compute determinants. This is discussed in the "Accuracy and Stability" chapter in the LAPACK Users' Guide.
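
If a determinant is nevertheless needed, it can be recovered from an LU factorization computed by xGETRF: the determinant is the product of the diagonal entries of U, negated once for each row interchange recorded in IPIV. Below is a minimal sketch (the name DGEDET is hypothetical, not an LAPACK routine, and the plain running product is prone to overflow or underflow for all but small, well-scaled matrices):

*     Sketch only: determinant of a general N x N matrix via DGETRF.
*     DGEDET is a hypothetical name, not part of LAPACK.  The running
*     product can overflow/underflow; use logarithms for large N.
      SUBROUTINE DGEDET( N, A, LDA, IPIV, DET, INFO )
      INTEGER            N, LDA, INFO
      INTEGER            IPIV( * )
      DOUBLE PRECISION   A( LDA, * ), DET
      INTEGER            J
*     Factor A = P*L*U (A is overwritten by its L and U factors)
      CALL DGETRF( N, N, A, LDA, IPIV, INFO )
      DET = 0.0D0
*     INFO > 0 means U is exactly singular, so DET = 0 is returned;
*     INFO < 0 means an illegal argument, and DET is meaningless.
      IF( INFO.NE.0 ) RETURN
      DET = 1.0D0
      DO 10 J = 1, N
         DET = DET*A( J, J )
         IF( IPIV( J ).NE.J ) DET = -DET
   10 CONTINUE
      RETURN
      END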

1.17) Are there routines in LAPACK for the complex symmetric eigenproblem?

There is no public-domain software that we know of for solving the eigenvalue problem of a pair of complex symmetric matrices directly. The three closest references are:

  1. QMRpack (you can access it from www.netlib.org) contains a Lanczos method for finding a few eigenvalues/eigenvectors by exploiting the complex symmetric structure.
  2. A few years ago, J. Cullum and ... published a paper on using QR iteration to find all eigenvalues and eigenvectors of a complex symmetric tridiagonal matrix. The paper was published in SIAM J. Matrix Analysis and Applications.
  3. A couple of years ago, Bar-on published a paper in SIAM J. Sci. Comput. on full dense complex symmetric eigenvalue problems. He discussed how to use a variant of Householder reduction for tridiagonalization.

All these approaches try to exploit the symmetric structure to save CPU time and storage. However, since there are no particular mathematical properties of a complex symmetric system that can be exploited for stability, all of the above approaches are potentially numerically unstable! This also explains why there is no high-quality (black-box) mathematical software available for this problem.

1.18) Why aren't auxiliary routines listed on the index?

For brevity, LAPACK auxiliary routines are not listed on the indexes of routines.

However, the routines are contained in the respective directories on netlib. If you download a routine with dependencies, these auxiliary routines will be included with your request. Or, if for some reason you wish to obtain an individual auxiliary routine and you already know its name, you can request that routine directly. For example, to obtain dlacpy.f, you would connect to the URL:

   http://www.netlib.org/lapack/double/dlacpy.f

1.19) I can't get a program to work. What should I do?

Technical questions should be directed to the authors at the LAPACK User Forum (preferred means of communication) or at lapack@cs.utk.edu.

Please tell us the type of machine on which the tests were run, the compiler and compiler options that were used, details of the BLAS library that was used, and a copy of the input file if appropriate.

Be prepared to answer the following questions:

  1. Have you run the BLAS and LAPACK test suites?
  2. Have you checked the errata list on netlib?
  3. If you are using an optimized BLAS library, have you tried using the reference implementation from netlib?
Machine-specific installation hints can be found in release_notes.html, as well as in the Quick Installation Guide.

1.20) How can I unpack lapack.tgz?

   gunzip -c lapack.tgz | tar xvf -

The compression program gzip (and gunzip) is GNU software. If it is not already available on your machine, you can download it via anonymous ftp:

   ncftp prep.ai.mit.edu
   cd pub/gnu/
   get gzip-1.2.4.tar

1.21) Where do I find details of the LAPACK Test Suite and Timing Suite?

Full details of the LAPACK Test Suite and Timing Suite can be found in LAPACK Working Note 41: "Installation Guide to LAPACK", available from the LAPACK Working Notes directories listed in Question 1.6.

1.22) What technical support for LAPACK is available?

Technical questions and comments should be directed to the authors at the LAPACK User Forum (preferred means of communication) or at lapack@cs.utk.edu.

See Question 1.19

1.23) How do I interpret LAPACK testing failures?

Installation hints for various architectures are maintained in the http://www.netlib.org/lapack/release_notes.html file on netlib. Click on "Machine-Specific Installation Hints".

The only known testing failures are in condition number estimation routines in the generalized nonsymmetric eigenproblem testing, specifically in sgd.out, dgd.out, cgd.out and zgd.out. The cause of the failures of some test cases is that the mathematical algorithm used for estimating the condition numbers can over- or under-estimate the true values by a certain factor in some rare cases. Further details can be found in LAPACK Working Note 87.

In addition, LAPACK, version 3.0, introduced new routines which rely on IEEE-754 compliance. Refer to the Installation Guide for complete details. As a result, two settings were added to LAPACK/SRC/ilaenv.f to denote IEEE-754 compliance for NaN and infinity arithmetic, respectively. By default, ILAENV assumes an IEEE machine and does a test for IEEE-754 compliance. If you are installing LAPACK on a non-IEEE machine, you MUST modify ILAENV, as this test inside ILAENV will crash! Note that there are also specialized testing/timing versions of ILAENV located in LAPACK/TESTING/LIN/, LAPACK/TESTING/EIG/, LAPACK/TIMING/LIN/, and LAPACK/TIMING/EIG/, that must also be modified. Be aware that some compilers have IEEE-754 compliance by default, and some compilers require a separate compiler flag.

Testing failures can be divided into two categories. Minor testing failures, and major testing failures.

A minor testing failure is one in which the test ratio reported in the LAPACK/TESTING/*.out file slightly exceeds the threshold (specified in the associated LAPACK/TESTING/*.in file). The cause of such failures can mainly be attributed to differences in the implementation of math libraries (square root, absolute value, complex division, complex absolute value, etc). These failures are negligible, and do not affect the proper functioning of the library.

A major testing failure is one in which the test ratio reported in the LAPACK/TESTING/*.out file is on the order of E+06. This type of testing failure should be investigated. For a complete discussion of the comprehensive LAPACK testing suite, please refer to LAPACK Working Note 41. When a testing failure occurs, the output in the LAPACK/TESTING/*.out file will tell the user which test criterion failed and for which type of matrix. It is important to note whether the error occurs only for a specific matrix type, a specific precision, or a specific test criterion, and how many tests failed. There can be several possible causes of such failures:

The first question/suggestion is, if you are using an optimized BLAS library, did you run the BLAS test suite? Also, have you tried linking to the reference implementation BLAS library to see if the error disappears? There is a reference implementation BLAS library included with the LAPACK distribution. This type of problem will typically cause a lot of test failures for only a specific matrix type.

A compiler optimization bug will typically also cause a lot of test failures for only a specific matrix type. If a compiler optimization problem is suspected, the user should recompile the entire library with no optimization and see if the error disappears. If the error disappears, then the user will need to pinpoint which routine causes the optimization problem. This search can be narrowed by noticing which precision caused the error and for which matrix type.

In some rare cases, naive implementations of functions such as complex absolute value and complex division can result in major testing failures. Refer to the discussion of the LAPACK/SRC/slabad.f and dlabad.f routines to restrict the range of representable numbers to be used in testing (LAPACK Working Note 41).

An isolated test failure that is not affected by the level of optimization or the BLAS library used should be reported to the authors at the LAPACK User Forum (preferred means of communication) or at lapack@cs.utk.edu.


1.24) Problems running the BLAS test suite with an optimized BLAS library?

If you encounter difficulties running the BLAS Test Suite with an optimized BLAS library, it may be that you need to disable "input error checking" in the BLAS Test Suite. Most optimized BLAS libraries do NOT perform input error checking. To disable "input error checking" in the BLAS testers, you need to modify line 7 of the data files LAPACK/BLAS/*blat2.in and LAPACK/BLAS/*blat3.in by setting the "T" to "F".

F        LOGICAL FLAG, T TO TEST ERROR EXITS.

1.25) Problems compiling dlamch.f?

The routine dlamch.f (and its dependent subroutines dlamc1, dlamc2, dlamc3, dlamc4, dlamc5) MUST be compiled without optimization. If you downloaded the entire lapack distribution, this is taken care of by the LAPACK/SRC/Makefile. However, if you downloaded a specific LAPACK routine plus dependencies, you need to make sure that slamch.f (if you downloaded a single precision real or single precision complex routine) or dlamch.f (if you downloaded a double precision real or double precision complex routine) is compiled without optimization.

2) BLAS


2.1) What and where are the BLAS?

The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. The Level 1 BLAS perform scalar, vector and vector-vector operations, the Level 2 BLAS perform matrix-vector operations, and the Level 3 BLAS perform matrix-matrix operations. Because the BLAS are efficient, portable, and widely available, they are commonly used in the development of high quality linear algebra software, LAPACK for example.
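
As a minimal, assumed illustration (not code from the BLAS distribution), here is one call at each level: DAXPY (Level 1, vector update), DGEMV (Level 2, matrix-vector product), and DGEMM (Level 3, matrix-matrix product):

      PROGRAM BLSDEM
      INTEGER          N
      PARAMETER        ( N = 3 )
      DOUBLE PRECISION A( N, N ), B( N, N ), C( N, N )
      DOUBLE PRECISION X( N ), Y( N )
      INTEGER          I, J
      DO 20 J = 1, N
         X( J ) = 1.0D0
         Y( J ) = 2.0D0
         DO 10 I = 1, N
            A( I, J ) = 1.0D0
            B( I, J ) = 1.0D0
            C( I, J ) = 0.0D0
   10    CONTINUE
   20 CONTINUE
*     Level 1:  y := 3*x + y
      CALL DAXPY( N, 3.0D0, X, 1, Y, 1 )
*     Level 2:  y := A*x + y
      CALL DGEMV( 'N', N, N, 1.0D0, A, N, X, 1, 1.0D0, Y, 1 )
*     Level 3:  C := A*B
      CALL DGEMM( 'N', 'N', N, N, N, 1.0D0, A, N, B, N, 0.0D0, C, N )
      WRITE( *, * ) 'Y(1) =', Y( 1 ), '  C(1,1) =', C( 1, 1 )
      END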

The publications given in Question 2.3 define the specifications for the BLAS, and a Fortran77 reference implementation of the BLAS is located in the blas directory of Netlib, together with testing and timing software. For information on efficient versions of the BLAS, see Question 2.5.


2.2) Are there legal restrictions on the use of BLAS reference implementation software?

The reference BLAS is a freely-available software package. It is available from netlib via anonymous ftp and the World Wide Web. Thus, it can be included in commercial software packages (and has been). We only ask that proper credit be given to the authors.

Like all software, it is copyrighted. It is not trademarked, but we do ask the following:

If you modify the source for these routines we ask that you change the name of the routine and comment the changes made to the original.

We will gladly answer any questions regarding the software. If a modification is done, however, it is the responsibility of the person who modified the routine to provide support.


2.3) Publications/references for the BLAS?

  1. C. L. Lawson, R. J. Hanson, D. Kincaid, and F. T. Krogh, Basic Linear Algebra Subprograms for FORTRAN usage, ACM Trans. Math. Soft., 5 (1979), pp. 308--323.

  2. J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 14 (1988), pp. 1--17.

  3. J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 14 (1988), pp. 18--32.

  4. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 1--17.

  5. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 18--28.

New BLAS
  1. L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Soft., 28-2 (2002), pp. 135--151.

  2. J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1--111, and International Journal of High Performance Applications and Supercomputing, 16(2) (2002), pp. 115--199.

2.4) Is there a Quick Reference Guide to the BLAS available?

Yes, the Quick Reference Guide to the BLAS is available in postscript and pdf.


2.5) Are optimized BLAS libraries available? Where can I find optimized BLAS libraries?

YES! Machine-specific optimized BLAS libraries are available for a variety of computer architectures. These optimized BLAS libraries are provided by the computer vendor or by an independent software vendor (ISV) (see list below). For further details, please contact your local vendor representative.

Alternatively, the user can download ATLAS to automatically generate an optimized BLAS library for his architecture. Some prebuilt optimized BLAS libraries are also available from the ATLAS site. Goto BLAS is also available for a number of machines. Efficient versions of the Level 3 BLAS, based on an efficient matrix-matrix multiplication routine, are provided by the GEMM-Based BLAS.

If all else fails, the user can download a Fortran77 reference implementation of the BLAS from netlib. However, keep in mind that this is a reference implementation and is not optimized.

BLAS vendor library list
(Last updated: July 20, 2005)

   AMD     ACML
   Apple   Velocity Engine
   Compaq  CXML
   Cray    libsci
   HP      MLIB
   IBM     ESSL
   Intel   MKL
   NEC     PDLIB/SX
   SGI     SCSL
   SUN     Sun Performance Library


2.6) Where can I find Java BLAS?

Yes, Java BLAS are available. Refer to the following URLs: Java LAPACK and JavaNumerics. The JavaNumerics webpage provides a focal point for information on numerical computing in Java.


2.7) Is there a C interface to the BLAS?

Yes, a C interface to the BLAS was defined in the BLAS Technical Forum Standard. The source code is also available.


2.8) Are prebuilt Fortran77 reference implementation BLAS libraries available?

Yes, you can download a prebuilt Fortran77 reference implementation BLAS library or compile the Fortran77 reference implementation source code of the BLAS from netlib.

Note that this reference implementation is extremely slow and thus we do not recommend it: you should use an optimized BLAS whenever possible (see Question 2.5).


2.9) What about shared memory machines? Are there multithreaded versions of the BLAS available?

ATLAS, Goto BLAS (two threads only), and most of the BLAS libraries supplied by vendors are multithreaded. These libraries can be used with LAPACK to take advantage of shared-memory systems.
