MPI: A Message-Passing Interface Standard

Message Passing Interface Forum

The Message Passing Interface Forum (MPIF), with participation from over 40 organizations, has been meeting since November 1992 to discuss and define a set of library interface standards for message passing. MPIF is not sanctioned or supported by any official standards organization.

The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such, the interface should establish a practical, portable, efficient, and flexible standard for message passing.
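
To give a sense of the style of programming the interface supports, the following is a minimal sketch, not a normative part of the standard, of an MPI program in C: process 0 sends one integer to process 1 using the blocking point-to-point operations defined later in this document. The sketch assumes the program is started on at least two processes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* initialize MPI              */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of the calling process */

        if (rank == 0) {
            value = 42;
            /* blocking standard-mode send of one int to process 1, tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* blocking receive of one int from process 0, tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process 1 received %d\n", value);
        }

        MPI_Finalize();                       /* clean up before exiting     */
        return 0;
    }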

This is the final report, Version 1.0, of the Message Passing Interface Forum. This document contains all the technical features proposed for the interface. This copy was processed by LaTeX on Mon Nov 21 14:56:38 1994.

Please send comments on MPI to mpi-comments@cs.utk.edu. Your comments will be forwarded to MPIF committee members, who will attempt to respond.

(c) 1993, 1994 University of Tennessee, Knoxville, Tennessee. Permission to copy without fee all or part of this material is granted, provided the University of Tennessee copyright notice and the title of this document appear, and notice is given that copying is by permission of the University of Tennessee.


Acknowledgments

The technical development was carried out by subgroups, whose work was reviewed by the full committee. During the period of development of the Message Passing Interface (MPI), many people served in positions of responsibility and are listed below.


The following list includes some of the active participants in the MPI process not mentioned above.

The University of Tennessee and Oak Ridge National Laboratory made the draft available by anonymous FTP and mail servers and were instrumental in distributing the document.

MPI operated on a very tight budget (in reality, it had no budget when the first meeting was announced). ARPA and NSF have supported research at various institutions, which helped cover travel for the U.S. academics. Support for several European participants was provided by ESPRIT.


Contents
  • Introduction to MPI
  • Overview and Goals
  • Who Should Use This Standard?
  • What Platforms Are Targets For Implementation?
  • What Is Included In The Standard?
  • What Is Not Included In The Standard?
  • Organization of this Document
  • MPI Terms and Conventions
  • Document Notation
  • Procedure Specification
  • Semantic Terms
  • Data Types
  • Opaque objects
  • Array arguments
  • State
  • Named constants
  • Choice
  • Addresses
  • Language Binding
  • Fortran 77 Binding Issues
  • C Binding Issues
  • Processes
  • Error Handling
  • Implementation issues
  • Independence of Basic Runtime Routines
  • Interaction with signals in POSIX
  • Point-to-Point Communication
  • Introduction
  • Blocking Send and Receive Operations
  • Blocking send
  • Message data
  • Message envelope
  • Blocking receive
  • Return status
  • Data type matching and data conversion
  • Type matching rules
  • Type MPI_CHARACTER
  • Data conversion
  • Communication Modes
  • Semantics of point-to-point communication
  • Buffer allocation and usage
  • Model implementation of buffered mode
  • Nonblocking communication
  • Communication Objects
  • Communication initiation
  • Communication Completion
  • Semantics of Nonblocking Communications
  • Multiple Completions
  • Probe and Cancel
  • Persistent communication requests
  • Send-receive
  • Null processes
  • Derived datatypes
  • Datatype constructors
  • Address and extent functions
  • Lower-bound and upper-bound markers
  • Commit and free
  • Use of general datatypes in communication
  • Correct use of addresses
  • Examples
  • Pack and unpack
  • Collective Communication
  • Introduction and Overview
  • Communicator argument
  • Barrier synchronization
  • Broadcast
  • Example using MPI_BCAST
  • Gather
  • Examples using MPI_GATHER, MPI_GATHERV
  • Scatter
  • Examples using MPI_SCATTER, MPI_SCATTERV
  • Gather-to-all
  • Examples using MPI_ALLGATHER, MPI_ALLGATHERV
  • All-to-All Scatter/Gather
  • Global Reduction Operations
  • Reduce
  • Predefined reduce operations
  • MINLOC and MAXLOC
  • User-Defined Operations
  • Example of User-defined Reduce
  • All-Reduce
  • Reduce-Scatter
  • Scan
  • Example using MPI_SCAN
  • Correctness
  • Groups, Contexts, and Communicators
  • Introduction
  • Features Needed to Support Libraries
  • MPI's Support for Libraries
  • Basic Concepts
  • Groups
  • Contexts
  • Intra-Communicators
  • Predefined Intra-Communicators
  • Group Management
  • Group Accessors
  • Group Constructors
  • Group Destructors
  • Communicator Management
  • Communicator Accessors
  • Communicator Constructors
  • Communicator Destructors
  • Motivating Examples
  • Current Practice #1
  • Current Practice #2
  • (Approximate) Current Practice #3
  • Example #4
  • Library Example #1
  • Library Example #2
  • Inter-Communication
  • Inter-communicator Accessors
  • Inter-communicator Operations
  • Inter-Communication Examples
  • Example 1: Three-Group "Pipeline"
  • Example 2: Three-Group "Ring"
  • Example 3: Building Name Service for Intercommunication
  • Caching
  • Functionality
  • Attributes Example
  • Formalizing the Loosely Synchronous Model
  • Basic Statements
  • Models of Execution
  • Static communicator allocation
  • Dynamic communicator allocation
  • The General case
  • Process Topologies
  • Introduction
  • Virtual Topologies
  • Embedding in MPI
  • Overview of the Functions
  • Topology Constructors
  • Cartesian Constructor
  • Cartesian Convenience Function: MPI_DIMS_CREATE
  • General (Graph) Constructor
  • Topology inquiry functions
  • Cartesian Shift Coordinates
  • Partitioning of Cartesian structures
  • Low-level topology functions
  • An Application Example
  • MPI Environmental Management
  • Implementation information
  • Environmental Inquiries
  • Tag values
  • Host rank
  • IO rank
  • Error handling
  • Error codes and classes
  • Timers
  • Startup
  • Profiling Interface
  • Requirements
  • Discussion
  • Logic of the design
  • Miscellaneous control of profiling
  • Examples
  • Profiler implementation
  • MPI library implementation
  • Systems with weak symbols
  • Systems without weak symbols
  • Complications
  • Multiple counting
  • Linker oddities
  • Multiple levels of interception
  • Language Binding
  • Introduction
  • Defined Constants for C and Fortran
  • C bindings for Point-to-Point Communication
  • C Bindings for Collective Communication
  • C Bindings for Groups, Contexts, and Communicators
  • C Bindings for Process Topologies
  • C bindings for Environmental Inquiry
  • C Bindings for Profiling
  • Fortran Bindings for Point-to-Point Communication
  • Fortran Bindings for Collective Communication
  • Fortran Bindings for Groups, Contexts, etc.
  • Fortran Bindings for Process Topologies
  • Fortran Bindings for Environmental Inquiry
  • Fortran Bindings for Profiling
  • Index

