
2 The Message Passing Interface (MPI)

The growing interest in parallel computing, and notably in the message-passing programming model, has pushed the demand for a standardized application programming interface supported by all major parallel system vendors. Starting in 1993, a group of computer vendors, library writers, and application programmers from the US and Europe collaborated to design a standard portable message-passing interface called MPI. The final specification of this interface was published in May 1994 and updated in June 1995 [MPI95]; [GLS94] gives a good introduction from the application programmer's point of view.

A number of portable and vendor-specific MPI implementations have since been developed, showing that MPI can indeed be implemented efficiently on the currently available parallel computer platforms. There are three public-domain implementations of MPI, and most parallel system vendors have announced MPI implementations of their own.

MPI draws from a number of other message-passing interfaces, including IBM's EUI, PVM, Intel's NX, and PARMACS, and adds advanced features such as:

- communicators, which combine communication contexts with process groups and make it possible to write safe parallel libraries;
- general (derived) datatypes for describing non-contiguous message buffers;
- virtual process topologies;
- an extensive set of collective communication operations;
- a standardized profiling interface for performance tools.

MPI therefore enables portable programs and libraries to be written. Of course, mere functional portability is not sufficient in practice: the efficiency of an application or library must be the second focus of interest. Even with a standardized interface, parallel programs will not show equal performance on different hardware platforms, just as sequential programs do not. Thus, careful adjustments (performance tuning) are necessary to optimize a parallel application for a given parallel system.
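
To make the discussion concrete, the following is a minimal sketch of a portable MPI program in the C binding; the message tag, the buffer contents, and the exclusive use of MPI_COMM_WORLD are illustrative choices, not requirements of the standard. The same source compiles unchanged on any platform with an MPI implementation.

    /* Minimal sketch: process 0 sends one integer to every other
       process; tag 0 and the value 42 are arbitrary. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            int msg = 42;
            for (i = 1; i < size; i++)
                MPI_Send(&msg, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        } else {
            int msg;
            MPI_Status status;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process %d received %d\n", rank, msg);
        }

        MPI_Finalize();
        return 0;
    }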

For parallel programs on massively parallel systems, performance tuning is much more complicated than in the sequential case, because additional system parameters, such as the ratio of computational power to communication speed, come into play, and no automatic tools analogous to optimizing compilers are currently available. To reap the maximum benefit from MPI, powerful and easy-to-handle performance analysis and visualization tools are therefore of increasing importance.
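
As a simple illustration of how such a parameter can be measured by hand, the following sketch uses MPI_Wtime to time a ping-pong exchange between two processes; the message size and repetition count are arbitrary assumptions chosen for illustration, and the program must be started with at least two processes.

    /* Sketch of a ping-pong measurement between ranks 0 and 1;
       NBYTES and REPEATS are illustrative assumptions. */
    #include <mpi.h>
    #include <stdio.h>

    #define NBYTES  4096
    #define REPEATS 1000

    int main(int argc, char **argv)
    {
        char buf[NBYTES] = {0};
        int rank, i;
        double t0, t1;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();
        for (i = 0; i < REPEATS; i++) {
            if (rank == 0) {
                MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round-trip time: %g s\n", (t1 - t0) / REPEATS);

        MPI_Finalize();
        return 0;
    }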

Users working with different message-passing libraries on several parallel systems will not have the time to fully understand and tune their message-passing codes for every platform. With the dissemination of MPI this will probably change. The powerful features of MPI offer a degree of flexibility that allows users to get the maximum performance from any kind of parallel hardware supporting message passing. To achieve this in a convenient way, users will ask for tools able to display the communication structure of their programs at almost every time scale. VAMPIR will have the functionality to satisfy these demands.
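
Such tools typically collect their data through MPI's standardized profiling interface: every MPI routine MPI_X is also callable under the name-shifted entry point PMPI_X, so a tracing library can provide its own MPI_X that records an event and then forwards to the real routine. The following sketch shows the idea for MPI_Send; the log format is an assumption for illustration and is not VAMPIR's actual trace format.

    /* Sketch of an interposed MPI_Send using the MPI profiling
       interface: record a timestamped event, then forward to
       PMPI_Send.  Linked before the MPI library, this wrapper
       intercepts all MPI_Send calls of an unmodified application. */
    #include <mpi.h>
    #include <stdio.h>

    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        int rank, result;
        double t = MPI_Wtime();

        PMPI_Comm_rank(comm, &rank);
        result = PMPI_Send(buf, count, datatype, dest, tag, comm);
        fprintf(stderr, "%f: rank %d sent %d items to %d (tag %d)\n",
                t, rank, count, dest, tag);
        return result;
    }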

