
Introduction

In 1994, high performance computing in the U.S. experienced a major transition. With the introduction of powerful new ``massively parallel processors'' (MPP) by established vendors in the high performance computing field, such as the T3D by Cray Research and the SP-2 by IBM, and of microprocessor-based SMPs (symmetric multiprocessors) such as the SGI Power Challenge and the Exemplar SPP by Convex, there are strong indications that commodity-microprocessor-based systems are now the mainstream technology for scientific supercomputing. Among existing vendors, Intel's Supercomputing Systems Division and NCube appear to have fallen behind in technology innovation, and both Kendall Square and Thinking Machines went out of the (hardware) business. The fact that two major players closed their doors in 1994 has often been interpreted as a setback for parallel systems. I view these events, on the contrary, as a sign of the market's strength: it was precisely those companies that did not rely on commodity processor technology that failed, and exactly those companies that relied on microprocessors that succeeded in 1994.

Interestingly, while this transition away from vector mainframes is taking place, the actual demand for MPP systems is declining! In his 1993 study ``High Performance Technical Computing Market Review and Forecast,'' Chris Willard of IDC considers the following four market segments:

The following Table 1 shows the 1993 and projected 1998 worldwide market share of these different segments.

 
Table 1:   High Performance Computing Market Segmentation

The worldwide market in 1993 was estimated at about $2.4 billion, with overall market growth at a very modest aggregate rate of only 1.4% over the five years until 1998. These projections imply very fierce competition in the future as well, since the number of vendors in this relatively small market continues to be too large. This can be seen from the list of vendors in Table 2, which is updated from Smaby [11]. The consequences of such a large number of vendors competing for such a small (but highly visible and important) market are widely discussed [10]. Compared to 1994, this table has three fewer companies in the ``Currently active'' category, and no serious newcomers.
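For concreteness, the market size such a growth rate implies can be checked with a few lines of arithmetic. The sketch below is illustrative only; it assumes the 1.4% aggregate figure is meant as a compound annual rate applied to the $2.4 billion 1993 base given in the text:

```python
# Sketch: project the HPC market size under a compound annual growth rate.
# Assumption: the 1.4% "aggregate rate" is interpreted as an annual rate.

def project_market(base_billion: float, annual_rate: float, years: int) -> float:
    """Compound a market size forward by `years` at `annual_rate`."""
    return base_billion * (1.0 + annual_rate) ** years

# 1993 worldwide market of about $2.4 billion, 1.4% per year, to 1998.
projected_1998 = project_market(2.4, 0.014, 5)
print(f"Projected 1998 market: ${projected_1998:.2f} billion")
```

Under this reading, the market grows by well under $0.2 billion in five years, which is consistent with the text's characterization of the projections as very modest.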

 
Table 2:   Commercial HPC Vendors in the U.S. (early 1994)

At the same time, the federal High Performance Computing and Communications Program (HPCCP) is winding down. After the considerable progress documented in the famous ``Blue Book'' [4], the focus of federal programs has now shifted to the NII (National Information Infrastructure). The discussion about HPC in the commercial and government marketplaces continues to be based on beliefs and impressions, and often lacks hard data. The early HPCC claim that Teraflop/s performance on significant applications would come to pass by 1996 is almost certainly not going to be fulfilled. However, this was the wrong metric to pursue from the very beginning. It continues to surprise that a field such as HPC, deemed so critically important to the national agenda, almost completely lacks any quantitative assessment of its progress.

This report attempts to shed some light on recent developments in HPC in the U.S. and to present some quantitative data on the type and distribution of HPC technology. All the information here is based on the TOP500 list of November 1994. The report [7] ranks the 500 top-performing supercomputers worldwide. The measure of performance is the maximal achieved Rmax value of the computer on the LINPACK benchmark, as reported in [6]. Using this measure, the cutoff for making the list of the TOP500 systems worldwide is a performance of 1.114 Gflop/s on the LINPACK benchmark. Three Cray Y-MP M94 computers occupy ranks 498 to 500. The top-ranked machine is made by Fujitsu: the specially built 140-processor computer for the Numerical Windtunnel project at NAL in Japan is rated at 170.4 Gflop/s. The number 2 system worldwide is the biggest system in the U.S.: an Intel Paragon XP/S140 with 3680 processors at Sandia National Laboratories in Albuquerque, NM. Research teams from Sandia and NAL were also the top prize winners at the 1994 Gordon Bell Prize Awards [9]. Both teams demonstrated for the first time application performance in excess of 100 Gflop/s, a major milestone. In the NAS Parallel Benchmarks [2], too, parallel machines are now clearly ahead. One can therefore label 1994 a banner year, ``The year when parallel processing succeeded'' [1].
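The ranking procedure just described is simple to express in code. The sketch below sorts systems by their LINPACK Rmax and applies the Gflop/s cutoff, mirroring how the list in [7] is constructed; the sample entries are illustrative stand-ins, not actual TOP500 records:

```python
# Sketch: rank systems by LINPACK Rmax (Gflop/s) and apply a cutoff,
# in the spirit of the TOP500 methodology. Entries are illustrative only.

systems = [
    ("Fujitsu Numerical Windtunnel", 170.4),   # top-ranked value from the text
    ("Hypothetical MPP system",       43.0),   # made-up entry
    ("Cray Y-MP M94",                  1.114), # at the Nov 1994 cutoff
    ("Hypothetical small system",      0.9),   # below the cutoff, excluded
]

CUTOFF_GFLOPS = 1.114  # Nov 1994 cutoff for the TOP500 list

# Sort descending by Rmax, keeping only systems at or above the cutoff.
ranked = sorted(
    (s for s in systems if s[1] >= CUTOFF_GFLOPS),
    key=lambda s: s[1],
    reverse=True,
)

for rank, (name, rmax) in enumerate(ranked, start=1):
    print(f"{rank:>3}  {name:<32} {rmax:8.3f} Gflop/s")
```

Note that ranking by Rmax alone is exactly the limitation the text goes on to discuss: a single benchmark number says nothing about performance on other applications.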

Before investigating some of the data in [7] in more detail, it is important to understand the limitations of the TOP500 study. These limitations can be summarized as follows:

In spite of these inherent limitations, the TOP500 can provide extremely useful information and valuable insights. It is more accurate than many marketing studies, and the possible sources of error discussed above are probably statistically insignificant if we consider only summary statistics rather than individual data. All Mflop/s or Gflop/s performance figures here refer to performance in terms of LINPACK Rmax.

In the analysis of geographical distribution, machines in Canada have been included in the figures for the U.S., and the figures for Europe include all European countries, not just EC members. The ``other country'' category includes mostly countries of the Pacific Rim, excluding Japan, and a few Latin American countries.






top500@rz.uni-mannheim.de
Tue Nov 14 15:00:18 PST 1995