For most users, the performance of the code generated by a compiler is what actually matters. This can be assessed by running the HPF versions of the PARKBENCH codes described in chapters 4 and 5. For HPF compiler developers and implementers, however, an additional benchmark suite may be very useful: one that evaluates specific HPF compilation phases and the compiler's runtime support. For that purpose, the relevant metric is the ratio of the execution time of a compiler-generated program to that of a hand-coded one, as a function of the problem size and the number of processors engaged in the computation.
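This metric can be stated compactly. In the notation below (ours, not from any standard), T_gen and T_hand denote the execution times of the compiler-generated and hand-coded programs, n the problem size, and p the number of processors:

```latex
% R(n,p) close to 1 means the compiler matches hand-coded performance;
% values well above 1 quantify the overhead introduced by compilation.
R(n, p) = \frac{T_{\mathrm{gen}}(n, p)}{T_{\mathrm{hand}}(n, p)}
```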
The compilation process can be logically divided into several phases, each of which influences the efficiency of the resulting code. The initial stage is parsing the source code, which results in an internal representation of the program. It is followed by compiler transformations such as data distribution, loop transformations, computation distribution, communication detection, sequentialization, and insertion of calls to the runtime support; this we will call the HPF-specific phase of compilation. The compilation is concluded by the code generation phase. For portable compilers that output Fortran 77 plus message-passing code, node compilation is factored out, and the efficiency of the node compiler can be evaluated separately.
This benchmark suite addresses the HPF-specific phase only. It is therefore well suited for performance evaluation of both translators (HPF to Fortran 77 plus message passing) and genuine HPF compilers. The parsing phase is an element of conventional compiler technology and is not of interest in this context. The code generation phase involves optimization techniques developed for sequential compilers (in particular, Fortran 90 compilers) as well as micro-grain parallelism or vectorization. The object code for a specific platform may be strongly architecture dependent (e.g., very different for processors with vector capabilities than for those without them). Evaluating the performance of these aspects requires different techniques from those proposed here.
It is worth noting that the HPF-specific phase strongly affects the opportunities for optimizing the node code. For example, inserting calls to the communication library may prevent the node compiler from performing many standard optimizations unless it resorts to expensive interprocedural analysis. Therefore, an HPF compiler's ability to exploit optimization opportunities at the HPF level, and to generate output code in a form that the node compiler can optimize further, is an important element of its evaluation. Nevertheless, evaluating the HPF-specific phase separately remains valuable, since hand-coded programs face the same problems. We will address these issues in future releases of the benchmark suite.
Compilers for massively parallel and distributed systems are still the subject of research and laboratory testing rather than commercial products. Neither parallel compiler technology nor the methods of evaluating it are mature yet. Nevertheless, the advent of the HPF standard provides an opportunity to develop systematic benchmarking techniques.
The current definition of HPF cannot be regarded as the ultimate solution for parallel computing. Its limitations are well known, and many researchers are working on extensions to HPF that address a broader class of real-life commercial and scientific applications. We expect new language features to be added in future versions of HPF, and we will extend the benchmark suite accordingly. On the other hand, new parallel languages based on languages other than Fortran, notably C++, are becoming more and more popular. Since parallelism is inherent in a problem rather than in its representation, we anticipate many commonalities among parallel languages and the corresponding compiler technologies, notably shared runtime support. Therefore, we decided to address this benchmark suite to those aspects of the compilation process that are inherent to parallel processing in general, rather than testing syntactic details of HPF.