
Approach

 

Our approach differs in several respects from traditional comparative evaluations.

We decided that users would benefit most if we concentrated our evaluations on the software with the broadest applicability. For this reason we have focused on parallel systems software and tools, and on mathematical software. Many of the packages selected for evaluation were drawn from the collection of software already available through Netlib and the NHSE; we also solicited other promising packages not yet available from our repositories.

Our first step in designing systematic, well-defined evaluation criteria was to establish a high-level set of criteria that can be refined as needed for particular domains. Our starting point for the high-level set was the software requirements described in the Baseline Development Environment [5]. The criteria are then tailored to a particular domain by those doing the evaluations and by others with expertise in that domain. We expect the evaluation criteria for a given domain to evolve over time as we take advantage of author and user feedback, and as new evaluation resources, such as new tools and problem sets, become available.

The NHSE software evaluation process consists of the following steps.

  1. Reviewers and other domain experts refine the high-level evaluation criteria for the particular domain.
  2. We select software packages within the domain and assign each for evaluation to an NHSE project member knowledgeable in the field.
  3. The reviewer evaluates the software package systematically, typically using a well-defined checklist of evaluation criteria. Assessment of certain criteria is necessarily subjective; to facilitate comparisons, the reviewer assigns a numerical score to each of those criteria based on his judgment of how well the criterion was met. Criteria that can be easily measured are typically reported directly as the measured results (a sketch of such a checklist follows this list).
  4. We solicit feedback from the package author, giving him the opportunity to correct, add to, or comment on the evaluation. In effect, we ask the author to review our review, and we revise it to correct any errors or omissions.
  5. We make the review and the author's feedback available via the Web.
  6. We add to the evaluation and author feedback any comments users wish to submit through the NHSE Web pages.
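As an illustration of the scoring scheme in step 3, the sketch below shows one way a checklist might be recorded, distinguishing subjectively scored criteria from directly measured ones. It is only a sketch: the Criterion and Evaluation classes, the 1-to-5 scale, and the sample criteria are hypothetical and are not prescribed by the NHSE process.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Criterion:
        """One entry on an evaluation checklist."""
        name: str
        subjective: bool                   # True: scored by reviewer judgment
        score: Optional[int] = None        # e.g. 1 (poor) .. 5 (excellent)
        measurement: Optional[str] = None  # measured result, reported directly

    @dataclass
    class Evaluation:
        package: str
        reviewer: str
        criteria: List[Criterion] = field(default_factory=list)

        def summary(self) -> str:
            """Render scores for subjective criteria and raw results
            for measurable ones."""
            lines = ["%s (reviewed by %s)" % (self.package, self.reviewer)]
            for c in self.criteria:
                if c.subjective:
                    lines.append("  %-26s score %d/5" % (c.name, c.score))
                else:
                    lines.append("  %-26s %s" % (c.name, c.measurement))
            return "\n".join(lines)

    # Hypothetical example: one subjective criterion, one measured.
    review = Evaluation(
        package="ExampleTool",
        reviewer="NHSE project member",
        criteria=[
            Criterion("Ease of installation", subjective=True, score=4),
            Criterion("Instrumentation overhead", subjective=False,
                      measurement="3% mean slowdown on test suite"),
        ],
    )
    print(review.summary())

Separating the two kinds of criteria in this way keeps the subjective judgments comparable across packages while letting measured results stand on their own.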

