
Evaluation of PTLIB Software

 

So far, our evaluation of PTLIB software has covered parallel debuggers and performance analyzers. We give a detailed description of the evaluation criteria below. Note that the criteria have been refined and expanded to a level of detail that enables them to serve as an evaluation checklist.

Performance
Includes accuracy, efficiency, and scalability.
Accuracy
A performance monitoring tool is accurate if it does not perturb the behavior and timing of the program it is monitoring enough to distort the measurements.
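
As a rough illustration of this probe effect, the following C sketch (a hypothetical example, not taken from any of the tools evaluated) times the same loop with and without a stand-in monitoring hook and reports the relative perturbation:

    /* Hypothetical sketch: estimating the probe effect of instrumentation.
       Assumes a POSIX system that provides gettimeofday(). */
    #include <stdio.h>
    #include <sys/time.h>

    #define N 10000000L

    static volatile long counter;           /* volatile keeps the loops honest */

    static void probe(void) { counter++; }  /* stand-in for a monitoring hook */

    static double seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1.0e6;
    }

    int main(void)
    {
        double t0, plain, instrumented;
        long i;

        t0 = seconds();
        for (i = 0; i < N; i++) counter++;
        plain = seconds() - t0;

        t0 = seconds();
        for (i = 0; i < N; i++) { counter++; probe(); }
        instrumented = seconds() - t0;

        printf("relative perturbation: %.1f%%\n",
               100.0 * (instrumented - plain) / plain);
        return 0;
    }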

Efficiency
The software runs fast enough that slowness does not render the tool ineffective.

Scalability
A parallel tool is scalable if its overhead grows in a reasonable manner as system and problem sizes increase. In some cases, even linear growth may not be acceptable.

Capabilities
The tool has adequate functionality to effectively accomplish its intended tasks.

Versatility
Includes heterogeneity, interoperability, portability, and extensibility.
Heterogeneity
A heterogeneous tool can be invoked on, and/or have its components running on, all platforms of a heterogeneous system simultaneously.

Interoperability
A parallel tool is interoperable if its design is based on open interfaces and if it conforms to applicable standards.

Portability
A parallel tool is portable if it works on different parallel platforms and if platform dependencies have been isolated to specific parts of the code.
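
One common way of isolating platform dependencies is to confine them to a single module behind conditional compilation, as in the C sketch below; the two platform branches shown are illustrative assumptions, not code from any particular package:

    /* Hypothetical sketch: a wall-clock routine in which all platform-
       specific code is confined to this one translation unit. */
    #if defined(_WIN32)
    #include <windows.h>
    double wall_clock(void)
    {
        return (double) GetTickCount() / 1000.0;   /* Windows branch */
    }
    #else
    #include <stddef.h>
    #include <sys/time.h>
    double wall_clock(void)
    {
        struct timeval tv;                         /* POSIX branch */
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1.0e6;
    }
    #endif
    /* The rest of the tool calls wall_clock() and never sees an #ifdef. */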

Extensibility
A performance analysis tool is extensible if new analysis methods and views can be added easily.
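
The sketch below shows one way such extensibility can be achieved in C, using a simple function-pointer registry; the registry, the view type, and the sample mean view are all hypothetical, not drawn from any evaluated tool:

    /* Hypothetical sketch: an extensible registry of analysis views.
       A new view is added by writing one function and one register call. */
    #include <stdio.h>

    typedef void (*view_fn)(const double *samples, int n);

    #define MAX_VIEWS 16
    static view_fn     views[MAX_VIEWS];
    static const char *names[MAX_VIEWS];
    static int         nviews;

    static void register_view(const char *name, view_fn fn)
    {
        if (nviews < MAX_VIEWS) { names[nviews] = name; views[nviews++] = fn; }
    }

    static void mean_view(const double *s, int n)    /* one example view */
    {
        double sum = 0.0;
        int i;
        for (i = 0; i < n; i++) sum += s[i];
        printf("mean = %g\n", n ? sum / n : 0.0);
    }

    int main(void)
    {
        double samples[] = { 1.0, 2.0, 4.0 };
        int i;
        register_view("mean", mean_view);            /* new views plug in here */
        for (i = 0; i < nviews; i++) {
            printf("[%s] ", names[i]);
            views[i](samples, 3);
        }
        return 0;
    }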

Maturity
Includes robustness, level of support, and size of user base.
Robustness
A parallel tool is robust if it handles error conditions without crashing, reporting them and recovering from them appropriately.
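
For example, a robust tool might handle a missing trace file as in the hypothetical C sketch below, reporting the condition and continuing rather than crashing (the open_trace routine and its fallback behavior are illustrative assumptions):

    /* Hypothetical sketch: report an error condition and recover from it
       instead of crashing. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    FILE *open_trace(const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (fp == NULL) {
            /* Report the condition rather than dereferencing NULL... */
            fprintf(stderr,
                    "warning: cannot open %s (%s); continuing without trace data\n",
                    path, strerror(errno));
            /* ...and recover: callers treat a NULL result as "no trace". */
        }
        return fp;
    }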

Level of support
The timeliness and quality of responses to questions from users or the reviewer should be adequate for typical package use.

Size of user base
Indicators include the existence of newsgroups or mailing lists for the package, and the number of downloads of the package.

Ease of use
The software has an understandable user interface and is easy to use for a typical NHSE user.

The software characteristics described in the criteria above are most appropriately assessed by reviewer judgment rather than by measured results. Each PTLIB software evaluation therefore contains a set of reviewer-assigned numerical scores indicating how well the package met the criteria.
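
For illustration, the C sketch below combines per-criterion scores into an overall rating with a weighted average; the weights and the 0-10 scale are assumptions made for the example, not the scheme actually used in the PTLIB evaluations:

    /* Hypothetical sketch: combining reviewer-assigned criterion scores
       into one overall rating.  Weights and the 0-10 scale are assumed. */
    #include <stdio.h>

    int main(void)
    {
        const char  *criteria[] = { "performance", "capabilities",
                                    "versatility", "maturity", "ease of use" };
        const double score[]    = { 7.0, 8.0, 6.0, 9.0, 7.0 };  /* reviewer-assigned */
        const double weight[]   = { 0.25, 0.25, 0.20, 0.15, 0.15 };
        double overall = 0.0;
        int i;

        for (i = 0; i < 5; i++) {
            overall += weight[i] * score[i];
            printf("%-12s %4.1f\n", criteria[i], score[i]);
        }
        printf("overall      %4.1f\n", overall);
        return 0;
    }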

Currently, over 20 parallel debuggers and performance analyzers have been evaluated according to the above criteria. These packages include AIMS, DAQV, LCB, MQM, NTV, Pablo, Pangaea, Paradyn, ParaGraph, ParaVision, PGPVM, PVaniM, TotalView, Upshot, VAMPIR, VT, Xmdb, XMPI, and XPVM. We have solicited feedback from the tool authors on these evaluations, and the initial evaluations have been updated based on the responses received. Web access to the evaluations is available through the PTLIB homepage at http://www.nhse.org/ptlib/. See http://www.nhse.org/sw_catalog/ for descriptions of the PTLIB software packages.



Jack Dongarra
Sat Nov 16 05:50:03 EST 1996