
ref	ACM Server Notice
by	Jack J. Dongarra and Eric Grosse
title	Distribution of Mathematical Software via Electronic Mail
for	A large collection of public-domain mathematical software
,	is now available via electronic mail.  The new system, netlib,
,	provides quick, easy, and efficient distribution
,	of public-domain software to the scientific computing community
,	on an as-needed basis.
,	The netlib service provides its users with features
,	not previously available:
,	there are no administrative channels to go through;
,	since no human processes the request, it is possible
,	to get software at any time, even in the middle of the night;
,	the most up-to-date version is always available;
,	and individual routines or pieces of a package can be obtained
,	instead of a whole collection.
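
The request syntax sketched below follows the documented netlib convention of one-line commands such as "send index" and "send <routine> from <library>"; the helper function and sender address are hypothetical, and the message is only composed, not sent.

```python
# Hypothetical sketch of composing a netlib e-mail request.
from email.message import EmailMessage

def netlib_request(commands, sender, server="netlib@ornl.gov"):
    """Build a mail message whose body is a list of netlib commands,
    e.g. "send index" or "send dgefa from linpack"."""
    msg = EmailMessage()
    msg["To"] = server
    msg["From"] = sender
    msg["Subject"] = "netlib request"
    msg.set_content("\n".join(commands))
    return msg

msg = netlib_request(["send index", "send dgefa from linpack"],
                     sender="user@example.edu")
```

Because no human processes such a message, a reply containing the requested routines can be generated at any hour.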

by	Jack Dongarra & Bill Rosener,
title	NA-NET: Numerical Analysis NET,
ref	University of Tennessee Technical Report CS-91-146,
,	September 1991.
for	The NA-NET is a mail facility created to allow
,	numerical analysts (na) an easy method of
,	communicating with one another.  The main advantage of
,	the NA-NET is uniformity of addressing.  All mail is
,	addressed to a single Internet host at
,	Oak Ridge National Laboratory.  Hence, members of the
,	NA-NET do not need to remember complicated addresses
,	or even where a member is currently located.  This
,	paper describes the software.

title	Netlib Services and Resources, (Rev. 1)
by	S. Browne, J. Dongarra, S. Green, E. Grosse, K. Moore, T. Rowan, 
,	and R. Wade
for	The Netlib repository, maintained by the University of Tennessee and
,	Oak Ridge National Laboratory, contains freely available software,
,	documents, and databases of interest to the numerical, scientific
,	computing, and other communities.  This report includes both the
,	Netlib User's Guide and the Netlib System Manager's Guide, and
,	contains information about Netlib's databases, interfaces, and system
,	implementation. The Netlib repository's databases include
,	the Performance Database, the Conferences Database, and
,	the NA-NET mail forwarding and Whitepages Databases.  A variety of
,	user interfaces enable users to access the Netlib repository in the
,	manner most convenient and compatible with their networking
,	capabilities.  These interfaces include the Netlib email interface,
,	the Xnetlib X Windows client, the netlibget command-line TCP/IP
,	client, anonymous FTP, anonymous RCP, and gopher.

by	Jack J. Dongarra, Thomas H. Rowan, and Reed C. Wade
title	Software Distribution Using XNETLIB
ref	Oak Ridge National Laboratory Technical Report ORNL/TM-12318
,	June, 1993
for	Xnetlib is a new tool for software distribution.  Whereas its
,	predecessor netlib uses e-mail as the user interface
,	to its large collection of public-domain mathematical software,
,	Xnetlib uses an X Window interface and socket-based communication.
,	Xnetlib  makes it easy to search through a large distributed
,	collection of software and to retrieve requested software in seconds.

title	PDS: A Performance Database Server
by	Michael W. Berry, Jack J. Dongarra, and Brian H. LaRose
for	The process of gathering, archiving, and distributing computer
,	benchmark data is a cumbersome task usually performed by computer
,	users and vendors with little coordination.  Most important,
,	there is no publicly-available central depository of performance data
,	for all ranges of machines from personal computers to supercomputers.
,	We present an Internet-accessible performance database server (PDS) which
,	can be used to extract current benchmark data and literature.  As
,	an extension to the X-Windows-based user interface (Xnetlib)
,	to the Netlib archival system, PDS provides an on-line catalog of
,	public-domain computer benchmarks such as the LINPACK Benchmark,
,	Perfect Benchmarks, and the NAS Parallel Benchmarks.  PDS does not
,	reformat or present the benchmark data in any way that conflicts with the
,	original methodology of any particular benchmark; it is thereby
,	devoid of any subjective interpretations of machine performance.
,	We feel that all branches (research laboratories, academia, and industry) of the
,	general computing community can use this facility to archive performance metrics
,	and make them readily available to the public.  PDS can provide a more
,	manageable approach to the development and support of a large
,	dynamic database of published performance metrics.

by	Shirley Browne, Jack Dongarra, Stan Green, Keith Moore,
,	Tom Rowan, Reed Wade, Geoffrey Fox and Ken Hawick
title	Prototype of the National High-Performance Software Exchange
ref	University of Tennessee Technical Report CS-94-263,
,	December, 1994
for	This report describes a short-term effort to construct a prototype
,	for the National High-Performance Software Exchange (NHSE).
,	The prototype demonstrates how
,	the evolving National Information Infrastructure (NII) can be used
,	to facilitate sharing of software and information among members of the
,	High Performance Computing and Communications (HPCC) community.
,	Shortcomings of current information searching and retrieval tools
,	are pointed out, and recommendations are given for areas in need
,	of further development.
,	The hypertext home page for the NHSE is accessible at

title	Location-Independent Naming for Virtual Distributed Software 
,	Repositories
by	Shirley Browne, Jack Dongarra, Stan Green,
,	Keith Moore, Theresa Pepin, Tom Rowan, Reed Wade, Eric Grosse 
for	A location-independent naming system for network resources
,	has been designed to facilitate organization and description
,	of software components accessible through a virtual distributed
,	repository.
,	This naming system enables easy and efficient searching and retrieval,
,	and it addresses many of the
,	consistency, authenticity, and integrity issues involved with
,	distributed software repositories by providing mechanisms for
,	grouping resources and for authenticity and integrity checking.
,	This paper details the design of the naming system, describes
,	a prototype implementation of some of the capabilities, and
,	describes how the system fits into the development of the National
,	HPCC Software Exchange, a virtual software repository that has the goal
,	of providing access to reusable software components for
,	high-performance computing.

title	National HPCC Software Exchange
by	Shirley Browne, Jack Dongarra, Stan Green, Keith Moore, Tom Rowan, 
,	Reed Wade, Geoffrey Fox, Ken Hawick, Ken Kennedy, Jim Pool,
,	Rick Stevens, Bob Olson, and Terry Disz
for	This report describes an effort to construct a
,	National HPCC Software Exchange (NHSE).  This system shows how
,	the evolving National Information Infrastructure (NII) can be used
,	to facilitate sharing of software and information among members of the
,	High Performance Computing and Communications (HPCC) community.
,	To access the system use the URL:

title	Digital Software and Data Repositories for Support of
,	Scientific Computing
by	Ronald Boisvert, Shirley Browne, Jack Dongarra, and Eric Grosse
for	This paper discusses the special characteristics and needs of
,	software repositories and describes how these needs have been
,	met by some existing repositories.  These repositories include
,	Netlib, the National HPCC Software Exchange,
,	and the GAMS Virtual Repository.
,	We also describe some systems that provide on-line access
,	to various types of scientific data.
,	Finally, we outline a proposal for integrating software and data 
,	repositories into the world of digital document libraries, in
,	particular CNRI's ARPA-sponsored Digital Library project.

file    srwn10.html
title	Distributed Information Management in the National HPCC
,	Software Exchange
by	Shirley Browne, Jack Dongarra, Geoffrey C. Fox, Ken Hawick,
,	Ken Kennedy, Rick Stevens, Robert Olson, Tom Rowan
for	The National HPCC Software Exchange
,	is a collaborative effort by member institutions of the
,	Center for Research on Parallel Computation
,	to provide network access to HPCC-related software, documents,
,	and data.
,	Challenges for the NHSE include identifying, organizing, filtering,
,	and indexing the rapidly growing wealth of relevant information
,	available on the Web.
,	The large quantity of information necessitates performing these
,	tasks using automatic techniques, many of which make use of parallel
,	and distributed computation, but human intervention is needed for
,	intelligent abstracting,
,	analysis, and critical review tasks.  Thus, major goals of
,	NHSE research are to find the right mix of
,	manual and automated techniques, and to leverage the results of
,	manual efforts to the maximum extent possible.  This paper describes
,	our current information gathering and
,	processing techniques, as well as our future plans for integrating
,	the manual and automated approaches.
,	The NHSE home page is accessible at

title	Management of the NHSE - A Virtual Distributed Digital Library
by	Shirley Browne, Jack Dongarra, Ken Kennedy, Tom Rowan
for	The National HPCC Software Exchange (NHSE) is a distributed collection
,	of software, documents, and data of interest to the high performance
,	computing community.  Our experiences with the design and initial
,	implementation of the NHSE are relevant to a number of general digital
,	library issues, including the publication process, quality control,
,	authentication and integrity, and information retrieval.
,	This paper describes an authenticated submission process that is
,	coupled with a multilevel review process.
,	Browsing and searching tools for aiding with
,	information retrieval are also described.

title	Netlib Internal Data Flow
by	Eric Grosse
for	The netlib repository consists of a dozen cooperating machines,
,	with most coordination done by cron tasks and daemons.  This internal
,	netlib memo describes the flow of files and auxiliary information.
,	It aims to help local authors appreciate which files they may edit
,	directly and which are derived automatically.  It may be of
,	interest to administrators of large, replicated repositories.

title	Repository Mirroring
by	Eric Grosse
for	Distributed administration of network repositories demands
,	a low overhead procedure for cooperating repositories around the
,	world to ensure they hold identical contents.  Netlib has
,	adopted some refinements on the widespread scheme of anonymous
,	ftp and ls-lR.  Checksum files and two small C programs give
,	an easily maintained system that copes with communication
,	breakdowns and subtle changes in repository contents.  The
,	packaging of these C programs inside a shell pipeline provides
,	an explicit command stream that can readily be checked before
,	execution.  Protecting files, keeping logs, and so forth becomes
,	effortless and reliable.  The same tools, applied on a smaller
,	scale, allow more people to participate in the editorial work of
,	maintaining a high-quality repository, by eliminating the need
,	for directly manipulating files at remote sites.
ref	ACM TOMS 21:1 (Mar 1995) 89-97
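
The core of the scheme described above can be sketched as a diff of two checksum listings that yields an explicit, reviewable command stream. The function names and the listing representation here are invented for illustration; netlib's actual C tools and file formats differ.

```python
# Hypothetical sketch: plan a mirror update from checksum listings.
def plan_sync(remote, local):
    """remote, local: dicts mapping path -> checksum.
    Returns (to_fetch, to_delete) so that applying the plan makes the
    local tree hold identical contents to the remote tree."""
    to_fetch = sorted(p for p, c in remote.items() if local.get(p) != c)
    to_delete = sorted(p for p in local if p not in remote)
    return to_fetch, to_delete

def command_stream(to_fetch, to_delete):
    """Render the plan as an explicit command stream that an
    administrator can inspect before execution."""
    cmds = [f"fetch {p}" for p in to_fetch]
    cmds += [f"rm {p}" for p in to_delete]
    return "\n".join(cmds)
```

Checksums, rather than timestamps or sizes alone, let the plan catch subtle changes in repository contents; emitting commands instead of acting directly makes logging and review effortless.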

title	Software Reuse in High Performance Computing
by	Shirley Browne, Jack Dongarra, Geoffrey Fox, Ken Hawick, Tom Rowan
for	Although high performance computing architectures in the form of
,	distributed memory multiprocessors have become available, these
,	machines have not achieved widespread use outside of academic
,	research environments.  The slowness in adopting high performance
,	architectures appears to be caused by the difficulty and cost of
,	programming applications to run on these machines.  Economical use
,	of high performance computing and subsequent adoption by industry
,	will only occur if widespread reuse of application code can be
,	achieved.  To accomplish this goal,
,	we propose strategies for achieving reuse of application
,	code across different machine architectures and for using portable
,	reusable components as building blocks for applications.

title	Means of Achieving Cross-Program Focus, Coordination,
,	and Technology Transfer
by	Shirley Browne, Jack Dongarra, Geoffrey Fox, and Ken Kennedy
for	This paper is in response to the National Science and Technology
,	Council (NSTC) Committee on Information and Communications R&D
,	(CIC) call for white papers on the CIC's Strategic Implementation
,	Plan for America in the Age of Information.
,	We agree with the identification of strategic focus areas and
,	the need for efficient coordination of activities.
,	However, the Strategic Plan provides only a high level description
,	of how the focusing and coordination will be carried out
,	through the use of information technology.
,	The Strategic Plan emphasizes the importance of accelerating
,	the maturation and broad deployment of technologies, but again
,	nowhere is it spelled out how this acceleration will be achieved.
,	We propose the use of domain-specific modeling of complex
,	hardware and software systems, coupled with formation of
,	domain-specific but interoperable resource and knowledge bases for
,	achieving the needed focus, coordination, and technology transfer.
,	We further propose the selection of a particular strategic
,	focus area, namely high performance/scalable systems, and a particular
,	application category, namely high performance applications, in
,	which to initially investigate the effectiveness of the proposed
,	approach.  If this initial focus is successful,
,	the lessons learned and the methods and tools produced
,	will be applicable to the other focus areas.

lib	srwn16
title	Evolving Software Repositories into Interactive Problem
,	Solving Environments
by	Ron Boisvert, Jack Dongarra, and Eric Grosse
for	Poster presentation for ARPA/CSTO Principal Investigator meeting,
,	Ft. Lauderdale, July 10-13, 1995

lib	srwn17
title	A Scalable File Replication Scheme for the World Wide Web
by	Keith Moore, Jason Cox, Stan Green, and Reed Wade
for	The World Wide Web has reached the point where many popular
,	file servers are overloaded, resulting in degradation or unavailability
,	of service. Users are also often separated from a file server
,	they need to access by a low-bandwidth link, resulting in poor
,	response time. The Bulk File Distribution (BFD) system described
,	in this paper aims to alleviate these problems by providing
,	mechanisms for registering and looking up alternative locations for
,	replicated files. The system also includes authentication and
,	integrity checking mechanisms. Unlike other proposed name
,	resolution systems, BFD provides a straightforward consistency
,	model for updates through the use of an intermediate file handle that
,	unambiguously identifies a particular sequence of bytes. We
,	describe a strategy for a gradual transition from using URLs to
,	using location-independent names which achieves the benefits of
,	replication while retaining the familiar URL syntax. We also
,	describe a service called SONAR that is intended to assist client
,	programs in choosing among alternative locations for a file, based
,	on a proximity measure. Finally, we describe a prototype
,	implementation of the BFD system. 
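
The idea of an intermediate file handle that unambiguously identifies a particular sequence of bytes can be sketched with a content-derived digest; the registry class, its methods, and the proximity numbers below are invented for illustration and are not the actual BFD or SONAR protocols.

```python
# Hypothetical sketch of BFD-style handles and replica lookup.
import hashlib

def file_handle(data: bytes) -> str:
    """Content-derived handle: identical bytes always yield the same
    handle, so an update to a file necessarily changes its handle."""
    return "bfd:" + hashlib.sha256(data).hexdigest()

class Registry:
    """Maps a handle to the alternative locations of its replicas."""
    def __init__(self):
        self.locations = {}          # handle -> list of URLs

    def register(self, handle, url):
        self.locations.setdefault(handle, []).append(url)

    def lookup(self, handle, proximity):
        """Return known replicas nearest-first (smaller = closer),
        in the spirit of the SONAR proximity service."""
        urls = self.locations.get(handle, [])
        return sorted(urls, key=lambda u: proximity.get(u, float("inf")))
```

Because the handle is a pure function of the bytes, any replica whose contents match the handle is interchangeable with the original, which gives the straightforward consistency model for updates.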

lib	srwn18
title	Resource Cataloging and Distribution System (RCDS)
by	Keith Moore, Shirley Browne, Stan Green, and Reed Wade
for     We describe an architecture for cataloging the characteristics of
,	Internet-accessible resources, for replicating such resources to
,	improve their accessibility, and for cataloging the current locations
,	of the resources so replicated.  Message digests and public-key
,	authentication are used to ensure the integrity of the files provided
,	to users.  The service is designed to provide increased functionality
,	with only minimal changes to either a client or a server.  Resources
,	can be named either by URNs or by existing URLs, and the service is
,	designed to facilitate long-term resolution of resource names.
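
The message-digest integrity check mentioned above can be sketched as a catalog that records a digest per resource, against which a client verifies any retrieved copy. The entry format and function names are hypothetical, and the public-key authentication of catalog entries is omitted from this sketch.

```python
# Hypothetical sketch of catalog-based integrity checking.
import hashlib

def catalog_entry(name, data: bytes):
    """Record the resource's message digest at cataloging time."""
    return {"name": name, "md": hashlib.sha256(data).hexdigest()}

def verify(entry, retrieved: bytes) -> bool:
    """True iff the retrieved bytes match the cataloged digest, so a
    client can trust a copy fetched from any replica location."""
    return hashlib.sha256(retrieved).hexdigest() == entry["md"]
```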

title	Software Repository Interoperability
by	Shirley Browne, Jack Dongarra, Kay Hohn, and Tim Niesen
for	A number of academic, commercial, and government
,	software repositories currently exist that provide access to
,	software packages, reusable software components, and related documents,
,	either via the Internet or via intra-organizational intranets.
,	It is highly desirable, both for user convenience and savings in
,	duplication of effort, that these repositories interoperate.
,	This paper describes interoperability standards that have
,	already been developed as well as those under development by the
,	Reuse Library Interoperability Group (RIG).  These standards include
,	a data model for a common semantics for describing software resources,
,	as well as frameworks for describing software certification policies
,	and intellectual property rights.  The National HPCC Software Exchange
,	(NHSE) is described as an example of an organization that is achieving
,	interoperation between government and academic HPCC software 
,	repositories, in part through adoption of RIG standards.

title	Reuse Library Interoperability and the World Wide Web
by	Shirley Browne and James Moore
for	The Reuse Library Interoperability Group (RIG) was formed in 1991
,	for the purpose of drafting standards enabling the interoperation
,	of software reuse libraries.  At that time, prevailing wisdom
,	among many reuse library operators was that each should be a
,	stand-alone operation.  Many operators saw a need for only a single
,	library, their own, and most strove to provide the most general
,	possible services to appeal to a broad community of users.
,	The ASSET program, initiated by the Advanced Research Projects Agency
,	STARS program, was the first to make the claim that it should
,	properly be one part of a network of interoperating libraries.
,	Shortly thereafter, the RIG was formed, initially
,	as a collaboration between the STARS program and the Air Force
,	RAASP program, but growing within six months to a self-sustaining
,	cooperation among twelve chartering organizations.
,	The RIG has grown to include over twenty members from government,
,	industry, and academic reuse libraries.  It has produced a number
,	of technical reports and proposed interoperability standards, some
,	of which are described in this report.

file	srwn21.html
title	The Netlib Mathematical Software Repository
by      Shirley Browne, Jack Dongarra, Eric Grosse, Tom Rowan
for     The Netlib repository contains freely available software, documents, 
,	and databases of interest to the numerical, scientific computing,
,	and other research communities. The repository is maintained by 
,	AT&T Bell Laboratories, by the University of Tennessee and Oak 
,	Ridge National Laboratory, and by colleagues world-wide.
,	Many sites around the world mirror the collection and are automatically 
,	synchronized to provide reliable and efficient service to the 
,	global community through a variety of access mechanisms. 
ref	Appeared in DLIB Magazine, September 1995

title	Interactive and Dynamic Content in Software Repositories
by	Ronald Boisvert, Shirley Browne, Jack Dongarra, Eric Grosse, and
,	Bruce Miller
for     The goal of our software repository research is to improve access
,	to tools for doing computational science for both expert and non-expert
,	users. We are exploring the use of emerging Web and network technologies
,	for enhancing repository usability and interactivity.  Technologies
,	such as Java, Inferno/Limbo, and remote execution services
,	can interactively assist users in searching for, selecting, and
,	using scientific software and computational tools.
,	This paper describes experimental interfaces and services
,	we have developed for traversing a software classification
,	hierarchy, for selection of software and test problems,
,	and for remote execution of library software.
,	After developing and testing our research prototypes, we deploy
,	them in working network services useful to the computational
,	science community.