From owner-pbwg-lowlevel@CS.UTK.EDU  Wed Mar 24 07:03:07 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA07972; Wed, 24 Mar 93 07:03:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11483; Wed, 24 Mar 93 07:03:00 -0500
X-Resent-To: pbwg-lowlevel@CS.UTK.EDU ; Wed, 24 Mar 1993 07:02:59 EST
Errors-To: owner-pbwg-lowlevel@CS.UTK.EDU
Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA11474; Wed, 24 Mar 93 07:02:50 -0500
Via: uk.ac.southampton.ecs; Wed, 24 Mar 1993 11:56:04 +0000
From: R.Hockney@parallel-applications-centre.southampton.ac.uk
Via: calvados.pac.soton.ac.uk (plonk); Wed, 24 Mar 93 11:48:57 GMT
Date: Wed, 24 Mar 93 11:54:47 GMT
Message-Id: <3347.9303241154@calvados.pac.soton.ac.uk>
Apparently-To: pbwg-lowlevel@cs.utk.edu

This is a test message, please tell me if you have received it
                           Roger Hockney
From owner-pbwg-lowlevel@CS.UTK.EDU  Fri Apr 30 12:31:43 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA18443; Fri, 30 Apr 93 12:31:43 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA22988; Fri, 30 Apr 93 12:31:01 -0400
X-Resent-To: pbwg-lowlevel@CS.UTK.EDU ; Fri, 30 Apr 1993 12:30:46 EDT
Errors-To: owner-pbwg-lowlevel@CS.UTK.EDU
Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA22904; Fri, 30 Apr 93 12:30:25 -0400
Via: uk.ac.southampton.ecs; Fri, 30 Apr 1993 16:09:52 +0100
From: R.Hockney@parallel-applications-centre.southampton.ac.uk
Via: calvados.pac.soton.ac.uk (plonk); Fri, 30 Apr 93 16:01:59 BST
Date: Fri, 30 Apr 93 15:09:09 GMT
Message-Id: <15924.9304301509@calvados.pac.soton.ac.uk>
To: pbwg-lowlevel@cs.utk.edu
Subject: Draft Report Chapter-2

                   FIRST DRAFT LOW-LEVEL CHAPTER
                   -----------------------------

I have put together below my thoughts on the contents of the low-level
benchmarks, which are mostly what we have at Southampton and in my
own work. This is for comment, to be merged with contributions from
other subcommittee members (none in yet), and finally for presentation to
the 24 May 1993 meeting of PBWG.
     I believe we need basic multi-processor benchmarks to measure
various global operations in addition to SYNCH1 (e.g. global sum and 
multiply reduction, global 'or' and 'and', etc). Perhaps other members have 
these or would like to write benchmarks and suggest appropriate metrics 
to use.
     I heard some talk about the alpha and beta parameters at the last
meeting. Can someone tell me what they measure and send me the appropriate
reference? Thank you,
                   Yours sincerely
                         Roger Hockney 
            (leader low-level benchmark subcommittee)
There follows the LaTeX version of the draft low-level chapter
------------------------  cut here  -----------------------------------------
%file: lowlev1.tex
\chapter{Low-Level Benchmarks}
\footnote{Assembled by Roger Hockney for the low-level subcommittee.}

\section{Single-Processor Benchmarks}

\subsection{Timer resolution: TICK1}

TICK1 measures the interval between ticks of the clock being used
in the benchmark measurements, that is to say the resolution of
the clock. A succession of calls to the timer routine is inserted
in a loop and executed many times, and the differences between
successive values returned by the timer are examined. If the clock
ticks less frequently than the time taken to enter and leave the
timer routine, then most of these differences will be zero. When a
tick does take place, however, a difference equal to the tick value
is recorded, surrounded by many zero differences. This is the case
with clocks of poor resolution, for example most UNIX clocks, which
typically tick every 10 ms. Such poor UNIX clocks can still be used
for low-level benchmark measurements if the benchmark is repeated,
say, 10,000 times, and the timer calls are made outside this
repeat loop.

With some computers, such as the
CRAY series, the clock ticks every cycle of the computer, that is to say
every 6ns on the Y-MP. The resolution of the CRAY clock is therefore
approximately one million times better than a UNIX clock, and that is
quite a difference! If TICK1 is used on such a computer the difference
between successive values of the timer is a very accurate measure of
how long it takes to execute the instructions of the timer routine, and
therefore is never zero. TICK1 takes the minimum of all such differences,
and all it is possible to say is that the clock tick is less than or
equal to this value. Typically this minimum will be several hundreds
of clock ticks. With a clock ticking every computer cycle, we can make
low-level benchmark measurements without a repeat loop. Such measurements
can even be made on a busy timeshared system (where many users are
contending for memory access) by taking the minimum time recorded from
a sample of, say, 10,000 single execution measurements. In this case,
the minimum can usually be said to apply to a case when there was no
memory access delay caused by other users.
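
The procedure can be sketched in a few lines. The following is a minimal Python illustration (not the Fortran original; the function name is invented here), using \verb|time.perf_counter| as the clock under test:

```python
import time

def estimate_tick(samples=100_000):
    """TICK1-style sketch: call the timer back-to-back many times and
    keep the smallest non-zero difference between successive readings.
    On a coarse clock most differences are zero and the survivor is
    the tick; on a fine clock it is only an upper bound on the tick,
    since the timer-call overhead dominates."""
    best = float("inf")
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        diff = now - prev
        if 0.0 < diff < best:
            best = diff
        prev = now
    return best
```

On a system with a fine-grained clock the returned value measures the cost of the timer call itself, exactly as described above for the CRAY.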

TICK1 exists and forms part of the Genesis benchmarks v2.0 and
v2.1.1~\cite{Hey91}.

\subsection{Timer value: TICK2}

TICK2 confirms the correctness of the time values returned by the
computer clock by comparing its measurement of a given time interval
with that of an external wall-clock (actually the benchmarker's 
wristwatch). Parallel benchmark performance can only be measured
using the elapsed wall-clock time, as the objective of
parallel execution is to reduce this time. Measurements made with a
CPU-timer (which only records time when its job is executing in the CPU) 
are clearly incorrect, because the clock does not record waiting time 
when the job is out of the CPU. TICK2 will immediately detect the 
incorrect use of a CPU-time-for-this-job-only clock. An example
of a timer that claims to measure elapsed time but is actually a
CPU-timer, is the returned value of the popular Sun UNIX timer ETIME. 
TICK2 also checks that the correct multiplier is being used in the 
computer system software to convert clock ticks to true seconds.  
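The distinction TICK2 draws can be demonstrated with a short sketch. This Python fragment (an illustration, not part of TICK2 itself) compares a wall-clock reading against a CPU-only timer across a sleep, with \verb|time.process_time| standing in for a CPU-timer such as ETIME:

```python
import time

def tick2_check(interval=0.25):
    """TICK2-style sketch: compare two timers across a sleep.
    A true wall clock advances by roughly `interval`; a CPU-only
    timer barely moves, because the process accumulates no CPU
    time while it is asleep (exactly the error TICK2 detects)."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    time.sleep(interval)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return wall, cpu
```

A timer that tracks the sleep is usable for parallel benchmarking; one that does not is a CPU-timer in disguise.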

TICK2 exists and will form part of the next release of the Genesis 
benchmarks, v2.2~\cite{Hey91}.


\subsection{Most Reported Benchmark: LINPACKD (n=100)}

This well-known standard benchmark is a Fortran program
for the solution of a dense ($100\times100$) set of linear equations by
Gaussian elimination. It is distributed by Dr J. J. Dongarra of
the University of Tennessee. The results are quoted in Mflop/s
and are regularly published and available by electronic mail.
The main value of this benchmark is that results are known for 
more computers than any other benchmark. Most of the compute time 
is contained in vectorisable DO-loops such as the DAXPY (scalar 
times vector plus vector) and inner product. Therefore one expects 
vector computers to perform well on this benchmark. The weakness 
of the benchmark is that it tests only a small number of vector 
operations, but it does include the effect of memory access and
it is solving a complete (although small) real problem. 
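For reference, the two dominant loops are trivial to state. In Python (the benchmark itself is Fortran; these names follow the BLAS conventions the text cites):

```python
def daxpy(alpha, x, y):
    """DAXPY kernel: y := alpha*x + y (scalar times vector plus
    vector), the loop that dominates LINPACK's run time."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Inner (dot) product, the other dominant LINPACK loop."""
    return sum(xi * yi for xi, yi in zip(x, y))
```

Both loops stream through long vectors with simple, regular access, which is why vector computers do well on this benchmark.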

\subsection{Performance Range: The Livermore Loops}

These are a set of 24 Fortran DO-loops (or kernels) 
extracted from operational codes used at the Lawrence Livermore 
National Laboratory \cite{Ma88}. They have been used since the early 
seventies to assess the arithmetic performance of computers and their 
compilers. They are a mixture of vectorisable and non-vectorisable
loops and test rather fully the computational capabilities of the
hardware, and the skill of the software in compiling efficient code,
and in vectorisation. The main value of the benchmark is the range
of performance that it demonstrates, and in this respect it 
complements the limited range of loops tested in the LINPACK benchmark. 
The benchmark provides the individual performance of each loop,
together with various averages (arithmetic, geometric, harmonic)
and the quartiles of the distribution. However, it is difficult to
give a clear meaning to these averages, and the value of the benchmark
is more in the distribution itself.  In particular, the maximum and
minimum give the range of likely performance in full applications.
The ratio of maximum to minimum performance has been called the
{\em instability} or the {\em speciality} ~\cite{Hoc91}, and is a measure 
of how difficult it is to obtain good performance from the computer,
and therefore how specialised it is. The minimum or worst performance
obtained on these loops is of special value, because there is much
truth in the saying that ``the best computer to choose is that with 
the best worst-performance''.
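
The summary statistics mentioned above are straightforward to compute. A minimal Python sketch (function name invented here) over a list of per-loop Mflop/s figures:

```python
import math

def loop_summary(rates):
    """Sketch of the statistics reported over the per-loop Mflop/s
    figures: arithmetic, geometric and harmonic means, plus the
    max/min ratio (the 'instability' or 'speciality')."""
    m = len(rates)
    arith = sum(rates) / m
    geom = math.exp(sum(math.log(r) for r in rates) / m)
    harm = m / sum(1.0 / r for r in rates)
    return arith, geom, harm, max(rates) / min(rates)
```

As the text notes, the extremes and their ratio are more informative than any single average.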


\subsection{Basic Arithmetic Operations: RINF1}

This benchmark takes a set of common Fortran DO-loops and analyses
their time of execution in terms of the two parameters \rnhalf
~\cite{Hoc77,HoJe81,Hoc82,Hoc83,Hoc87,HoJe88}. \rinf is the 
asymptotic performance rate in Mflop/s which 
is approached as the loop (or vector) length ,$n$, becomes longer. \nhalf 
(the half-performance length) expresses how rapidly, in terms
increasing vector length, the actual performance, $r$, approaches \rinf. 
It is defined as the vector length required to achieve a performance 
of one half of \rinf. This means that the time, $t$, for a DO-loop
corresponding to $v$ vector operations (i.e. with $v$ floating-point 
operations per element per iteration) is approximated by:
\begin{equation}
                  t = v * ( n + \nhalf ) / \rinf          \label{Eqn1}
\end{equation}      
         Then the performance rate is given by
\begin{equation}
        r =  \frac{v*n}{t} = \frac{\rinf}{(1+\nhalf /n)}   \label{Eqn2}
\end{equation}                                        
We can see from Eqn.(\ref{Eqn1}) that \nhalf is a way of measuring the
importance of vector startup overhead (=\nhalf/\rinf) in terms
of quantities known to the programmer (loop or vector length).
In the benchmark program, the two parameters are determined by a
least-squares fit of the data to the straight line defined by 
Eqn.(\ref{Eqn1}). A useful guide to the significance of \nhalf is to note
from Eqn.(\ref{Eqn2}) that 80 percent of the asymptotic performance is
achieved for vectors of length $4*\nhalf$. Generally speaking, \nhalf
values of up to about 50 are tolerable, whereas the performance
of computers with larger values of \nhalf is severely constrained
by the need to keep vector lengths significantly longer than \nhalf.
This requirement makes the computers difficult to program efficiently,
and often leads to disappointing performance, compared to the
asymptotic rate advertised by the manufacturer.  
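
The least-squares fit of Eqn.(\ref{Eqn1}) can be sketched directly: writing $t = a n + b$ gives $\rinf = v/a$ and $\nhalf = b/a$. A minimal Python illustration (the function name is an invention for this sketch, not part of RINF1):

```python
def fit_rinf_nhalf(ns, ts, v=1):
    """Least-squares fit of t = v*(n + nhalf)/rinf  (Eqn 1).
    With t = a*n + b, we recover rinf = v/a and nhalf = b/a."""
    m = len(ns)
    sn, st = sum(ns), sum(ts)
    snn = sum(n * n for n in ns)
    snt = sum(n * t for n, t in zip(ns, ts))
    a = (m * snt - sn * st) / (m * snn - sn * sn)  # slope
    b = (st - a * sn) / m                          # intercept
    return v / a, b / a   # (rinf, nhalf)
```

Applied to exact timing data generated from the model, the fit recovers the two parameters directly.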

RINF1 exists as part of the Hockney and Genesis benchmarks v2.0 and
v2.1.1~\cite{Hey91}. 
An independently written version forms module MOD1AC of the EuroBen
benchmarks~\cite{StRi93}.


\subsection{Memory-Bottleneck Benchmarks: POLY1 and POLY2}

Even if the vector lengths are long enough to overcome the vector 
startup overhead, the peak rate of the arithmetic pipelines may not
be realised because of the delays associated with obtaining data
from the cache or main memory of the computer. The POLY1 benchmark attempts 
to quantify this dependence of computer performance on memory access
bottlenecks. The computational intensity, $f$, of a DO-loop is defined 
as the number of floating-point operations (flop) performed per memory 
reference (mref) to an element of a vector variable ~\cite{HoJe88}. 
The asymptotic performance, \rinf, of a computer is observed to increase 
as the computational intensity increases, because as this
becomes larger, the effects of memory access delays become negligible
compared to the time spent on arithmetic. This effect is characterised
by the two parameters (\rhat,\fhalf), where \rhat~ is the peak hardware
performance of the arithmetic pipeline, and \fhalf is the computational
intensity required to achieve half this rate. That is to say the
asymptotic performance is given by:
\begin{equation}
              \rinf = \frac{\rhat}{(1+\fhalf/f)}          \label{Eqn3}
\end{equation}                                        
If memory access and arithmetic are not overlapped, then \fhalf can
be shown to be the ratio of arithmetic speed (in Mflop/s) to memory
access speed (in Mword/s). The parameter \fhalf, like \nhalf, measures 
an unwanted overhead
and should be as small as possible. In order to vary $f$ and allow
the peak performance to be approached, we choose a kernel loop that
can be computed with maximum efficiency on any hardware. This is the
evaluation of a polynomial by Horner's rule, in which case the
computational intensity is the order of the polynomial, and both the
multiply and add pipelines are used. To measure \fhalf, the order of 
the polynomial is increased from one to ten, and the measured 
performance for long vectors is fitted to Eqn.(\ref{Eqn3}).
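
The fit to Eqn.(\ref{Eqn3}) linearises nicely: $1/\rinf = 1/\rhat + (\fhalf/\rhat)(1/f)$, a straight line in $1/f$. A Python sketch of this step (illustrative only; the function name is invented here):

```python
def fit_rhat_fhalf(fs, rinfs):
    """Fit rinf = rhat/(1 + fhalf/f)  (Eqn 3) by linearising:
    1/rinf = (1/rhat) + (fhalf/rhat)*(1/f), least squares in 1/f."""
    xs = [1.0 / f for f in fs]
    ys = [1.0 / r for r in rinfs]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    inter = (sy - slope * sx) / m
    rhat = 1.0 / inter
    return rhat, slope * rhat   # (rhat, fhalf)
```

In the benchmark the $f$ values are the polynomial orders one to ten, and the $\rinf$ values the measured long-vector rates.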

The POLY1 benchmark repeats the polynomial evaluation for each order
typically 1000 times for vector lengths up to 10,000, which would 
normally fit into the cache of a cache-based processor. Except for 
the first evaluation the data will therefore be found in the cache. 
POLY1 is therefore an {\em in-cache} test of the memory bottleneck 
between the arithmetic registers of the processor and its cache. 

POLY2, on the other hand, flushes the cache prior to each different
order and then performs only one polynomial evaluation,
for vector lengths from 10,000 up to 100,000, which would normally
exceed the cache size. Data will have to be brought from off-chip
memory, and POLY2 is an {\em out-of-cache} test of the memory
bottleneck between off-chip memory and the arithmetic registers.
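
The kernel itself is Horner's rule applied element-wise. A Python sketch of the loop (the benchmarks are Fortran; this is only an illustration of the kernel):

```python
def horner(x_vec, coeffs):
    """POLY1/POLY2 kernel sketch: evaluate a polynomial at every
    element of x_vec by Horner's rule. For order k there are k
    multiplies and k adds per element against one vector load and
    one store, so the computational intensity grows with the order
    and both multiply and add pipelines are kept busy."""
    out = []
    for x in x_vec:
        s = coeffs[0]
        for c in coeffs[1:]:
            s = s * x + c
        out.append(s)
    return out
```

Raising the order raises $f$ without changing the memory traffic, which is what lets the fit isolate the memory bottleneck.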

The POLY1 benchmark exists as MOD1G of the EuroBen 
benchmarks~\cite{StRi93}. POLY2 exists as part of the Hockney benchmarks.


\subsection{Arithmetic Benchmark Results}

As an indication of the type of results given by the proposed low-level
arithmetic benchmarks, Table-\ref{Table1} gives measurements made on a 
number of workstations, and microprocessor chips that are used as 
processing nodes in multiprocessor MIMD computers. 


\begin{table}
\centering
{\small
\parbox{5in}{\caption{\label{Table1}
Performance of some common numerical benchmarks on some common workstations 
and microprocessor chips used in MIMD computers. Measurements were made with 
the highest level of optimisation that ran, and are in Mflop/s for 64-bit 
precision, except where stated in parentheses. The units of \nhalf are vector 
length, and \fhalf are flop/mref (floating-point operations per memory 
reference). Results are for the best generally available compiler on the 
date shown. Those for the i860 are for the first Greenhills compiler which 
is known not to use many important i860 hardware features. 
Later more advanced compilers should give significantly better results.  
}}
\begin{tabular}{lccccccc}
\hline
               &Sun   &Solbourne&Stardent&Inmos&Intel&IBM RS/  &DEC   \\
 Benchmark     &Sparc1&System 5 &ST2025 &T800  &i860 &6000-530 &$\alpha$\\
               &      &         &       &20MHz &40MHz& 25MHz   &133MHz \\
\hline
      d/m/y &18/1/90&25/1/90 &8/8/89 &15/4/89&6/8/90&14/6/90&13/1/93  \\  
%           m/y & 1/90 &  1/90   &  8/89 &  4/89 & 8/90 & 6/90 & 1/93  \\  
\hline
   Linpackd    & 1.27 &  2.79   & 4.32  & 0.33  & 3.89 & 9.54 & 20.7  \\
     n=100 \\
\hline
   Livermore   & 2.36 &  4.64   &       & 0.72  & 8.76 & 31.8 & 46.6  \\
     Maximum \\
\hline
   Livermore   & 0.45 &  0.89   & 0.45  & 0.10  & 0.47 & 1.34 & 4.47  \\
     Minimum \\
\hline
   RINF1(32') \\
   \rinf      & 1.29 &  2.50   &19.29  &  0.34 & 4.62  & 5.13 & 33.8  \\
  (\nhalf)    &(0.30)& (1.00)  &(1.03) &   (0) &(3.61) &      &(12.2) \\
\hline
   POLY1 \\
   \rhat      & 2.50 &  5.18   &42.31  &        &10.59   &25.85 & 88.9\\
  (\fhalf)    &(0.77)& (0.60)  &(0.51) &        &(1.12)  &(0.34)& (0.71)\\
\hline
\hline
\end{tabular}
}
\end{table}

Table-\ref{Table1} shows that the DEC $\alpha$ chip outperforms
all other workstations and chips on all benchmarks by a significant 
margin, as befits the start of a new generation of chips. However, one 
cannot help being impressed by the figures. The remaining workstations 
and chips are compared with each other below.

Table-\ref{Table1} shows that the IBM RS/6000 chip set performs 
best on the LINPACKD100 benchmark, followed by the Stardent ST2025 
which has a vector architecture. 
The i860 performs significantly worse than the IBM 6000. However
the benchmark performance of both machines is expected to improve 
as their compilers develop.

Table-\ref{Table1} gives the maximum and minimum 
performance observed in the 24 Livermore loops. 
The minimum performance can be taken as giving the 
worst scalar arithmetic performance that is likely to be found, 
and the maximum gives the best performance that is likely to be 
seen on highly vectorisable loops. The computer with the best 
worst-performance, which is a very good metric to examine, is 
the IBM RS/6000 followed by the Solbourne. The best maximum 
performance is seen in the RS/6000 followed by the i860.

The RINF1 benchmark gives values of the \rnhalf parameters for the kernel 
A=B*C (vector = vector $\times$ vector), and shows the Stardent ST2025 
performing best with the
highest \rinf and lowest \nhalf, followed by the IBM RS/6000. 

The POLY1 benchmark shows 
the Stardent ST2025 with the highest peak performance, followed 
by the IBM RS/6000 and then the i860. Of these three, the value for 
the IBM is best, and the Stardent quite low, but the value greater 
than one for the i860 shows that there is a severe memory bottleneck 
problem with this chip that will prevent it from getting close to its
peak advertised performance on many problems.


\section{Multi-Processor Benchmarks}


\subsection{Communication Benchmarks: COMMS1 and COMMS2}

The purpose of the COMMS1, or {\em Pingpong}, benchmark \cite{Hoc88,Hoc91} 
is to measure the basic communication properties of a message-passing MIMD 
computer. A message of variable length, $n$, is sent from a master node to 
a slave node. The slave node receives the message into a Fortran data array, 
and immediately returns it to the master.  
Half the time for this `pingpong' exchange is recorded as the
time, $t$, to send a message of length, $n$.  
In the COMMS2 benchmark there is a message exchange in which two nodes 
simultaneously send messages to each other and return them. In this case
advantage can be taken of bidirectional links, and a greater bandwidth
obtained than is possible with COMMS1. 
In both benchmarks, the time as a function of
message length is fitted by least squares using the parameters
\rnhalf \cite{Hoc82,HoJe88} to the following linear timing model:
\begin{equation}
                     t = (n + \nhalf)/\rinf            \label{Eqn(4.1)}
\end{equation}
when the average communication rate is given by
\begin{equation}
    r = \frac {\rinf}{1+\nhalf/n} = \rinf \pipe (n/\nhalf) \label{Eqn(4.2)}
\end{equation}
\begin{equation}
\where \spten   \pipe (x) = \frac {1}{1 + 1/x}
\end{equation}
and the startup time is
\begin{equation}
                     t_0 = \nhalf/\rinf                \label{Eqn(4.3)}
\end{equation}
In the above equations, \rinf is the {\em asymptotic bandwidth} of 
communication
which is approached as the message length tends to infinity (hence the
subscript), and \nhalf is the message length required to achieve half
this asymptotic rate. Hence \nhalf is called the {\em half-performance
message length}. 
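
The measurement loop behind this model is the pingpong itself. The following Python sketch uses threads and queues standing in for nodes and links (an illustration of the method only; the real benchmark passes messages between processors):

```python
import queue
import threading
import time

def pingpong_time(n, repeats=200):
    """COMMS1-style sketch: the master sends an n-byte message, the
    echo thread returns it, and half the average round-trip time is
    taken as the time t to send one message of length n."""
    to_slave, to_master = queue.Queue(), queue.Queue()

    def slave():
        while True:
            msg = to_slave.get()
            if msg is None:          # shutdown signal
                return
            to_master.put(msg)       # bounce the message straight back

    echo = threading.Thread(target=slave)
    echo.start()
    payload = b"x" * n
    t0 = time.perf_counter()
    for _ in range(repeats):
        to_slave.put(payload)
        to_master.get()
    elapsed = time.perf_counter() - t0
    to_slave.put(None)
    echo.join()
    return elapsed / (2 * repeats)
```

Timing this for a range of $n$ and fitting the $(t,n)$ pairs to Eqn.(\ref{Eqn(4.1)}) yields \rinf and \nhalf.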

The importance of the parameter \nhalf is that it provides a yardstick with
which to measure message-length, and thereby enables one to distinguish 
the two regimes of short and long messages. 
For long messages $(n > \nhalf)$, the denominator in equation \ref{Eqn(4.2)}
is approximately unity and the average communication rate is approximately
constant at its asymptotic rate, \rinf 
\begin{equation}
                   r \approx \rinf                    \label{Eqn(4.3.5)}
\end{equation} 
For short messages $(n < \nhalf)$, the communication rate is best 
expressed in the algebraically equivalent form
\begin{equation}
                 r = \frac {\pi_0 n} {(1+ n/ \nhalf)}   \label{Eqn(4.4)}
\end{equation}
\begin{equation}
\where \spten     \pi_0 = t_0 ^{-1} = \rinf/\nhalf      \label{Eqn(4.5)}
\end{equation} 
For short messages, the denominator in equation \ref{Eqn(4.4)} will be 
approximately unity, so that
\begin{equation}
                 r \approx \pi_0 n = n / t_0            \label{Eqn(4.6)}
\end{equation}
In sharp contrast to the approximately constant rate in the long-message 
limit, the communication rate in the short message limit is seen to be 
approximately proportional to the message length. 
The constant of proportionality, $\pi_0$, is known
as the {\em specific performance}, and can be expressed conveniently
in units of kilobyte per second per byte (kB/s)/B or k/s. 

Thus, in general, we may say that \rinf characterises the long-message 
performance and $\pi_0$ the short-message performance. The COMMS1 benchmark 
computes all four of the above parameters, 
$(\rinf, \nhalf, t_0, \rmand \pi_0)$,
because each emphasises a different aspect of performance. However only 
two of them are independent. In the case that there are different modes
of transmission for messages shorter or longer than a certain length, the
benchmark can read in this breakpoint and perform a separate least-squares 
fit for the two regions. An example is the Intel iPSC/860, which has a
different message protocol for messages shorter than and longer than 100 bytes. 
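
The dependent pair follow by simple arithmetic from \rnhalf, per Eqns.(\ref{Eqn(4.2)}), (\ref{Eqn(4.3)}) and (\ref{Eqn(4.5)}). A small Python sketch (helper names invented here):

```python
def comm_rate(n, rinf, nhalf):
    """Average rate r(n) from the two-parameter model, Eqn (4.2)."""
    return rinf / (1.0 + nhalf / n)

def derived_params(rinf, nhalf):
    """The dependent pair: t0 = nhalf/rinf and pi0 = rinf/nhalf."""
    return nhalf / rinf, rinf / nhalf
```

For example, the iPSC/860 long-message figures in Table-\ref{Table4.1} ($\rinf = 2.80$ MB/s, $\nhalf = 560$ B) give $t_0 = 0.200$ ms and $\pi_0 = 5.0$ k/s, in agreement with the tabulated values.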

Because of the finite (and often large) value 
of $t_0$, the above is a {\em two-parameter} description of communication 
performance. It is therefore incorrect, and sometimes positively misleading,
to quote only one of the parameters (e.g. just \rinf, as is often done) to
describe the performance. The most useful pairs of parameters are \rnhalf
$(\pi_0,\nhalf)$ and $(t_0,\rinf)$, depending on whether one is concerned
with long vectors, short vectors or a direct comparison with hardware times. 
Note also that, although \nhalf is defined
as the  message length required to obtain half the asymptotic rate \rinf,
the two parameters \rnhalf are sufficient to calculate the average rate for
any message length via equation \ref{Eqn(4.2)}, or equivalently using 
$\pi_0$ instead of \rinf via \ref{Eqn(4.4)}.

The COMMS1 and COMMS2 benchmarks exist as part of the Genesis benchmarks
v2.0 and v2.2.1~\cite{Hey91}.

\subsection{Example Results for the COMMS1 benchmark}

We report below results for the COMMS1 benchmark on the \Suprenum,
Intel iPSC/860~\cite{Hoc91} and Touchstone Delta~\cite{HoCa92} 
computers. 

Table-\ref{Table4.1} gives the values obtained for the communication 
parameters, in the version of the benchmark using the native \Suprenum
extensions to the Fortran90 language. These include a SEND and RECEIVE
language statement with a syntax similar to that of the Fortran READ
and WRITE statement.
The asymptotic stream rate, or bandwidth, (\rinf) shows considerable
variation on the Suprenum, depending on how the data to be transferred
is specified in the I/O list of the SEND statement.  A variable length
array in Fortran90 syntax in single precision achieves 0.67 MB/s,
whereas the same statement specified in double precision achieves 4.8
MB/s.   This double-precision rate is about twice that observed
on the iPSC/860 with their CSEND Fortran subroutine, which sends an array 
whose length is specified in bytes. The principal difference between the 
two computers is the magnitude of the startup time, $t_0$, which is 
$73\mu s$ on the iPSC/860 compared with about 3ms on the Suprenum.  Since 
the startup time, via $\pi_0$, determines the transfer rate for short 
messages (say $<100$B), we see that the Suprenum is 45 times slower than 
the iPSC/860 for short messages. On the other hand the Suprenum has almost 
twice the stream rate for long messages (as seen by the value of \rinf), 
provided the most favourable format (i.e. double precision or 64-bit)
is used in the I/O list.  One may compute from these numbers that the
iPSC/860 is faster at transferring messages for all message lengths
less than 16,481 bytes. The longer startup time on Suprenum results in 
larger values of \nhalf, showing that longer messages are needed to 
achieve any given fraction of the asymptotic rate.

The results for the Touchstone Delta show that this computer has the
fastest short- and long-message performance, judged respectively by the
values of $\pi_0$ and \rinf. However, the improvement in short-message
performance over the iPSC/860 is only marginal, and the long-message 
performance is only about one quarter of the advertised bandwidth of 
25 MB/s. Hardware and software changes made since these measurements
were taken should have improved the results.

\begin{table}
\centering
{\small
\parbox{3.5in}{
\caption{\label{Table4.1}
          Values of (\rinf,\nhalf) for the communication of messages
          between two nodes of the same cluster on the Suprenum and 
          neighbouring nodes on the Intel iPSC/860 and Touchstone
          Delta computers. The Delta measurements were made on 17 Jan.
          1992, and should have been improved by subsequent hardware
          and software changes.}}
\begin{tabular}{llcccc}
\hline
Specification   & Range      & \rinf   & \nhalf   & $t_0$  & $\pi_0$   \\
                & B*         & MB/s    & B        & ms     & k/s       \\
\hline
   SUPRENUM                                                            \\
sp SEND A(1:N)  &            & 0.67    &  2041    & 3.05   & 0.328     \\
dp SEND A(1:N)  &            & 4.82    & 12740    & 2.64   & 0.378     \\
\hline
   INTEL iPSC/860                                                      \\
CSEND (,A,N,,)  & $N<100$    & 2.36    & 179      & 0.074  & 13.5      \\
                & $N>100$    & 2.80    & 560      & 0.200  &  5.0      \\
\hline
   INTEL Delta                                                         \\
CSEND (,A,N,,)  & $N<512$    & 3.48    & 213      & 0.061  & 16.3      \\
                & $N>512$    & 6.76    & 892      & 0.132  & 7.57      \\
\hline
* B - byte

\end{tabular}
}
\end{table}

% ----------------------------------------------------------------------------



\subsection{Total Saturation Bandwidth: COMMS3}

To complement the above communication benchmarks, there is a need for
a benchmark to measure the total saturation bandwidth of the complete
communication system, and to see how this scales with the number
of processors. A natural generalisation of the COMMS2 benchmark
could be made as follows, and be called the COMMS3 benchmark:
Each processor of a $p$-processor system sends a message of length $n$
to the other $(p-1)$ processors. Each processor then waits to receive
the $(p-1)$ messages directed at it. The timing of this generalised
`pingping' ends when all messages have been successfully received by all
processors; the process is repeated many times to obtain
an accurate measurement, and the overall time is divided by the
number of repeats. The time for the generalised pingping is the time to
send $p(p-1)$ messages of length $n$ and can be analysed in the same way
as COMMS1 and COMMS2 into values of \rnhalf. The value obtained for \rinf
is the required total saturation bandwidth, and we are interested in how
this scales up as the number of processors $p$ increases and with it the
number of available links in the system.
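
Since the benchmark is only outlined above, the following Python sketch is purely hypothetical, with threads and queues standing in for processors and links; it shows the intended all-to-all round structure, not a real implementation:

```python
import queue
import threading
import time

def comms3_time(p=4, n=256, repeats=20):
    """Sketch of the proposed COMMS3 'generalised pingping': each of
    p workers sends an n-byte message to the other p-1 workers, then
    receives the p-1 messages directed at it; the total time is
    divided by the number of repeats."""
    boxes = [queue.Queue() for _ in range(p)]
    start = threading.Barrier(p)       # keeps the rounds in step
    payload = b"x" * n

    def worker(me):
        for _ in range(repeats):
            start.wait()
            for other in range(p):
                if other != me:
                    boxes[other].put(payload)
            for _ in range(p - 1):
                boxes[me].get()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(p)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return (time.perf_counter() - t0) / repeats
```

Each round moves $p(p-1)$ messages of length $n$, so the times can be fitted to the same \rnhalf model as COMMS1 and COMMS2.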

This benchmark does not exist, but Roger Hockney will develop a trial
version for the Intel iPSC, followed by PARMACS and PVM. Perhaps suitable 
and better benchmarks exist elsewhere. Please send in your suggestions.


\subsection{Communication Bottleneck: POLY3}

POLY3 assesses the severity of the communication bottleneck.
It is the same as the POLY1 benchmark except that the data for the
polynomial evaluation is stored on a neighbouring processor. The value of
\fhalf obtained therefore measures the ratio of arithmetic to communication
performance. Equation ~\ref{Eqn3} shows that the computational intensity 
of the calculation must be significantly greater than \fhalf (say 4 times
greater) if communication is not to be a bottleneck. In this case
the computational intensity is the ratio of arithmetic performed on a
processor to words transferred to/from it over communication links.
In the common case that the amount of arithmetic is proportional to
the volume of a region, and the data communicated is proportional to
the surface of the region, the computational intensity is increased as
the size of the region (or granularity of the decomposition) is increased. 
Then the \fhalf obtained from this benchmark is directly related 
to the granularity that is required to make communication time unimportant.

The POLY3 benchmark does not exist, although native versions have been
used on transputer systems ~\cite{Hoc91}. A trial benchmark will be 
prepared for Intel iPSC computers by Roger Hockney, followed by PARMACS 
and PVM versions.


\subsection{Synchronisation Benchmarks: SYNCH1}

SYNCH1 measures the time to execute a barrier synchronisation statement
as a function of the number of processes taking part in the barrier.
The practicability of massively parallel computation with thousands
or tens of thousands of processors depends on this barrier time not
increasing too fast with the number of processors. The results are quoted
both as a barrier time, and as the number of barrier statements
executed per second (barr/s).
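
The measurement can be sketched as follows in Python, with threads standing in for processes and \verb|threading.Barrier| as the barrier statement (an illustration of the method, not the benchmark itself):

```python
import threading
import time

def synch1_rate(nproc=4, repeats=500):
    """SYNCH1-style sketch: time `repeats` barrier synchronisations
    across nproc parties and report both the per-barrier time and
    the rate in barr/s. The calling thread is one of the parties."""
    bar = threading.Barrier(nproc)

    def worker():
        for _ in range(repeats):
            bar.wait()

    helpers = [threading.Thread(target=worker) for _ in range(nproc - 1)]
    for h in helpers:
        h.start()
    t0 = time.perf_counter()
    for _ in range(repeats):
        bar.wait()               # helpers block here until all arrive
    elapsed = time.perf_counter() - t0
    for h in helpers:
        h.join()
    return elapsed / repeats, repeats / elapsed
```

Repeating the measurement for increasing \verb|nproc| gives the scaling of barrier time with the number of processes.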

The SYNCH1 benchmark exists as part of Genesis v2.0 and v2.1.1 ~\cite{Hey91}.


\section{Summary of Benchmark Status}

The following Table-\ref{Table3} summarises the current state of the 
proposed low-level benchmarks, and the properties they are intended
to measure.

\begin{table}
\centering
{\small
\parbox{5in}{
\caption{\label{Table3}
          Status of proposed Low-Level benchmarks. Note we abbreviate
          performance (perf.), arithmetic (arith.), communication (comms.),
          operations (ops.).}}
\begin{tabular}{llccc}
\hline
Benchmark       & Measures             & Parameters & Exists &  Author     \\
                &                      &            &        &             \\
\hline
SINGLE-PROCESSOR                                                           \\
   TICK1        & Timer resolution     & tick interval & Genesis & Hockney \\
   TICK2        & Timer value          & wall-clock check& Genesis & Hockney \\
   LINPACKD100  & typical arith. perf. & job Mflop/s  & Linpack & Dongarra  \\
   LIVERMORE    & Arith. perf. range   & loop Mflop/s & Livermore & McMahon \\
   RINF1        & Basic Arith. ops.    & \rnhalf    & Genesis & Hockney \\
   POLY1        & Cache-bottleneck     & \rfhalf    & EuroBen & Hockney \\
   POLY2        & Memory-bottleneck    & \rfhalf    & Hockney & Hockney \\
\hline
MULTI-PROCESSOR                                                         \\
   COMMS1       & Basic Message perf.  & \rnhalf    & Genesis & Hockney \\
   COMMS2       & Message exch. perf.  & \rnhalf    & Genesis & Hockney \\
   COMMS3       & Saturation Bandwidth & \rnhalf    & No      & Hockney \\
   POLY3        & Comms. Bottleneck    & \rfhalf    & No      & Hockney \\
\hline

\end{tabular}
}
\end{table}

% ----------------------------------------------------------------------------

From owner-parkbench-lowlevel@CS.UTK.EDU Wed Aug 17 12:41:13 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.8t-netlib)
	id MAA18664; Wed, 17 Aug 1994 12:41:13 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA27760; Wed, 17 Aug 1994 12:40:56 -0400
X-Resent-To: parkbench-lowlevel@CS.UTK.EDU ; Wed, 17 Aug 1994 12:40:55 EDT
Errors-to: owner-parkbench-lowlevel@CS.UTK.EDU
Received: from ursa.cis.umassd.edu by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA27746; Wed, 17 Aug 1994 12:40:49 -0400
Received: by ursa.cis.umassd.edu id AA19853
  (5.65c/IDA-1.4.4 for parkbench-lowlevel@cs.utk.edu); Wed, 17 Aug 1994 12:40:43 -0400
From: Ardsher Ahmed <ahmed@cis.umassd.edu>
Message-Id: <199408171640.AA19853@ursa.cis.umassd.edu>
Subject: send lowlevel.archive from parkbench
To: parkbench-lowlevel@CS.UTK.EDU
Date: Wed, 17 Aug 94 12:40:42 EDT
X-Mailer: ELM [version 2.3 PL11]

send lowlevel.archive from parkbench
From owner-parkbench-lowlevel@CS.UTK.EDU Fri Sep  8 16:37:08 1995
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA14454; Fri, 8 Sep 1995 16:37:08 -0400
Received: from localhost by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA04662; Fri, 8 Sep 1995 16:37:27 -0400
X-Resent-To: parkbench-lowlevel@CS.UTK.EDU ; Fri, 8 Sep 1995 16:37:25 EDT
Errors-to: owner-parkbench-lowlevel@CS.UTK.EDU
Received: from franklin.seas.gwu.edu by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA04655; Fri, 8 Sep 1995 16:37:24 -0400
Received: from felix.seas.gwu.edu (abdullah@felix.seas.gwu.edu [128.164.9.3]) by franklin.seas.gwu.edu (v8) with ESMTP id QAA10191 for <parkbench-lowlevel@cs.utk.edu>; Fri, 8 Sep 1995 16:37:22 -0400
Received: (from abdullah@localhost) by felix.seas.gwu.edu (8.6.12/8.6.12) id QAA07194 for parkbench-lowlevel@cs.utk.edu; Fri, 8 Sep 1995 16:37:18 -0400
Date: Fri, 8 Sep 1995 16:37:18 -0400
From: Abdullah Meajil <abdullah@seas.gwu.edu>
Message-Id: <199509082037.QAA07194@felix.seas.gwu.edu>
To: parkbench-lowlevel@CS.UTK.EDU
Subject: subscribe

subscribe

From owner-parkbench-comm@CS.UTK.EDU Fri May  2 15:53:02 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id PAA00358; Fri, 2 May 1997 15:53:02 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA13341; Fri, 2 May 1997 15:44:43 -0400
Received: from blueberry.cs.utk.edu (BLUEBERRY.CS.UTK.EDU [128.169.92.34]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id PAA13327; Fri, 2 May 1997 15:44:36 -0400
Received:  by blueberry.cs.utk.edu (cf v2.11c-UTK)
          id TAA08348; Fri, 2 May 1997 19:44:04 GMT
From: "Erich Strohmaier" <erich@CS.UTK.EDU>
Message-Id: <9705021544.ZM8346@blueberry.cs.utk.edu>
Date: Fri, 2 May 1997 15:44:03 -0400
X-Face: ,v?vp%=2zU8m.23T00H*9+qjCVLwK{V3T{?1^Bua(Ud:|%?@D!~^v^hoA@Z5/*TU[RFq_n'n"}z{qhQ^Q3'Mexsxg0XW>+CbEOca91voac=<YfvQ8HrQFkH>P/w]>n_nS]V_ZL>XRSYWi:{MzalK9Hb^=B}Y*[x*MOX7R=*V}PI.HG~2
X-Mailer: Z-Mail (3.2.0 26oct94 MediaMail)
To: parkbench-comm@CS.UTK.EDU
Subject: ParkBench Committee Meeting
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii

Dear Colleague,

Here is the revised agenda.
Please send me ASAP a short email if you come
so that we can arrange for a meeting room.

-------------------
The ParkBench (Parallel Benchmark Working Group)
will meet in Knoxville, Tennessee on
May 9th, 1997.

The meeting site will be the Knoxville Downtown Hilton Hotel.
We have made arrangements with the Hilton Hotel in Knoxville.

  Hilton Hotel
  501 W. Church Street
  Knoxville, TN
  Phone:  423-523-2300

When making reservations, tell the hotel you are associated with
'ParkBench'.  The rate is about $79.00/night.
You can download a postscript map of the area by looking at
http://www.netlib.org/utk/people/JackDongarra.html.

----------------
The tentative agenda for the meeting is:

  1. Minutes of last meeting (MBe)

     Changes to Current release:
  2. Low Level (ES, VG, RS)
     comms1, comms2, comms3, poly2
  3. Linear Algebra (ES)
  4. Compact Applications - NPBs (SS, ES)

     New benchmarks:
  5. HPF Low Level benchmarks (MBa)
  6. Java Low-Level Benchmarks (VG)
  7. New I/O benchmark benchmarks (MBa)
  8. New performance database design and new benchmark output format
     Update of GBIS with new Web front-end (MBa,TH)

     Report from other benchmark activities
  9. ASCI Benchmark Codes (AH)
 10. SPEC-HPG (RE, JD)

     ParkBench:
 11. ParkBench Bibliography
 12. ParkBench Report 2

     Other Activities:
 13. Discussion of the ParkBench Workshop 11/12 September, UK (TH, MBa)
 14. PEMCS - "Electronic Benchmarking Journal" - status report - (TH, MBa)
 15. Status of Funding proposals (JD, TH)

 16. Miscellaneous -

 17. Date and venue for next meeting -


  (MBa) Mark Baker          Univ. of Portsmouth
  (MBe) Michael Berry       Univ. of Tennessee
  (JD)  Jack Dongarra       Univ. of Tenn./ORNL
  (RE)  Rudi Eigenmann      SPEC
  (VG)  Vladimir Getov      Univ. of Westminster
  (TH)  Tony Hey            Univ. of Southampton
  (AH)  Adolfy Hoisie       LLNL
  (SS)  Subhash Saini       NASA Ames
  (RS)  Ron Sercely         HP/CXTC
  (ES)  Erich Strohmaier    Univ. of Tennessee


Jack Dongarra
Erich Strohmaier


From owner-parkbench-comm@CS.UTK.EDU Tue May  6 14:46:45 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id OAA04480; Tue, 6 May 1997 14:46:45 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id OAA25737; Tue, 6 May 1997 14:34:05 -0400
Received: from punt-2.mail.demon.net (relay-11.mail.demon.net [194.217.242.137]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id OAA25715; Tue, 6 May 1997 14:33:58 -0400
Received: from minnow.demon.co.uk ([158.152.73.63]) by punt-2.mail.demon.net
           id aa1000641; 6 May 97 19:07 BST
Message-ID: <UOrwADAXM3bzEwfw@minnow.demon.co.uk>
Date: Tue, 6 May 1997 19:06:15 +0100
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Parkbench Meeting Documents
In-Reply-To: <9705021544.ZM8346@blueberry.cs.utk.edu>
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.01 <kRL7V2isFfDmnKSZb08I5Tyfx$>

AGENDA ITEM:

>     Changes to Current release:
>  2. Low Level (VG)
>     comms1, comms2,

Two documents will be submitted to the committee on this item by Roger
Hockney and Vladimir Getov (Westminster University, UK). They can be
downloaded as postscript files from:

"New COMMS1 Benchmark: Results and Recommendations"
http://www.minow.demon.co.uk/Pbench/comms1/PBPAPER2.PS
 
"New COMMS1 Benchmark: The Details"
http://www.minow.demon.co.uk/Pbench/comms1/PBPAPER3.PS

The papers will be presented by Vladimir who will bring some paper
copies with him.

Best wishes
Roger and Vladimir
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Tue May  6 17:54:47 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id RAA07526; Tue, 6 May 1997 17:54:46 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA17012; Tue, 6 May 1997 17:48:50 -0400
Received: from punt-1.mail.demon.net (relay-7.mail.demon.net [194.217.242.9]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA17003; Tue, 6 May 1997 17:48:47 -0400
Received: from minnow.demon.co.uk ([158.152.73.63]) by punt-1.mail.demon.net
           id aa0623986; 6 May 97 21:37 BST
Message-ID: <IQsX3CAKQ5bzEw9M@minnow.demon.co.uk>
Date: Tue, 6 May 1997 21:26:50 +0100
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Parkbench Meeting Documents (Correction)
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.01 <kRL7V2isFfDmnKSZb08I5Tyfx$>

I am resending this because there was a typo in the URLs:
there are two n's in "minnow".

Also if you took PBPAPER2.PS before receiving this repeat message,
please take it again as I have corrected two errors in the graphs.

SORRY 
Roger
************************
AGENDA ITEM:

>     Changes to Current release:
>  2. Low Level (VG)
>     comms1, comms2,

Two documents will be submitted to the committee on this item by Roger
Hockney and Vladimir Getov (Westminster University, UK). They can be
downloaded as postscript files from:

CORRECTED URLs:

"New COMMS1 Benchmark: Results and Recommendations"
http://www.minnow.demon.co.uk/Pbench/comms1/PBPAPER2.PS
              
"New COMMS1 Benchmark: The Details"
http://www.minnow.demon.co.uk/Pbench/comms1/PBPAPER3.PS

The papers will be presented by Vladimir who will bring some paper
copies with him.

Best wishes
Roger and Vladimir
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Mon May 12 05:36:41 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id FAA24086; Mon, 12 May 1997 05:36:41 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA10068; Mon, 12 May 1997 05:18:21 -0400
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id FAA10051; Mon, 12 May 1997 05:18:18 -0400
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id FAA29262; Mon, 12 May 1997 05:18:16 -0400 (EDT)
Date: Mon, 12 May 1997 05:18:16 -0400 (EDT)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199705120918.FAA29262@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Gordon Conference on HPC and NII 
Forwarding: Mail from 'Tony Skjellum <tony@aurora.cs.msstate.edu>'
     dated: Sat, 10 May 1997 16:32:12 -0500 (CDT)
Cc: worley@haven.EPM.ORNL.GOV

Just in case you haven't received information on this already, here is a
blurb on the 1997 Gordon conference in high performance computing. 
Unlike previous years, there is not an explicit emphasis on performance
evaluation in this year's stated themes, but you can't (shouldn't) discuss
future architectures and their impacts without discussing how to
evaluate performance, and I am hoping that some benchmarking-minded people
will show up and keep the discussion honest.

---------- Begin Forwarded Message ----------

The deadline for applying to attend the 1997 Gordon conference in high
performance computing is June 1. If you are interested in attending,
please apply as soon as possible. The simplest way to apply is to download
the application form from the web site indicated below, or to use the online
registration option. If you have any problems with either of these,
please contact the organizers at tony@cs.msstate.edu and worleyph@ornl.gov.

-------------------------------------------------------------------------------
The 1997 Gordon Conference on High Performance Computing and
Information Infrastructure: "Practical Revolutions in HPC and NII"

Chair, Anthony Skjellum, Mississippi State University, tony@cs.msstate.edu,
       601-325-8435
Co-Chair, Pat Worley, Oak Ridge National Laboratory, worleyph@ornl.gov,
       615-574-3128

Conference web page: http://www.erc.msstate.edu/conferences/gordon97

July 13-17, 1997
Plymouth State College
Plymouth NH

The now biennial Gordon conference series in HPC and NII commenced in 1992
and held its second meeting in 1995.  The Gordon conferences are an
elite series of conferences designed to advance the state-of-the-art in
covered disciplines. Speakers are assured of anonymity and
referencing presentations done at Gordon conferences is prohibited by
conference rules in order to promote science, rather than publication
lists.  Previous meetings have had good international participation,
and this is always encouraged. Experts, novices, and technically
interested parties from other fields interested in HPC and NII are
encouraged to apply to attend.

All attendees, including speakers, poster presenters, and session chairs
must apply to attend. We *strongly* encourage all poster presenters to have
their poster proposals in by May 13, 1997, though we will consider poster
presentations up to six weeks prior to the conference.  Applications to
attend the conference are likewise due six weeks in advance.

More information on the conference can be found at the web page
listed above, including the list of speakers and poster presenters
and information on applying for attendance.


----------- End Forwarded Message -----------


From owner-parkbench-comm@CS.UTK.EDU Tue May 13 13:58:00 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA20879; Tue, 13 May 1997 13:57:59 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA11997; Tue, 13 May 1997 13:33:14 -0400
Received: from timbuk.cray.com (timbuk-fddi.cray.com [128.162.8.102]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id NAA11983; Tue, 13 May 1997 13:33:10 -0400
Received: from ironwood.cray.com (root@ironwood-fddi.cray.com [128.162.21.36]) by timbuk.cray.com (8.8.5/CRI-gate-news-1.3) with ESMTP id MAA20939 for <parkbench-comm@CS.UTK.EDU>; Tue, 13 May 1997 12:33:07 -0500 (CDT)
Received: from magnet.cray.com (magnet [128.162.173.162]) by ironwood.cray.com (8.8.4/CRI-ironwood-news-1.0) with ESMTP id MAA16428 for <parkbench-comm@CS.UTK.EDU>; Tue, 13 May 1997 12:33:06 -0500 (CDT)
From: Charles Grassl <cmg@cray.com>
Received: by magnet.cray.com (8.8.0/btd-b3)
          id RAA20181; Tue, 13 May 1997 17:33:04 GMT
Message-Id: <199705131733.RAA20181@magnet.cray.com>
Subject: Parkbench directions
To: parkbench-comm@CS.UTK.EDU
Date: Tue, 13 May 1997 12:33:04 -0500 (CDT)
X-Mailer: ELM [version 2.4 PL24-CRI-d]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit


To:   ParkBench Group
From: Charles Grassl

Date: May 13, 1997

(Long)

I appreciated the meeting this past week and wish to thank Erich and Jack
for hosting it.  I am aware of the great effort that many individuals
have contributed to developing and implementing the ParkBench suite.
In spite of this, I feel that we need to evaluate and correct our course.

ParkBench should not merge with or use benchmarks from SPEC/HPG
(the SPEC High Performance Group).  SGI/Cray and IBM have already
withdrawn from the SPEC/HPG group and Fujitsu and NEC are no longer
participating.  The reasons for these companies and other institutions
no longer participating should indicate to us (ParkBench) that
something is amiss with the SPEC/HPG benchmarks and paradigm.

Several of the reasons for the supercomputer manufacturers not
supporting the SPEC/HPG effort are listed below.  I list these reasons
so that the ParkBench group can learn from them and avoid the same
problems.

- Relevance.  The particular benchmark programs being used by SPEC/HPG
  are not relevant or appropriate for supercomputing.  The programs in
  the current SPEC/HPG suite do not represent any leading edge software
  which is more typical of usage for high performance systems.

- Redundancy.  The programs being developed by SPEC/HPG
  are not qualitatively or quantitatively different from the SPEC/OSG
  programs and, as such, the effort is viewed as redundant and expensive.

- Methodology.  The methodology being used by SPEC/HPG to
  procure, develop and run benchmarks lacks scientific and technical
  basis and hence results have a vague and arbitrary interpretation.

- Programming model.  Designing benchmarks for portability across
  systems is a convenient idea but does not reflect actual constraints
  or usage.  More often than not, compatibility with a PREVIOUS model
  of computer is more important than compatibility ACROSS computers.

- Expense.  Some of the large data cases for the SPEC/HPG programs
  will require hours or days to run, with little new data or
  information gained by the exercise.  These exercises are extremely
  expensive in time, in capital equipment, and in logistics.

- Ergonomics.  The cumbersome design of SPEC/HPG Makefiles and build
  procedures make the programs difficult and expensive to test,
  maintain and analyze.

We in the ParkBench group must acknowledge the above items if we are to
maintain interest and participation from computer vendors.  I believe
that reorganizing and refocusing the group could revitalize high
performance computer benchmarking and re-invigorate the ParkBench
group.

As the ParkBench suite now stands, there are too many programs and they
are difficult to build, test and maintain.  This situation impedes
usage and participation.  Here are a few suggestions for our future
practices and directions:

- Design and write benchmark programs.  Don't borrow or solicit old
  code.  Borrowed or solicited code is never quite appropriate and is
  usually obsolete.  Our greatest asset is that we have scientists who
  are capable of designing experiments (benchmarks).  (Build value.)

- Monitor and evaluate accuracy.  Though we mention accuracy in
  ParkBench Report 1, we haven't applied it to the current programs.
  (Scientifically validate, or invalidate, our experiments.)

- Make it simple.  Write and develop simple programs which do not need
  elaborate build procedures and which are easier to test and to maintain.
  (Keep It Simple, Stupid.)

- Build a better user interface.  The belabored "run rules" and the
  interface with layers of Makefiles, includes and embedded relative
  file paths are unacceptable.  An acceptable interface might require
  binary distribution and hence a desirable emphasis on designing and
  running rather than building and porting the benchmarks.  (Make the
  product more attractive to more users.)

- Make the suite truly modular.  The current structure makes the
  simplest one CPU program as difficult to build and run as the most
  complicated program with Makefile includes, special compilers, source
  file includes, special libraries, suite libraries, etc. (Make it
  manageable.)

- Drop the connection with SPEC/HPG and with NPB.  This "grand
  unifying" scheme makes for redundant code.  It has had the opposite
  effect of focusing benchmarking attention on ParkBench, because
  ParkBench becomes yet another collection of benchmarks used by other
  organizations.  (Be distinguishable and identifiable.)

- Emphasize what ParkBench is associated with:  benchmarking distributed
  memory parallel computers.  We should write and develop benchmark
  programs which measure and instrument the parallel processing aspect
  of MPP systems.  (Keep our focus.)


I volunteer to develop and write a suite of message passing test
programs which measure the performance and variance of message passing
communication schemes.  I have much experience with writing such
programs and believe that such a suite would be useful for others and
for the computer industry in general.

I hesitate to contribute such programs to the present structure for
several reasons:

- The network test suite does not logically fit into the current
  "hierarchy" and hence might further clutter the ParkBench suite and
  make it even less focused.

- The current ParkBench structure is not manageable.  Testing and
  maintenance would be extremely expensive in the current structure.

- My company's effort may be interpreted as an endorsement of the
  current structure and model.  The suite is not popular with vendors
  for reasons outlined above.  Participation is currently discouraged.


Discussion?  


Regards,
Charles Grassl
SGI/Cray
Eagan, Minnesota  USA

From owner-parkbench-comm@CS.UTK.EDU Wed May 21 17:25:15 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id RAA27513; Wed, 21 May 1997 17:25:15 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA07579; Wed, 21 May 1997 17:18:07 -0400
Received: from rastaman.rmt.utk.edu (root@TCHM11A6.RMT.UTK.EDU [128.169.27.188]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id RAA07571; Wed, 21 May 1997 17:18:02 -0400
Received: from rastaman.rmt.utk.edu (localhost [127.0.0.1]) by rastaman.rmt.utk.edu (8.7.6/8.7.3) with SMTP id RAA01108; Wed, 21 May 1997 17:24:43 -0400
Sender: mucci@CS.UTK.EDU
Message-ID: <3383681A.D98C5FB@cs.utk.edu>
Date: Wed, 21 May 1997 17:24:42 -0400
From: "Philip J. Mucci" <mucci@CS.UTK.EDU>
Organization: University of Tennessee, Knoxville
X-Mailer: Mozilla 3.01 (X11; I; Linux 2.0.28 i586)
MIME-Version: 1.0
To: parkbench-comm@CS.UTK.EDU
CC: "PVM Developer's Mailing List" <pvmspankers@msr.epm.ornl.gov>
Subject: Mesg Passing Benchmarks
References: <199705131733.RAA20181@magnet.cray.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi all,

In his last message to this committee, Charles Grassl volunteered
to write a suite of message passing benchmarks to replace the Low
Levels.  Before any action on his part or this committee's, I would
recommend that you all have a look at version 3 of my pvmbench
package.  It now does MPI as well and can easily support other
message passing primitives with a few #defines.

Version 3 along with some sample results can be found at
http://www.cs.utk.edu/~mucci/pvmbench.

Note that this has not been tested on any MPP's with UTK PVM.

This benchmark will generate and graph the following:

bandwidth
gap time (to buffer an outgoing message)
roundtrip (latency /2)
barrier/sec
broadcast
summation reduction

Other tests can easily be added.  I would highly recommend that this
code be examined before any action is taken.  It is less than a year
old; version 3, available on that page, is in beta, i.e. it has not
been released to the general public.  Let me know what you think...
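
The "roundtrip (latency /2)" item in the list above can be sketched as follows. This is a hedged stand-in, not pvmbench code: it uses a local socket pair in place of a PVM or MPI channel, and the message size and repetition count are arbitrary choices.

```python
# Sketch of a ping-pong latency measurement: send a small message back
# and forth reps times and report the average one-way time, i.e.
# roundtrip / 2.  A socketpair stands in for the real transport.
import socket
import threading
import time

def half_roundtrip(nbytes=8, reps=500):
    """Return the average one-way (roundtrip / 2) time in seconds."""
    a, b = socket.socketpair()
    msg = b"x" * nbytes

    def echoer():
        # Echo every message straight back to the sender.
        for _ in range(reps):
            data = b.recv(nbytes)
            b.sendall(data)

    t = threading.Thread(target=echoer)
    t.start()
    t0 = time.perf_counter()
    for _ in range(reps):
        a.sendall(msg)
        a.recv(nbytes)
    elapsed = time.perf_counter() - t0
    t.join()
    a.close()
    b.close()
    return elapsed / (2 * reps)

print(f"{half_roundtrip():.2e} s one-way")
```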

-Phil

-- 
/%*\ Philip J. Mucci | GRA in CS under Dr. JJ Dongarra /*%\
\*%/ http://www.cs.utk.edu/~mucci  PVM/Active Messages \%*/

From owner-parkbench-comm@CS.UTK.EDU Fri May 23 12:03:04 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA06549; Fri, 23 May 1997 12:03:03 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id LAA15901; Fri, 23 May 1997 11:05:32 -0400
Received: from berry.cs.utk.edu (BERRY.CS.UTK.EDU [128.169.94.70]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id LAA15895; Fri, 23 May 1997 11:05:30 -0400
Received: from cs.utk.edu by berry.cs.utk.edu with ESMTP (cf v2.11c-UTK)
          id LAA01370; Fri, 23 May 1997 11:05:31 -0400
Message-Id: <199705231505.LAA01370@berry.cs.utk.edu>
to: parkbench-comm@CS.UTK.EDU
Subject: Minutes of May ParkBench Meeting
Date: Fri, 23 May 1997 11:05:31 -0400
From: "Michael W. Berry" <berry@CS.UTK.EDU>

Here are the minutes from the recent ParkBench meeting in Knoxville.
Best regards,
Mike

-----------------------------------------------------------------
Minutes of ParkBench Meeting - Knoxville Hilton, May 9, 1997
-----------------------------------------------------------------

ParkBench Attendee List:

     (MBa) Mark Baker          Univ. of Portsmouth   mab@sis.port.ac.uk
     (MBe) Michael Berry       Univ. of Tennessee    berry@cs.utk.edu
           Shirley Browne      Univ. of Tennessee    browne@cs.utk.edu
     (JD)  Jack Dongarra       Univ. of Tenn./ORNL   dongarra@cs.utk.edu
           Jeff Durachta       Army Res. Lab MSRC    durachta@arl.mil
     (VG)  Vladimir Getov      Univ. of Westminster  getovv@wmin.ac.uk
     (CG)  Charles Grassl      SGI/Cray              cmg@cray.com
     (TH)  Tony Hey            Univ. of Southampton  ajgh@ecs.soton.ac.uk
     (AH)  Adolfy Hoisie       Los Alamos Nat'l Lab  hoisie@lanl.gov
     (CK)  Charles Koelbel     Rice University       chk@cs.rice.edu
     (PM)  Phil Mucci          Univ. of Tennessee    mucci@cs.utk.edu
           Erik Riedel         GENIAS Software GmbH  erik@genias.de
     (SS)  Subhash Saini       NASA Ames             saini@nas.nasa.gov
     (RS)  Ron Sercely         HP-Convex             sercely@convex.hp.com
           Alan Stagg          CEWES                 stagga@wes.army.mil
     (ES)  Erich Strohmaier    Univ. of Tennessee    erich@cs.utk.edu
     (PW)  Pat Worley          Oak Ridge Nat'l Lab   worleyph@ornl.gov

SPEC-HPG Visitors:

           Don Dossa           DEC                   dossa@eng.pko.dec.com
     (RE)  Rudi Eigenmann      Purdue University     eigenman@ecn.purdue.edu
           Greg Gaertner       DEC                   ggg@zko.dec.com
           Jean Suplick        HP                    suplick@rsn.hp.com
           Joe Throp           Kuck & Associates     throp@kai.com

At 9:05am EST, TH opened the meeting and asked that all the attendees
introduce themselves.  After a brief overview of the proposed agenda,
MBe reviewed the minutes from the last ParkBench meeting in October
of '96.  The minutes were unanimously accepted and TH asked VG to
present the proposed changes to the low-level benchmarks (9:20am).

VG reviewed the original COMMS1 (ping-pong or simplex communication) and
the COMMS2 (duplex communication) low-level benchmarks.  He discussed
some of the problems with the previous versions.  These included the
omission of calculated bandwidth, large message length problems, and
large errors in the asymptotic fit.   In collaboration with RS and CG,
a number of improvements have been made to these benchmarks:

	1. Measured bandwidth is provided in output.
	2. Time for shortest message is provided.
	3. Maximum measured bandwidth and the corresponding message
	   length is now provided.
	4. The accuracy of the least-squares 2-parameter fit has been
	   improved (sum of squares of the "relative" and not absolute
	   error is now used).
	5. New 3-parameter variable-power fit for certain cases added.
	6. Can report parametric fits if the error is less than some
   	   user-specified tolerance.
	7. Introduce KDIAG parameter to invoke diagnostic outputs.
	8. Modifications to ESTCOM.f (as suggested by RS).
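
Improvement (4), the relative-error least-squares fit, can be sketched as follows. This is an illustrative reconstruction, not the benchmark's Fortran code: with relative weighting the model t(n) = t0 + n/r_inf is still linear in (t0, 1/r_inf), so the weighted 2x2 normal equations can be solved directly. The data below are synthetic.

```python
# Fit t(n) = t0 + n / r_inf by minimizing the sum of squared
# *relative* errors, i.e. weighted least squares with weights 1/t_i^2.

def fit_t0_rinf(lengths, times):
    """Solve the 2x2 weighted normal equations for t = t0 + n*s.
    Returns (t0, r_inf) where r_inf = 1/s."""
    w = [1.0 / t**2 for t in times]
    S = sum(w)
    Sn = sum(wi * n for wi, n in zip(w, lengths))
    Snn = sum(wi * n * n for wi, n in zip(w, lengths))
    St = sum(wi * t for wi, t in zip(w, times))
    Snt = sum(wi * n * t for wi, n, t in zip(w, lengths, times))
    det = S * Snn - Sn * Sn
    t0 = (Snn * St - Sn * Snt) / det
    slope = (S * Snt - Sn * St) / det
    return t0, 1.0 / slope

# Synthetic noise-free data generated from t0 = 50, r_inf = 10:
lengths = [8, 64, 512, 4096, 32768]
times = [50 + n / 10 for n in lengths]
t0, r_inf = fit_t0_rinf(lengths, times)
print(t0, r_inf)
```

On noisy data, the 1/t_i^2 weighting keeps the short-message points from being swamped by the long-message ones, which is the motivation for fitting relative rather than absolute error.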
    
CG pointed out that it may not always be possible to interpret zero-length
messages for these codes.  On the Cray machines, such messages force an
immediate return (i.e., no synchronization).  He proposed that zero-length
messages be disallowed in the COMMS benchmarks.  RS showed an actual
COMMS1 performance graph demonstrating the difficulty of data extrapolation
(if used to get latency for zero-length message-passing).  RS pointed out,
however, that zero-length messages are defined within MPI, and suggested
that a simple return (as in the case of Cray machines) is not standard.

VG displayed some of the observed COMMS1/2 performance obtained on the
Cray T3E.  The 3-parameter fit yielded a 7% relative error for messages
ranging from 8 to 1.E+7 bytes.  CG questioned how the breakpoints were
determined.  He indicated that the input parameters to the program required
previous knowledge of where breakpoints occur (although implementations
could change constantly).  TH suggested that the parametric fitting should
not be the default for these benchmarks, i.e., separate the analysis from
the actual benchmarking (this concept was seconded by CG).  RS suggested
that the fitting routines could be placed on the WWW/Internet and the
COMMS1/2 codes simply produce data.  CK, however, stressed that the codes
should maintain some minimal parametric fitting for clarity and
consistency of output interpretations.  

The minimal message length for the T3E results shown by VG was 8, while
the corresponding minimal message length for a Convex CXD set of
COMMS benchmarks was 1.  The lack of similar message ranges could
pose problems for comparisons.  JD strongly felt that users will return
to the notion of "latency" and want zero-length message overheads.  Users
may be primarily interested in start-up time for message-passing.  RS pointed
out that MPI does process zero-length messages.  JD suggested that
the minimal message length for the COMMS benchmarks be 8 bytes and RS proposed
that the minimal message-passing time and corresp. message length be
an output.  After more discussion, the following COMMS changes/outputs were 
unanimously agreed upon:

	1.  Maximum bandwidth with corresp. message size.
	2.  Minimum message-passing time with corresp. message size.
	3.  Time for minimum message length (could be 0, 1, 8, or 32 bytes
            but must be specified).
	4.  The software will be split into two programs: one to report
	    the spot measurements and the other for the analysis.
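
For concreteness, the agreed spot measurements (items 1 and 2) could be computed from a table of timing data roughly as follows; the sample numbers below are invented, not measured results.

```python
# Compute the agreed spot-measurement outputs from (length, time) pairs:
# the maximum bandwidth with its message size, and the minimum
# message-passing time with its message size.

def spot_measurements(samples):
    """samples: list of (length_bytes, time_s) pairs.
    Returns ((max bandwidth, its length), (min time, its length))."""
    bw, bw_len = max((n / t, n) for n, t in samples if n > 0)
    t_min, t_len = min((t, n) for n, t in samples)
    return (bw, bw_len), (t_min, t_len)

samples = [(8, 5.2e-5), (1024, 7.0e-5), (65536, 6.0e-4), (1048576, 9.0e-3)]
(bw, bw_len), (t_min, t_len) = spot_measurements(samples)
print(f"max bandwidth {bw:.3g} B/s at n={bw_len}")
print(f"min time {t_min:.3g} s at n={t_len}")
```

Splitting these spot measurements from the parametric fitting, as agreed in item 4, means the benchmark itself only produces data like the above.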


At 10:00 am, SPEC-HPG members joined the ParkBench meeting for a joint
session.  CK reviewed the DoD Modernization Program.  He indicated that
the program is based on 3 primary components:

	1. CHSSI (Commonly Highly Scalable Software Initiative)
	2. DREN (Defense Research & Engineering Network)
	3. Shared Resource Centers (4 Major Shared Resource Centers or
           MSRC's and 20 Distributed Centers or DC's)

Benchmarking is part of the mission of the MSRC's, especially for
system integration and the Programming Environment & Training (PET)
team.  CK mentioned that the resources available at the MSRC's include:

256-proc. Cray T3E, SGI Power Challenge (CEWES), 256 proc. IBM SP/2 and
SGI Origin 2000 at ASC, SGI 790 at NAVO, and a collection of {SGI Origin,
Cray Titan, J90} at the Army Research Lab.

The benchmarking needs of the DoD program can be categorized as either
contractual or training.  The contractual needs are specified as PL1
(evaluation of initial machines), PL2 (upgrade to gain 3 times the
performance of PL1), and PL3 (upgrade to gain 10 times the performance
of PL1).  CK mentioned that the MSRC's are planning for the PL2 phase
later this year with PL3 scheduled in approx. 3 years.
The training needs include: the evaluation of programming paradigms,
the evaluation of performance trade-offs, templates for designing new
codes, and benchmarks for training examples.

The contractual benchmarks comprise 30 benchmarks (22 programs) some
of which are export-controlled or proprietary (data may not be used
in the public domain in some cases).  The run rules specify the number
of iterations for each benchmark in the suite.  Each MSRC uses a different
number of iterations per benchmark.  Code modifications are allowed (parallel
directives and message-passing can be used but no assembler) and algorithm
substitutions are permitted provided the problem does not become specialized.
The only performance metric reported for these benchmarks is the elapsed
time for the entire suite.  Benchmarks can be upgraded to reflect current
workloads of the MSRCs but they must be compared head-to-head with 
previous systems.

Example codes in the DoD benchmark suite include: CTH (finite
volume shock simulation), X3D (explicit finite element code), OCEAN-O2 (an
ocean modeling code), NIKE3D (implicit nonlinear 3D FEM), and Aggregate
I/O benchmark.

Planned benchmarking activities for the DoD Modernization Program include:

	1. benchmarks for evaluating programming techniques (determine what
           works; develop decision trees)
	2. benchmarks for teaching (classes on "worked" examples; template
           modification)

This effort currently has 1 FTE and over 50 University personnel (in PET
program) involved (although they are not primarily responsible for
benchmarking work).

At 10:35am, TH asked AH from Los Alamos Nat'l Lab to overview their ASCI
benchmark suite.  He began by pointing out that these codes constitute the
"Los Alamos set of" ASCI Benchmarks.  Before presenting the list of codes,
AH noted that the philosophy of this activity was to achieve 
"experiment ahead" capability especially with immature computing platforms.
Los Alamos is also interested in developing performance models as well as
kernels.  The list of active/research codes and compact applications 
comprising this suite are:

Code		Language(s)	Parallelism 	Description           
    
*HEAT(RAGE)	f77, f90	MPI(f90)	Eulerian adaptive mesh
				MPIfSM(f77)	refinement based on
						Riemann solvers; coupled
						physics-CFD; particle &
						radiative transport

EULER		f90		MPI		Admissible fluid (for SIMD);
				SIMD(SP		unstructured mesh, explicit
				vector)		solution; high-speed fluids;
						SP=single processor

NEUT		f77		MPI,SM,		Monte-Carlo, particle
				SHMEM

SWEEP3D		f90		MPI, SHMEM	Inner/outer iteration (kernel)
                                                (compact application)

HYDRO(T)	f77		Serial          (compact application)

TBON		f77		MPI		Material science; quantum
						mechanics; polymer age    
						simulation

*TECOLOTE	C++		MPI		Mixed cell hydro. with regular
						structured grid

*TELURIDE	f90		MPI		Casting simulation; irregular
						structured grid; Krylov solution
						methods

*DANTE		HPF		MPI

* = export controlled

The codes and compact apps above vary in size from 2,000 to 35,000 lines.
AH noted that LANL could provide support for future ASCI-based ParkBench codes. 
The ASCI benchmark suite presented might in the future include tri-lab
(Livermore, Sandia, Los Alamos) contributions.  The ASCI application suite can
be set up with data sets leading to varying run-times.  AH mentioned that Los 
Alamos' ASCI benchmarking efforts are focused on high performance computing,
leading edge architectures, algorithms, and applications.  They are 
particularly concentrating on developing expertise in distributed shared-memory
performance evaluation and modeling.  AH expressed the hope that the efforts of
ParkBench will follow similar directions.

At 11:05am, SS reviewed some of the most recent NAS Parallel Benchmarks results.
He began with vendor-optimized CG Class B results using row and column 
distribution blocking.  Results for different numbers of processors of the T3D 
were reported along with results for the NEC SX-4, SGI Origin 2K, Convex SPP2K,
Fujitsu VPP700, and IBM P2SC.  He also showed results for FT Class B and BT 
Class B (all machines reported performed well on this benchmark).  For BT, it 
was pointed out that 4 of the machines (Cray T3E, DEC Alpha, IBM P2SC, and NEC 
SX-4) essentially are based on the same processor but achieve widely-varying
results.  SS also reported HPF Class A MG results on 16 processors of the IBM 
SP2.  The HPF version (APR-HPF/Portland Group compiled) was only 3 times slower
than the MPI-based (f77) implementation.  This is indeed a significant result 
given that two years ago the HPF version was as much as 10 times slower than 
the comparable MPI version.  An HPF version of the Class A FT benchmark on 64 
processors was shown to be faster than the MPI version (1.6 times faster) when
optimized libraries are used in both versions.  For the Class A SP benchmark
(on 64 processors of the SP/2), the APR- and PGI-compiled HPF versions were 
within a factor of 2 of the MPI versions.  Finally, the HPF Class A BT code on 
64 processors of the Cray T3D was within a factor of 0.5 of the MPI version.

At 11:35am, TH invited RE to overview current SPEC-HPG activities.  The SPEC-HPG
benchmarks define a suite of real-world high-performance computing applications
designed for comparisons across different platforms (serial and message-
passing).  RE pointed out the history of the SPEC-HPG effort as a merger between
the PERFECT and SPEC benchmarking activities.  The current SPEC-HPG suite is
comprised of 2 codes: SPECchem96 and SPECseis96.  The SPECchem96 code evolved
from the GAMESS code used in the pharmaceutical and chemical industries.  It
comprises 109,389 lines of f77 (21% comments), 865 subroutines and
functions.  The wave functions are written to disk.  The SPECseis96 code
is derived from the ARCO benchmark suite which consists of four phases: data
generation, stack data, time migration, and depth migration.  This code
decomposes the domain into n equal parts (for n processors) with each part
processed independently.  It has over 15K lines of code, made up of
230 Fortran subroutines and 199 C functions for I/O and system utilities.
SPECseis96 uses 32-bit precision, FFT's, Kirchhoff integrals, and finite
differences.

The very first set of SPEC-HPG benchmark results was approved on May 8,
1997 (preceding day).  New benchmarks being considered are PMD (Parallel
Molecular Dynamics) and MM5 (NCAR Weather Processing C code).  The decision
on whether or not to accept these two potential SPEC-HPG codes will be made
in about 5 months.  The SPEC-HPG run rules permit the use of compiler
switches, source code changes, and optimized libraries (which have been
disclosed to customers).  Only approved algorithmic changes will be disclosed.
RE gave the URL for the SPEC-HPG effort: http://www.specbench.org/hpg.  He
also referred to a recent article by himself and S. Hassanzadeh in "IEEE
Computational Science & Engineering" and two email reflectors for SPEC-HPG
communication: comments@specbench.org and info@specbench.org.

JD then gave a brief history of ParkBench and SPEC-HPG interactions and
suggested that the two efforts might consider sharing results and software.
The biggest difference between the two efforts is in the availability of
software: ParkBench code is freely available, whereas SPEC-HPG software
has some restrictions.  A forum to publish both sets of results was discussed
and it was agreed that both efforts should at least share links on their
respective webpages.  RE pointed out that anyone can get the SPEC-HPG CD
of benchmarks without actually being a SPEC member.

JD stressed that the process of running codes (for any suite) needs to
be simplified so that building executables for different platforms is not
problematic.  Modifications for porting should be restricted to driver programs.
RS indicated that he has Perl scripts that run all the low-level benchmarks,
including COMMS3 for 2 to N procs, and produce a summary of the results. 

*** ACTION ITEM ***
JD, RE, AH, and CK will discuss a potential joint effort to simplify the
running of benchmark codes (contact RS also about his Perl scripts).

MBa noted that the SPEC-HPG members should be added to the ParkBench
email list (parkbench-comm@cs.utk.edu).  He also indicated that the European
benchmarking workshop scheduled for next Fall (Sept. 11-12) might coordinate
with the European SPEC group.

At 12:10pm, the attendees went to lunch (Soup Kitchen).

After lunch (1:30pm), TH asked ES and VG to coordinate changes to the
COMMS benchmarks discussed above (*** ACTION ITEM ***).  ES then discussed
modifications to poly2 for the ParkBench V2.2 suite.  The proposed changes
include
	1. enlarged arrays A(1000000), B(1000000)
	2. removal of arrays C and D
	3. avoid cache flush (use a sliding vector), i.e., 

             DO I=1,N               DO I=NMIN,NMAX
                         becomes       ...

                                       NMIN=NMIN+N+INC

           where INC=17 by default (avoids reuse of the old cache line).

PM then discussed a program for determining parameters for memory subsystems.
Characteristics of this software include the use of tight loops, independent
memory references, and maximized register use.  He showed graphs of memory
hierarchy bandwidth (reads and writes) depicting memory size (ranging from 4 KB
to 4 MB) versus MB/sec transfer rates.  Some curves illustrated the effective 
cache size quite well.  PM pointed out that dynamically-scheduled processors
pose a significant problem for this type of modeling.  The program can be
run with or without a calibration loop exploiting known memory transfer data.
CG suggested that it would be nice to have such a program to measure latency
at all levels of the hierarchy.  PM's webpages for this program are:

	http://www.cs.utk.edu/~mucci/cachebench and
	http://www.cs.utk.edu/~mucci/parkbench.

CK suggested that an uncalibrated version of PM's benchmark would be more
useful to users (more reflective of real codes).  JD pointed out that the
output of the program could be tabulated bandwidths, latencies, etc.  CG
felt this program would be a very useful tool.  PM noted that the calibration
will not be used by default.  TH suggested that the ParkBench effort might
want to develop a future "ParkBench Tool Set" which contains programs like
this one developed by PM.

With regard to the Linalg Kernels, ES noted that although many of the
routines have calls to Scalapack routines, Scalapack will not be included
in future software releases.  Users will have to get their own copies of
the source (or binaries) for Scalapack.  Removing Scalapack reduces the
size of these particular kernel benchmarks by one-third.

*** ACTION ITEM ***
ES will report the most recent Linalg benchmark performance results at the
next ParkBench meeting.

TH then asked for discussions on new benchmarks with MBa leading the
discussion on HPF benchmarks.  MBa indicated that a new mail reflector
(parkbench-hpf@cs.utk.edu) had been set up for this cause with himself
as moderator for low-level codes (CK will moderate kernels and SS will
moderate discussions on HPF compact applications).  MBa noted that there
is limited manpower for the HPF benchmarking activities.  CK noted that
he had discussed this effort at the recent HPFF meeting (and other
users meetings).  A draft document on the ParkBench HPF benchmarks is
available at http://www.sis.port.ac.uk/~mab/ParkBench.  MBa felt strongly
that without manpower support this particular activity will die and that
a lead site is needed.

*** ACTION ITEM ***
CK and SS will investigate interest in HPF compact application development.

JD indicated that wrappers are being used to create HPF versions of the
Linalg kernels.  The procedure involves writing wrappers for the current
Scalapack driver programs.  Eventually, these programs may be completely
rewritten in HPF (this will start in the summer).  TH suggested that HPF
kernel benchmark performance be reported at the ParkBench meeting 
in September (at Southampton Performance Workshop).

MBa went on to report on the status of I/O benchmarks.  Basically, not
much progress has been made on the ParkBench I/O initiative.  A new I/O 
project between ECMWF, FECIT, and the Univ. of Southampton was launched
this past February.  They are looking at the I/O in the IFS code from
ECMWF (the European Centre for Medium-Range Weather Forecasts).  David
Snelling is the FECIT
leader who has also participated in ParkBench activities.  This I/O
project has 1 FTE at Southampton and 1.5 FTE at FECIT along with several
personnel at ECMWF.  One workshop and two technical meetings are planned
for the 1-year project.  The goals are: to develop instrumented I/O
benchmarks and build on top of MPI-IO (test, characterize parallel
systems).  Their methodology is very similar to that of ParkBench.
Codes in f90 and ANSI C are being considered (stubs for VAMPIR and
PABLO).  Regular reports to Fujitsu (sponsor of activity) are planned
and a full I/O test suite is planned by February 1998.

MBa also reported on the status of the ParkBench graphical database.
Currently, the performance data is kept in a relational DBMS.  A
frontend Java applet has been written to query the DBMS on-the-fly.
A backend is also in development which will automate the extraction
of new performance data and insertion into the DBMS (via an http
server).  By September, a more complete prototype which will allow
MS access and JDBC between 2 different machines should be ready.

VG then discussed the development of Java-based low-level benchmarks.
He presented a Java-to-C Interface Generator which would allow Java
benchmarks to call existing C libraries on remote machines.  He
presented sample Java+C NAS PB results on a 16-processor IBM SP/2
(Class A IS Benchmark):

           Version        1 Proc  2 Procs  4 Procs  8 Procs  16 Procs
           NASA (C)        29.1    17.4     9.4     5.2        2.8
                 C         40.5    24.9     13.1    9.3       15.6
              Java         ----   132.5     64.7   37.9       33.5

At 2:50pm, TH reported other ParkBench activities including the
new PEMCS (Performance Evaluation and Modeling for Computer Systems)
electronic journal.  Suggested articles/authors include:

       *1. ParkBench Report No. 2 (ES, MBe)
       *2. NAS PB
	3. SPEC-HPG
       *4. Top 500
	5. AutoBench (M. Ginsburg)
       *6. Euroben (van der Steen)
	7. RAPS
	8. Europort
       *9. Cache benchmarks
       10. ASCI benchmarks (DoD)
      *11. PERFORM
       12. R. Hockney
      *13. PEPS
       14. C3I/Rome Labs

Those articles possible for Summer '97 are marked with *.  JD suggested
that articles be available in Encapsulated PostScript, PDF (Adobe),
and HTML.  TH noted that EU funding will provide a host computer and
some administration.  Possible publishers are Oxford Univ. Press and
Elsevier.

At 3:10pm, ES requested more items for the ParkBench bibliography
which will be available on the WWW.  PW suggested that authors should
be able to submit links to ParkBench-related applications.  JD then
briefly discussed WebBench which is a website focused on benchmarking
and performance evaluation.  Data is presented on platforms, applications,
organizations, vendors, conferences, papers, newsgroups, FAQ's, and
repositories (PDS, Top500, Linpack, etc.).  The WebBench URL is
http://www.netlib.org/benchweb.

MBa reminded attendees of the Fall Performance Workshop/ParkBench
meeting on (Thursday and Friday) Sept. 11 and 12.  This meeting
will be held at Venue, County Hotel, Southampton, UK.  Invited
and contributed talks will be presented.

With regard to ParkBench funding, JD indicated that the UT/ORNL/NASA
Ames proposal was not selected for funding but that it could be re-
submitted next year.  Expected funding from Rome lab was not received.
TH and VG did not succeed this past year either although some funding
from Fujitsu is possible.

TH adjourned the meeting at 3:25pm EST.

From owner-parkbench-comm@CS.UTK.EDU Tue May 27 10:32:45 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA25239; Tue, 27 May 1997 10:32:45 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA05022; Tue, 27 May 1997 10:12:02 -0400
Received: from exu.inf.puc-rio.br (exu.inf.puc-rio.br [139.82.16.3]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA05013; Tue, 27 May 1997 10:11:53 -0400
Received: from obaluae (obaluae.inf.puc-rio.br) by exu.inf.puc-rio.br (4.1/SMI-4.1)
	id AA20170; Tue, 27 May 97 11:11:00 EST
From: maira@inf.puc-rio.br (Maira Tres Medina)
Received: by obaluae (SMI-8.6/client-1.3)
	id LAA16226; Tue, 27 May 1997 11:10:58 -0300
Date: Tue, 27 May 1997 11:10:58 -0300
Message-Id: <199705271410.LAA16226@obaluae>
To: parkbench-comments@CS.UTK.EDU
Subject: Benchmarks
Cc: parkbench-comm@CS.UTK.EDU, maira@CS.UTK.EDU, victal@CS.UTK.EDU
X-Sun-Charset: US-ASCII

Hello 

I'm a graduate student at the Computer Science Department of PUC-Rio
(Catholic University of Rio de Janeiro).  I'm currently studying
Low_Level benchmarks for measuring basic computer characteristics.

I have had some problems trying to run some of the benchmarks.
For example, the benchmark comms1 for PVM prints the following error
messages and stops.
 
    n05.sp1.lncc.br:/u/renata/maira/ParkBench/bin/RS6K>comms1_pvm
      Number of nodes =          2
      Front End System (1=yes, 0=no) =          0
      Spawning done by process (1=yes, 0=no) =          1
      Spawned           0  processes OK...
      libpvm [t4000c]: pvm_mcast(): Bad parameter
      TIDs sent...benchmark progressing...
 
 
   n05.sp1.lncc.br:/u/renata/maira/ParkBench> bin/RS6K/comms1_pvm 
     1525-006 The OPEN request cannot be processed because STATUS=OLD was coded 
     in the OPEN statement but the file comms1.dat does not exist. The program 
     will continue if ERR= or IOSTAT= has been coded in the OPEN statement.
     1525-099 Program is stopping because errors have occurred in an I/O request 
     and ERR= or IOSTAT= was not coded in the I/O statement.
 
 
I would like to know how I can execute the benchmarks only for  PVM.
Can you help me?
 
I have not had problems with the sequential benchmarks (tick1, tick2, ...).
 
Thank you very much for your attention.
 
Maira Tres Medina
Phd. Student
Pontifical Catholic University
Rio de Janeiro, Brazil
 

From owner-parkbench-comm@CS.UTK.EDU Wed May 28 16:36:07 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA15377; Wed, 28 May 1997 16:36:06 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA16158; Wed, 28 May 1997 16:26:41 -0400
Received: from rastaman.rmt.utk.edu (root@TCHM03A16.RMT.UTK.EDU [128.169.27.60]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA16150; Wed, 28 May 1997 16:26:37 -0400
Received: from rastaman.rmt.utk.edu (localhost [127.0.0.1]) by rastaman.rmt.utk.edu (8.7.6/8.7.3) with SMTP id QAA00226; Wed, 28 May 1997 16:33:33 -0400
Sender: mucci@CS.UTK.EDU
Message-ID: <338C968B.124F15AA@cs.utk.edu>
Date: Wed, 28 May 1997 16:33:33 -0400
From: "Philip J. Mucci" <mucci@CS.UTK.EDU>
Organization: University of Tennessee, Knoxville
X-Mailer: Mozilla 3.01 (X11; I; Linux 2.0.28 i586)
MIME-Version: 1.0
To: Maira Tres Medina <maira@inf.puc-rio.br>
CC: parkbench-comments@CS.UTK.EDU, parkbench-comm@CS.UTK.EDU
Subject: Re: Benchmarks
References: <199705271410.LAA16226@obaluae>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi,

You need to make sure the dat files are in the executable directory.
They should be installed in $PVM_ROOT/bin/$PVM_ARCH.

-Phil

-- 
/%*\ Philip J. Mucci | GRA in CS under Dr. JJ Dongarra /*%\
\*%/ http://www.cs.utk.edu/~mucci  PVM/Active Messages \%*/

From owner-parkbench-comm@CS.UTK.EDU Thu Jun  5 11:30:41 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id LAA11302; Thu, 5 Jun 1997 11:30:41 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA14227; Thu, 5 Jun 1997 10:53:09 -0400
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id KAA14220; Thu, 5 Jun 1997 10:53:07 -0400
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id KAA06499; Thu, 5 Jun 1997 10:53:06 -0400 (EDT)
Date: Thu, 5 Jun 1997 10:53:06 -0400 (EDT)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199706051453.KAA06499@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Gordon conference deadline extended
Forwarding: Mail from 'Pat Worley <worley>'
     dated: Thu, 5 Jun 1997 10:48:07 -0400 (EDT)
Cc: worley@haven.EPM.ORNL.GOV, tony@cs.msstate.edu

(Our apologies if you receive this multiple times.)

There is still room for additional attendees at the Gordon Conference on High
Performance Computing, and the Gordon Research Conference administration has
agreed to extend the application deadline. As a practical matter,
applications need to be submitted no later than JULY 1. We will also stop
accepting applications before that date if the maximum meeting size is
reached, so please apply as soon as possible if you are interested in
attending.   

The simplest way to apply is to download the application form from the web
site 

http://www.erc.msstate.edu/conferences/gordon97

or to use the online registration option available at the same site.
If you have any problems with either of these, please contact the organizers
at tony@cs.msstate.edu and worleyph@ornl.gov. 

Complete information on the meeting is available from the Web site or its
links, but a short summary of the meeting follows:

--------------------------------------------------------------------------

The 1997 Gordon Conference on High Performance Computing and
Information Infrastructure: "Practical Revolutions in HPC and NII"

Chair, Anthony Skjellum, Mississippi State University, tony@cs.msstate.edu,
       601-325-8435
Co-Chair, Pat Worley, Oak Ridge National Laboratory, worleyph@ornl.gov,
       615-574-3128

Conference web page: http://www.erc.msstate.edu/conferences/gordon97

July 13-17, 1997
Plymouth State College
Plymouth NH

The now biennial Gordon conference series in HPC and NII commenced in 1992
and had its second meeting in 1995.  The Gordon conferences are an
elite series of conferences designed to advance the state-of-the-art in
covered disciplines. Speakers are assured of anonymity and
referencing presentations done at Gordon conferences is prohibited by
conference rules in order to promote science, rather than publication
lists.  Previous meetings have had good international participation,
and this is always encouraged. Experts, novices, and technically
interested parties from other fields interested in HPC and NII are
encouraged to apply to attend.

The conference consists of technical sessions in the morning and evening,
with afternoons free for discussion and recreation. Each session consists of
2 or 3 one hour talks, with ample time for questions and discussion. All
speakers are invited and there are no parallel sessions. All attendees are
both encouraged and expected to actively participate, via discussions during
the technical sessions or via poster presentations. 

All attendees, including speakers, poster presenters, and session chairs,
must apply to attend. Poster presenters should indicate their poster
proposals on their applications. While all posters must be approved,
successful applicants should assume that their posters have been accepted
unless they hear otherwise. 

Meeting Themes:
  Networks: Emerging capabilities and the practical implications
          : New types of networking  
  Real-Time Issues
  Multilevel Multicomputers
  Processors-in-Memory and Other Fine Grain Computational Architectures
  Impact of Evolving Hardware on Applications
  Impact of Software Abstractions on Performance

Confirmed Speakers:
  Ashok K. Agrawala		University of Maryland
  Kirstie Bellman		DARPA/SISTO
  James C. Browne		University of Texas at Austin
  Andrew Chien			University of Illinois, Urbana-Champaign
  Thomas H. Cormen		Dartmouth College
  Jean-Dominique Decotignie	CSEM
  David Greenberg		Sandia National Laboratories
  William Gropp			Argonne National Laboratory
  Don Heller			Ames Laboratory
  Jeff Koller			Information Sciences Institute
  Peter Kogge			University of Notre Dame
  Chris Landauer		The Aerospace Corporation
  Olaf M. Lubeck		Los Alamos National Laboratory
  Andrew Lumsdaine		University of Notre Dame
  Lenore Mullins		SUNY, Albany
  Paul Plassmann		Argonne National Laboratory
  Lui Sha			Carnegie Mellon University
  Paul Woodward			University of Minnesota


From owner-parkbench-comm@CS.UTK.EDU Tue Jul  1 17:06:52 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id RAA20550; Tue, 1 Jul 1997 17:06:51 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA21503; Tue, 1 Jul 1997 17:03:35 -0400
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA21438; Tue, 1 Jul 1997 17:02:42 -0400
Received: from baker (baker.npac.syr.edu) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA10168; Tue, 1 Jul 97 22:00:22 BST
Date: Tue,  1 Jul 97 20:55:49    
From: Mark Baker <mab@sis.port.ac.uk>
Subject: Fall 97 Parkbench Workshop - Southampton, UK
To: ejz@ecs.soton.ac.uk, parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU,
        William Gropp <gropp@mcs.anl.gov>,
        Antoine Hyaric <Antoine.Hyaric@comlab.ox.ac.uk>, gent@genias.de,
        gcf@npac.syr.edu, geerd.hoffman@ecmwf.co.uk, reed@cs.uiuc.edu,
        david@cs.cf.ac.uk, clemens-august.thole@gmd.de, klaus.stueben@gmd.de,
        "J.C.T. Pool" <jpool@cacr.caltech.edu>,
        Paul Messina <messina@cacr.caltech.edu>, foster@mcs.anl.gov,
        idh@soton.ac.uk, rjc@soton.ac.uk, plg@pac.soton.ac.uk,
        Graham.Nudd@dcs.warwick.ac.uk
Cc: lec@ecs.soton.ac.uk, rjr@ecs.soton.ac.uk,
        "MATRAVERS Prof. D R STAF" <DRM12@sms.port.ac.uk>,
        wilsona@sis.port.ac.uk, grant <grant@afs.mcc.ac.uk>,
        hwyau@epcc.ed.ac.uk
X-Priority: 3 (Normal)
X-Mailer: Chameleon 5.0.1, TCP/IP for Windows, NetManage Inc.
Message-Id: <Chameleon.867790738.mab@baker>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

This is to let you know that the Department of Electronics and Computer
Science at the University of Southampton is organising a Fall 97 
Parkbench Workshop on the 11th and 12th of September 1997.
See http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/ for further 
details.

The workshop will include a number of talks from researchers working in
the field of performance evaluation and modelling of computer systems, a panel
discussion session and a Parkbench committee meeting.

The Workshop is free to attend - workshop delegates need only cover their
own travel and accommodation expenses.  Attendance is limited, so places at
the Workshop will be allocated on a first-come, first-served basis.

It is planned to turn the talks given at the Workshop into a series of 
short papers which will be put together and published as a Special Issue 
of the electronic journal Performance Evaluation and Modelling of 
Computer Systems (PEMCS).

For further information or registration details refer to the Web pages -
(http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/registration.html).

I would appreciate it if you would kindly pass this email on to colleagues who
may be interested in the event.

Regards

Mark


-------------------------------------
Dr Mark Baker
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 7/1/97 - Time: 8:55:49 PM
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Wed Jul 23 17:19:23 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id RAA04434; Wed, 23 Jul 1997 17:19:23 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA28191; Wed, 23 Jul 1997 17:10:39 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id RAA28171; Wed, 23 Jul 1997 17:10:24 -0400 (EDT)
Received: from baker (baker.npac.syr.edu) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA14190; Wed, 23 Jul 97 22:10:30 BST
Date: Wed, 23 Jul 97 22:01:41 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: PEMCS Web Site
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Face: "3@c]&iv:nfs&\mp6n<RAc6^[LSl&b"vx:G#zkJus3[uV=a.|~/c]T(LKr/FQ'iWPiMF'4x
 n2{)H=1~y.#7>N90ioxbQ-Eu:]}^MyviIL7YjwT,Cl)|TYpTQ})PP'&O=V`~)JQRWjM?H;'`q\"3mv
 "j@5vs)}!WC3pG9q:;rpe0\LoLQfY"1?1A.\(f=E*&QAW8WK+)*)T0[Bv=[{.-f7<6Ddv!2XaWhH
X-Priority: 3 (Normal)
Message-Id: <Chameleon.869692062.mab@baker>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

The Web site that will host the Journal of "Performance 
Evaluation and Modelling of Computer Systems (PEMCS)" can
be found at:

http://hpc-journals.ecs.soton.ac.uk/PEMCS/

The pages I have put up are at the present still in a 
"draft/under-construction" state.

I would appreciate any comments or feedback about the
pages.

Regards

Mark



-------------------------------------
Dr Mark Baker
DIS, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 07/23/97 - Time: 22:01:41
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Thu Jul 24 08:26:42 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA12708; Thu, 24 Jul 1997 08:26:42 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA04617; Thu, 24 Jul 1997 08:21:55 -0400 (EDT)
Received: from berry.cs.utk.edu (BERRY.CS.UTK.EDU [128.169.94.70]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA04599; Thu, 24 Jul 1997 08:21:23 -0400 (EDT)
Received: from cs.utk.edu by berry.cs.utk.edu with ESMTP (cf v2.11c-UTK)
          id IAA13817; Thu, 24 Jul 1997 08:21:24 -0400
Message-Id: <199707241221.IAA13817@berry.cs.utk.edu>
To: Mark Baker <mab@sis.port.ac.uk>
cc: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU
Subject: Re: PEMCS Web Site 
In-reply-to: Your message of Wed, 23 Jul 1997 22:01:41 -0000.
             <Chameleon.869692062.mab@baker> 
Date: Thu, 24 Jul 1997 08:21:24 -0400
From: "Michael W. Berry" <berry@CS.UTK.EDU>



> Dear All,
> 
> The Web site that will host the Journal of "Performance 
> Evaluation and Modelling of Computer Systems (PEMCS)" can
> be found at:
> 
> http://hpc-journals.ecs.soton.ac.uk/PEMCS/
> 
> The pages I have put up are at the present still in a 
> "draft/under-construction" state.
> 
> I would appreciate any comments or feedback about the
> pages.
> 
> Regards
> 
> Mark
> 
> 
> 
> -------------------------------------
> Dr Mark Baker
> DIS, University of Portsmouth, Hants, UK
> Tel: +44 1705 844285	Fax: +44 1705 844006
> E-mail: mab@sis.port.ac.uk
> Date: 07/23/97 - Time: 22:01:41
> URL http://www.sis.port.ac.uk/~mab/
> -------------------------------------
> 

Mark,
the webpages are well organized.  You might reconsider the
red text on the green background of the menu frame.  It was
difficult to read on my machine at home.

Nice work!
Mike

-------------------------------------------------------------------
Michael W. Berry                     Ayres Hall 114
berry@cs.utk.edu                     Department of Computer Science          
OFF:(423) 974-3838                   University of Tennessee
FAX:(423) 974-4404                   Knoxville, TN  37996-1301
URL:http://www.cs.utk.edu/~berry/
-------------------------------------------------------------------

From owner-parkbench-comm@CS.UTK.EDU Fri Aug  1 12:59:29 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA05831; Fri, 1 Aug 1997 12:59:27 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA01387; Fri, 1 Aug 1997 12:38:00 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA01337; Fri, 1 Aug 1997 12:37:24 -0400 (EDT)
Received: from baker (baker.npac.syr.edu) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA15842; Fri, 1 Aug 97 17:36:11 BST
Date: Fri,  1 Aug 97 17:17:51 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Reminder - Fall Parkbench Workshop
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Face: ,<'y31|nlb,jCP5?km9\KD+>p9/e?:|$RRhY]e;#`awGHh=mrY.T??#]-*rt}l0*u`k2A7n
 KlqNG"u'-%cS@3|G[%=m%bSB[lfSn5n"gD4CU(j?1y?#SOkm!qw_=p%c#"6g&(+\Oy6T{4CEShal?z
 M)&Gd'Pb6Qc~>SPx{m[F55=]yY>cN>|/m5)T?q`OTjdQL=7-n%NT({;;$P*2[#7ZWL8baLoI_/F89,
 x'u`*$'<|ctKNYTSJuLV=!$QT3bN*>91V,a0Cc"_UsxwMKg\;#W2LZ$!`j?ZWp;byz~;y}2Dz6i7y%
 E&;gfnmI_~}+oifmWXJMHfWeezBL1("ZnFe!rnX[Q|,:IJ?iq+PePa/[3R4
X-Priority: 3 (Normal)
Message-Id: <Chameleon.870453138.mab@baker>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

This email is a reminder about the:

----------------------------------------------------------------------------------------------------

				Fall ParkBench Workshop

                           Thursday 11th/Friday 12th September 1997 

                               at the University of Southampton, UK


		See http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/


----------------------------------------------------------------------------------------------------

If you are interested in attending the Workshop, you should register now and 
reserve accommodation: hotel rooms in Southampton during the workshop period
will be in short supply because the "International Southampton Boat Show" 
will also be taking place.

At present we have a preliminary reservation on rooms at the County Hotel, where
the Workshop is being held. Without concrete delegate reservations we can only
hold these rooms for approximately another week.

Thereafter, accommodation at the hotel, or elsewhere in the city, may be much
harder to find and reserve. So, I encourage potential Workshop delegates to 
register ASAP.

Mark


-------------------------------------
Dr Mark Baker
University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 08/01/97 - Time: 17:17:52
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Aug 11 13:13:12 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA20171; Mon, 11 Aug 1997 13:13:11 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA06842; Mon, 11 Aug 1997 13:02:59 -0400 (EDT)
Received: from MIT.EDU (SOUTH-STATION-ANNEX.MIT.EDU [18.72.1.2]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA06808; Mon, 11 Aug 1997 13:02:42 -0400 (EDT)
Received: from MIT.MIT.EDU by MIT.EDU with SMTP
	id AA27349; Mon, 11 Aug 97 13:02:14 EDT
Received: from HOCKEY.MIT.EDU by MIT.MIT.EDU (5.61/4.7) id AA01161; Mon, 11 Aug 97 13:02:12 EDT
Message-Id: <9708111702.AA01161@MIT.MIT.EDU>
X-Sender: mmccarth@po9.mit.edu
X-Mailer: Windows Eudora Pro Version 2.1.2
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Mon, 11 Aug 1997 13:02:12 -0400
To: alison.wall@rl.ac.uk, weber@scripps.edu, schauser@cs.ucsb.edu,
        dewombl@sandia.gov, edgorha@sandia.gov, rdskocy@sandia.gov,
        sales@pgroup.com, utpds@CS.UTK.EDU, parkbench-comm@CS.UTK.EDU,
        pancake@cs.orst.edu, johnreed@ghost.CS.ORST.EDU, levesque@apri.com,
        davida@cit.gu.edu.au, gddt@gup.uni-linz.ac.at,
        atempt@gup.uni-linz.ac.at, rileyba@ornl.gov, bac@ccs.ornl.gov
From: "Michael F. McCarthy" <mmccarth@MIT.EDU>
Subject: For Sale: CM-5


   PLEASE FORWARD THIS NOTE TO ANYONE THAT YOU BELIEVE 
   MAY HAVE AN INTEREST IN PURCHASING THIS SYSTEM!
__________________________________________________________________________

Case #3971 -- FOR SALE - CM5 with 128 nodes and SDA --
__________________________________________________________________________

The MIT Lab for Computer Science offers for bid sale a Thinking Machines 
CM-5 Connection Machine (described below).  

Bids to purchase this system are requested from all interested parties
(with a minimum expected bid of $25,000).

All bids must be received at the MIT property office by 5:00 PM (EDT)
on Monday, 8/Sept/97.

The machine must be moved from MIT within 10 business days of acceptance
of the bid. All expenses and arrangements for moving will be made by 
the purchaser.

The system consists of:

1) 128-PN CM-5 w/ Vector Units, 256 network addresses - Part No. CM5-128V-32F
2) Scalable Disk Array with twenty-four (24) 1.2 GB drives - Part No. CM5-SA25F
3) Control Processor Interface - Part No. CM5-CPI
4) S-Bus to Diagnostics Network Interface - Part No. CM5-SDN
5) S-Bus Network Interface Board (5) - Part No. CM5-SNI

[N.B. On July 16 1997 power was turned off. The machine can be 
turned back on in its present location only until Friday, 22/AUG/97, 
when wiring changes are planned in that machine room.]
 
"The Institute reserves the right to reject any or all offers. MIT makes no
warranty of any kind, express or implied, with respect to this equipment.
This includes fitness for a particular purpose. It is the responsibility of 
those making an offer to determine, before making an offer, that the
equipment meets any conditions required by those making that offer. Thank you."
__________________________________________________________________________

Submit bids for Case #3971  
                before Monday, 8/Sept/97, 5:00 PM (EDT) to: 
*****************************************************************
* Michael F. McCarthy       * Phone:  (617)253-2779             *
* MIT Property Office       * FAX:    (617)253-2444             *
* E19-429                   * E-Mail: mmccarth@MIT.EDU          * 
* 77 Massachusetts Ave.     *                                   *
* Cambridge, MA 02139       *                                   *
*****************************************************************
__________________________________________________________________________

SYSTEM HISTORY 

The Project SCOUT CM-5 is housed in M.I.T.'s Laboratory for Computer 
Science (L.C.S.). The machine was acquired in 1993 as part of the ARPA-
sponsored project SCOUT, and used to accomplish the stated aim of the 
project of "fermenting collaborations between users, builders and
networkers of massively parallel computers". The CM-5 computer, developed
and manufactured by Thinking Machines Corporation, evolved from earlier
T.M.C. computers (the CM-2 and the CM-200) with an architecture targeted 
toward teraflops performance for large, complex, data-intensive applications.

The MIT hardware consists of a total of 128 32-MHz SPARC microprocessors,
each with 4 proprietary floating-point arithmetic units and 32 MB of local
memory attached to it. The system also includes a subsidiary 25 GB parallel 
file system for handling high-volume parallel application I/O. 
 
The system was operated under full maintenance contract 
from May of 1993 until March 20 1997.

On July 16 1997 power was turned off. The machine can be turned back on
in its present location only until Friday, 22/AUG/97 when wiring changes 
are planned in that machine room.

The system was used primarily for research but a description of an 
instructional use made of the machine can be found at
     http://www-erl.mit.edu/eaps/seminar/iap95/cnh/CM5Intro.html

Web sites about other CM5 sites and general information include:
     http://www.math.uic.edu/~hanson/cmg.html
     http://www.acl.lanl.gov/UserInfo/cm5admin.html
     http://ec.msc.edu/CM5/

__________________________________________________________________________
FUTURE MAINTENANCE

People submitting bids may wish to discuss future maintenance issues
with Connection Machine Services, a company that currently maintains
CM-5 equipment.
*****************************************************************
* Larry Stewart                * Phone:  (505) 820-1470         *
*                              * Cell:   (505) 690-7799         *
* Account Executive            * FAX:    (505) 820-0810         *
* Connection Machines Services * Home:   (505) 983-9670         *
* 1373 Camino Sin Salida       * Pager   (888) 712-4143         *
* Santa Fe, NM 87501           * E-Mail: stewart@ix.netcom.com  *
*****************************************************************
__________________________________________________________________________








Michael F. McCarthy
MIT Property Office
E19-429
77 Massachusetts Ave.
Cambridge, MA 02139
Ph   (617)253-2779
Fax  (617)253-2444


From owner-parkbench-comm@CS.UTK.EDU Mon Sep  1 05:44:50 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id FAA11838; Mon, 1 Sep 1997 05:44:50 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA07176; Mon, 1 Sep 1997 05:35:14 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA07160; Mon, 1 Sep 1997 05:34:44 -0400 (EDT)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA14311; Mon, 1 Sep 97 10:33:06 BST
Date: Mon,  1 Sep 97 10:19:23 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Final Announcement: Fall ParkBench Workshop
To: "Daniel A. Reed"  <reed@cs.uiuc.edu>,
        "J.C.T. Pool"  <jpool@cacr.caltech.edu>, a.j.grant@mcc.ac.uk,
        Antoine Hyaric  <Antoine.Hyaric@comlab.ox.ac.uk>,
        Ed Zaluska  <E.J.Zaluska@ecs.soton.ac.uk>,
        Fritz Ferstl  <ferstl@genias.de>, Hon W Yau  <hwyau@epcc.ed.ac.uk>,
        idh@soton.ac.uk, parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU,
        Paul Messina  <messina@cacr.caltech.edu>,
        R.Rankin@Queens-Belfast.AC.UK, rjc@soton.ac.uk, topic@mcc.ac.uk,
        Wolfgang Genzsch  <getup@genias.de>
Cc: lec@ecs.soton.ac.uk
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Face: ,<'y31|nlb,jCP5?km9\KD+>p9/e?:|$RRhY]e;#`awGHh=mrY.T??#]-*rt}l0*u`k2A7n
 KlqNG"u'-%cS@3|G[%=m%bSB[lfSn5n"gD4CU(j?1y?#SOkm!qw_=p%c#"6g&(+\Oy6T{4CEShal?z
 M)&Gd'Pb6Qc~>SPx{m[F55=]yY>cN>|/m5)T?q`OTjdQL=7-n%NT({;;$P*2[#7ZWL8baLoI_/F89,
 x'u`*$'<|ctKNYTSJuLV=!$QT3bN*>91V,a0Cc"_UsxwMKg\;#W2LZ$!`j?ZWp;byz~;y}2Dz6i7y%
 E&;gfnmI_~}+oifmWXJMHfWeezBL1("ZnFe!rnX[Q|,:IJ?iq+PePa/[3R4
X-Priority: 3 (Normal)
Message-Id: <Chameleon.873106125.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear all,

This is the FINAL ANNOUNCEMENT:

If you would like to attend this workshop please let Lesley Courtney 
(lec@ecs.soton.ac.uk) know by Friday 5th September 1997 at 
the latest as we need to confirm numbers.

Workshop details can be found at

http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/

Regards

Mark



-------------------------------------
Dr Mark Baker
University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 09/01/97 - Time: 10:19:23
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Wed Sep  3 15:37:55 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id PAA20262; Wed, 3 Sep 1997 15:37:55 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA08273; Wed, 3 Sep 1997 15:19:14 -0400 (EDT)
Received: from punt-2.mail.demon.net (punt-2b.mail.demon.net [194.217.242.6]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA08262; Wed, 3 Sep 1997 15:19:10 -0400 (EDT)
Received: from minnow.demon.co.uk ([158.152.73.63]) by punt-2.mail.demon.net
           id aa0626941; 3 Sep 97 17:35 BST
Message-ID: <pin21IA7KYD0Ew2z@minnow.demon.co.uk>
Date: Wed, 3 Sep 1997 16:31:07 +0100
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Prototype PICT release 1.0
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>

At their last meeting the Parkbench Committee recommended that an
interactive curve fitting tool be produced for the postprocessing and
parametrisation of Parkbench results using the latest Internet Web
technology. I have produced a prototype of such a tool as a Java applet
running on a Web page on the user's machine and called it PICT
(Parkbench Interactive Curve-fitting Tool). This is now ready for
evaluation and testing by the committee.

The tool provides the following features:

(1) Automatic plotting of Low-Level Parkbench output files from a URL
anywhere on the Web (At present limited to New COMMS1 and Raw data, but
easily extended to original COMMS1 and RINF1). This is useful for a
quick comparison of raw data.

(2) Automatic plotting of both 2- and 3-parameter curve-fits which are
produced by the benchmarks. Good for checking the quality of the fits.

(3) Allows manual rescaling of the graph range to suit the data, either
by typing in the required range values or by dragging out a range box
with the mouse.

(4) Allows the 2-parameter and 3-parameter performance curves to be
moved manually about the graph in order to fine-tune the fits. The curve
follows the mouse, and the RMS and MAX percentage errors are shown as the
curve moves. Alternatively, parameter values can be typed in and the
Manual button pressed, whereupon the curve for those values is plotted.

(5) The data file being plotted can be VIEWed, and a HELP button provides
a description of the action of each button in a separate window.

The PICT applet has been built on top of Leigh Brookshaw's 2D plotting
package, the URL for which is given at the bottom of the HELP window. The
features under the RESTART button are in his original code; I have just
added the 2-PARA and 3-PARA features.

The applet was developed using JDK1.0 beta on a PC with a 1600x1200
display, and works on the PC both locally and from my Web page with
appletviewer, MSIE 3.02 and Netscape 3.01. It has also been successfully
run on a Solaris Sun with NS3.01, but another Sun user has reported no
graphs and errors due to "wrong applet version". So please report your
experiences (both successes and failures) to me with all the
details.

To play with PICT turn your browser to:

     http://www.minnow.demon.co.uk/pict/source/pict1.html  or 
                                               pict1a.html 

pict1.html asks for 1000x732 pixels and suits PCs best (it's about the
minimum useful size).

pict1a.html asks for 1020x900 pixels and was necessary for the whole
applet to be visible on the Sun.

For those wishing to look closer all the source is provided and should
be downloadable. Suggestions for improvement, corrections or
constructive criticism are solicited.

I have asked for an agenda item to be included for the Parkbench meeting
on 11 Sept in Southampton so that PICT can be discussed. I look forward
to seeing some of you there.
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-lowlevel@CS.UTK.EDU Wed Sep 10 06:29:15 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA21129; Wed, 10 Sep 1997 06:29:14 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA20815; Wed, 10 Sep 1997 06:31:30 -0400 (EDT)
Received: from sun3.nsfnet-relay.ac.uk (sun3.nsfnet-relay.ac.uk [128.86.8.50]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id GAA20791; Wed, 10 Sep 1997 06:30:47 -0400 (EDT)
Received: from bright.ecs.soton.ac.uk by sun3.nsfnet-relay.ac.uk with JANET SMTP (PP); Wed, 10 Sep 1997 11:30:44 +0100
Received: from landlord.ecs.soton.ac.uk by bright.ecs.soton.ac.uk; Wed, 10 Sep 97 11:32:57 BST
From: Vladimir Getov <vsg@ecs.soton.ac.uk>
Received: from bill.ecs.soton.ac.uk by landlord.ecs.soton.ac.uk; Wed, 10 Sep 97 11:33:16 BST
Date: Wed, 10 Sep 97 11:33:13 BST
Message-Id: <2458.9709101033@bill.ecs.soton.ac.uk>
To: parkbench-lowlevel@CS.UTK.EDU, parkbench-comm@CS.UTK.EDU,
        parkbench-hpf@CS.UTK.EDU
Subject: ParkBench Committee Meeting - tentative Agenda

Dear Colleague,

The ParkBench (Parallel Benchmark Working Group)
will meet in Southampton, U.K. on 
September 11th, 1997 as part of the ParkBench Workshop.

The Workshop site will be the County Hotel in Southampton.

  County Hotel
  Highfield Lane
  Southampton, U.K.
  Phone: +44-(0)1703-359955


Please send us your comments about the tentative agenda:


14:30  Finalize meeting agenda
       Minutes of last meeting (Erich Strohmaier)

14:45  Changes to Current release:
         - Low Level COMMS benchmarks (Vladimir Getov)
         - NAS Parallel Benchmarks (Subhash Saini)

15:15  New benchmarks:
         - HPF Low Level benchmarks (Mark Baker)


15:30  ParkBench Performance Analysis Tools:
         - ParkBench Result Templates (Vladimir Getov and Mark Papiani)
         - Visualization of Parallel Benchmark Results - new GBIS
           (Mark Papiani and Flavio Bergamaschi)
         - Interactive Web-page Curve-fitting of Parallel Performance
           Measurements (Roger Hockney)


16:15  Demonstrations:
         - Java Low-Level Benchmarks (Vladimir Getov)
         - BenchView: Java Tool for Visualization of Parallel Benchmark Results
           (Mark Papiani and Flavio Bergamaschi)
         - PICT: An Interactive Web-page Curve-fitting Tool (Roger Hockney)


16:45  Other activities:
         - "Electronic Benchmarking Journal" - status report (Mark Baker)


       Miscellaneous
       Date and venue for next meeting

17:00       Adjourn


Tony Hey
Vladimir Getov
Erich Strohmaier

From owner-parkbench-lowlevel@CS.UTK.EDU Thu Sep 18 18:27:19 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id SAA12991; Thu, 18 Sep 1997 18:27:18 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id SAA29359; Thu, 18 Sep 1997 18:26:21 -0400 (EDT)
Received: from k2.llnl.gov (zosel@k2.llnl.gov [134.9.1.1]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id SAA29352; Thu, 18 Sep 1997 18:26:19 -0400 (EDT)
Received: (from zosel@localhost)
	by k2.llnl.gov (8.8.5/8.8.5/LLNL-Jun96) id PAA07246
	for parkbench-lowlevel@cs.utk.edu; Thu, 18 Sep 1997 15:26:16 -0700 (PDT)
Date: Thu, 18 Sep 1997 15:26:16 -0700 (PDT)
From: Mary E Zosel <zosel@k2.llnl.gov>
Message-Id: <199709182226.PAA07246@k2.llnl.gov>
To: parkbench-lowlevel@CS.UTK.EDU
Subject: any pthreads tests???

Does anyone know of any low-level performance tests for pthreads libraries???
I'm interested in both single processor performance of pthreads calls - 
and also multiprocessor (shared memory) calls ... to measure the overhead
of the calls.
  -mary zosel-   zosel@llnl.gov
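[A minimal sketch of the kind of single-processor overhead test being asked about: timing pthread_create/pthread_join and an uncontended mutex lock/unlock pair with gettimeofday. The function names and iteration counts are illustrative only, not from any ParkBench code.]

```c
#include <pthread.h>
#include <stddef.h>
#include <sys/time.h>

/* Wall-clock time in seconds, microsecond resolution */
static double wtime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1.0e6;
}

static void *worker(void *arg) { return arg; }

/* Average cost, in microseconds, of one pthread_create/pthread_join pair */
double bench_create_join(int iters)
{
    pthread_t tid;
    double t0 = wtime();
    for (int i = 0; i < iters; i++) {
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
    }
    return (wtime() - t0) / iters * 1.0e6;
}

/* Average cost, in microseconds, of one uncontended lock/unlock pair */
double bench_lock_unlock(int iters)
{
    pthread_mutex_t mtx;
    pthread_mutex_init(&mtx, NULL);
    double t0 = wtime();
    for (int i = 0; i < iters; i++) {
        pthread_mutex_lock(&mtx);
        pthread_mutex_unlock(&mtx);
    }
    double us = (wtime() - t0) / iters * 1.0e6;
    pthread_mutex_destroy(&mtx);
    return us;
}
```

A multiprocessor version would run the lock/unlock loop from several threads at once to expose contention overhead.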

From owner-parkbench-lowlevel@CS.UTK.EDU Sun Sep 21 09:13:20 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id JAA08699; Sun, 21 Sep 1997 09:13:20 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA15884; Sun, 21 Sep 1997 09:15:32 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA15877; Sun, 21 Sep 1997 09:15:30 -0400 (EDT)
Received: from mordillo (p41.ascend3.is2.bb.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA10322; Sun, 21 Sep 97 14:15:58 BST
Date: Sun, 21 Sep 97 13:32:56 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Re: any pthreads tests???
To: Mary E Zosel  <zosel@k2.llnl.gov>, parkbench-lowlevel@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <199709182226.PAA07246@k2.llnl.gov> 
Message-Id: <Chameleon.874845447.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Mary,

This has been talked about as one of the activities that Parkbench
would be interested in pursuing, but so far we have not had the
time or manpower to follow up our interest.

Ron Sercely at HP/CTCX was particularly interested in this area. Also,
I know the people at Manchester University wrote a number of
Pthreads codes - some of them benchmarks - for their KSR machine.

Hope this helps.

Regards

Mark


--- On Thu, 18 Sep 1997 15:26:16 -0700 (PDT)  Mary E Zosel <zosel@k2.llnl.gov> wrote:
> Does anyone know of any low-level performance tests for pthreads libraries???
> I'm interested in both single processor performance of pthreads calls - 
> and also multiprocessor (shared memory) calls ... to measure the overhead
> of the calls.
>   -mary zosel-   zosel@llnl.gov
> 

---------------End of Original Message-----------------

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 09/21/97 - Time: 13:32:57
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Wed Sep 24 06:04:19 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA23913; Wed, 24 Sep 1997 06:04:18 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA23163; Wed, 24 Sep 1997 05:46:35 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA23156; Wed, 24 Sep 1997 05:46:26 -0400 (EDT)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA29780; Wed, 24 Sep 97 10:47:01 BST
Date: Wed, 24 Sep 97 10:38:39 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: PC timers
To: parkbench-comm@CS.UTK.EDU, parkbench-low-level@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.875094053.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Can someone suggest an appropriate PC-based timer 
function (MS Visual C++ or Digital Visual Fortran)
to replace the usual gettimeofday call?

Cheers

Mark

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 09/24/97 - Time: 10:38:39
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------
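[One commonly used answer on Win32 is the QueryPerformanceCounter/QueryPerformanceFrequency pair, which MS Visual C++ exposes via <windows.h>; a hedged sketch of a drop-in wall-clock routine that falls back to the usual gettimeofday elsewhere. The function name dwalltime is illustrative.]

```c
#ifdef _WIN32
#include <windows.h>

/* Win32: high-resolution performance counter, typically sub-microsecond */
double dwalltime(void)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}

#else
#include <stddef.h>
#include <sys/time.h>

/* Elsewhere: the usual gettimeofday call, microsecond resolution */
double dwalltime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1.0e6;
}
#endif
```

Only differences between two readings are meaningful, since the two branches use different epochs.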


From owner-parkbench-comm@CS.UTK.EDU Thu Sep 25 10:11:01 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA20147; Thu, 25 Sep 1997 10:11:01 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA18087; Thu, 25 Sep 1997 09:24:56 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA18080; Thu, 25 Sep 1997 09:24:53 -0400 (EDT)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA12457; Thu, 25 Sep 97 14:25:35 BST
Date: Thu, 25 Sep 97 14:11:59 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: PC Time function
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.875193559.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Thanks to all for the timer info. I used the C function _ftime()
in the end because it has millisecond resolution. I just had
to get my head around using INTERFACE in F90 to include
the external C function.

I've inserted my version of the _ftime() timer below - I don't think
there are any obvious errors in it :-)

I also implemented the DFLIB F90 subroutine CALL GETTIM(hour, min, sec, hund);
it passed tick2 testing but only has 1/100-second resolution.

-------------------------------------------------------
/* Wall-clock timer based on _ftime(); needs <sys/timeb.h>
   (the MS C run-time header that declares struct _timeb). */
#include <sys/timeb.h>

double dwalltime00()
{
    struct _timeb timebuf;

    _ftime( &timebuf );

    /* Seconds since the epoch, with millisecond resolution */
    return (double) timebuf.time + (double) timebuf.millitm / 1000.0;
}

/* Aliases so Fortran compilers that append an underscore, or that
   upper-case external names, can link against the same routine */
double dwalltime00_()
{
	return dwalltime00();
}

double DWALLTIME00()
{
	return dwalltime00();
}
-------------------------------------------------------
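[The tick-resolution testing mentioned above can be sketched generically like this (an illustration of the idea, not the actual Parkbench tick1/tick2 code): spin until the clock moves and record the smallest non-zero step ever observed.]

```c
#include <stddef.h>
#include <sys/time.h>

/* Wall-clock time in seconds, microsecond resolution */
static double wtime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1.0e6;
}

/* Smallest non-zero difference between successive clock readings,
   an estimate of the timer's tick (resolution) in seconds. */
double timer_tick(int samples)
{
    double best = 1.0;
    for (int i = 0; i < samples; i++) {
        double t0 = wtime(), t1;
        do { t1 = wtime(); } while (t1 == t0);  /* spin until the clock moves */
        if (t1 - t0 < best)
            best = t1 - t0;
    }
    return best;
}
```

Run against GETTIM-style sources this would report roughly 0.01 s; against _ftime(), roughly 0.001 s.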




Cheers

Mark




-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 09/25/97 - Time: 14:11:59
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Tue Oct  7 06:35:04 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA26560; Tue, 7 Oct 1997 06:35:04 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA25697; Tue, 7 Oct 1997 06:10:11 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA25668; Tue, 7 Oct 1997 06:09:40 -0400 (EDT)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA05125; Tue, 7 Oct 97 11:09:53 BST
Date: Tue,  7 Oct 97 10:43:49 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Workshop Papers
To: "Aad J. van der Steen"  <steen@fys.ruu.nl>, Charles Grassl  <cmg@cray.com>,
        Clemens Thole  <clemens-august.thole@gmd.de>,
        David Snelling  <snelling@fecit.co.uk>,
        Erich Strohmaier  <erich@CS.UTK.EDU>,
        Grapham Nudd  <Graham.Nudd@dcs.warwick.ac.uk>,
        Klaus Stueben  <klaus.stueben@gmd.de>, parkbench-comm@CS.UTK.EDU,
        Roger Hockney  <roger@minnow.demon.co.uk>,
        Saini Subhash  <saini@nas.nasa.gov>,
        Vladimir Getov  <vsg@ecs.soto.ac.uk>,
        William Gropp  <gropp@mcs.anl.gov>
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.876218541.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

I am now back in the office and have a little time to follow
up on the Parkbench Workshop that took place a few weeks ago.

Firstly, I would like to thank everyone who attended - especially
all the speakers. Even though we did not attract hundreds of 
delegates to the workshop, I think the event was very successful 
- but I may be biased...

So, the plan is that in the first instance I will collect the slides
from all the speakers, package them up, and put them on the PEMCS
Web site.

We also decided to encourage all the speakers to produce
short papers on their talks, and to put all the workshop papers together
to create a special issue of the PEMCS journal.

Could the speakers therefore send me their slides (I would prefer
PowerPoint or Word versions if possible). I will harass you further
about the short papers in the near future.

Thanks in advance for your help.

Regards

Mark



-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 10/07/97 - Time: 10:43:49
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Sun Oct 12 09:55:57 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id JAA28908; Sun, 12 Oct 1997 09:55:57 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA08800; Sun, 12 Oct 1997 09:44:23 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA08793; Sun, 12 Oct 1997 09:44:20 -0400 (EDT)
Received: from mordillo (p26.nas4.is2.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA11347; Sun, 12 Oct 97 14:45:07 BST
Date: Sun, 12 Oct 97 14:35:10 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Equivalent to comms1
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.876663429.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Can someone point me at the equivalent of comms1 written in
C - either MPI or sockets (or even PVM if it's out there).

Cheers

Mark


-------------------------------------
Dr Mark Baker
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 10/12/97 - Time: 14:35:10
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Oct 13 16:30:04 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA17020; Mon, 13 Oct 1997 16:29:59 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA24297; Mon, 13 Oct 1997 16:02:05 -0400 (EDT)
Received: from dancer.cs.utk.edu (DANCER.CS.UTK.EDU [128.169.92.77]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA24288; Mon, 13 Oct 1997 16:02:03 -0400 (EDT)
From: Philip Mucci <mucci@CS.UTK.EDU>
Received:  by dancer.cs.utk.edu (cf v2.11c-UTK)
          id QAA02925; Mon, 13 Oct 1997 16:02:00 -0400
Date: Mon, 13 Oct 1997 16:02:00 -0400
Message-Id: <199710132002.QAA02925@dancer.cs.utk.edu>
To: mab@sis.port.ac.uk, parkbench-comm@CS.UTK.EDU
Subject: Re: Equivalent to comms1
In-Reply-To: <Chameleon.876663429.mab@mordillo>
X-Mailer: [XMailTool v3.1.2b]


I would check out my mpbench on my web page....
It does PVM and MPI for now...

> Can someone point me at the equivalent of comms1 written in
> C - either MPI or sockets (or even PVM if it's out there).
> 
> Cheers
> 
> Mark
> 
> 
> -------------------------------------
> Dr Mark Baker
> CSM, University of Portsmouth, Hants, UK
> Tel: +44 1705 844285	Fax: +44 1705 844006
> E-mail: mab@sis.port.ac.uk
> Date: 10/12/97 - Time: 14:35:10
> URL http://www.sis.port.ac.uk/~mab/
> -------------------------------------
> 

/%*\ Philip J. Mucci | GRA in CS under Dr. JJ Dongarra /*%\
\*%/ http://www.cs.utk.edu/~mucci  PVM/Active Messages \%*/ 

From owner-parkbench-comm@CS.UTK.EDU Mon Oct 20 10:37:14 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA15359; Mon, 20 Oct 1997 10:37:14 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA07990; Mon, 20 Oct 1997 10:19:41 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA07691; Mon, 20 Oct 1997 10:17:09 -0400 (EDT)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA16636; Mon, 20 Oct 97 15:17:33 BST
Date: Mon, 20 Oct 97 15:02:39 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: PEMCS Short Article
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.877356527.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

I've just put up (at last!!) the first PEMCS short article at
http://hpc-journals.ecs.soton.ac.uk/PEMCS/Articles/

At the moment there is not much of a "house style" for the format
of the papers and articles - this will hopefully be developed over
the coming months.

I expect to put the first full paper up on the Web in the next week or
so.

Comments, ideas and help with the journal and its Web site are most
welcome.

Regards

Mark

------------------------------------------------------------------------------------------


COMPARING COMMUNICATION PERFORMANCE OF MPI ON THE CRAY RESEARCH T3E-600 AND IBM SP-2
	
                                by
                Glenn R. Luecke and James J. Coyle
                    Iowa State University
                 Ames, Iowa 50011-2251, USA
                
                     Waqar ul Haque
                University of Northern British Columbia
               Prince George, British Columbia, Canada V2N 4Z9
                              

Abstract 

This paper reports the performance of the Cray Research T3E and IBM SP-2 on a collection of 
communication tests that use MPI for the message passing. These tests have been designed to 
evaluate the performance of communication patterns that we feel are likely to occur in 
scientific programs. Communication tests were performed for messages of sizes 8 Bytes (B), 
1 KB, 100 KB, and 10 MB with 2, 4, 8, 16, 32 and 64 processors. Both machines provided a high 
level of concurrency for the nearest neighbor communication tests and moderate concurrency on 
the broadcast operations. On the tests used, the T3E significantly outperformed the SP-2,
being at least three times faster on most tests. 


-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 10/20/97 - Time: 15:02:42
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Sat Oct 25 08:52:33 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA12875; Sat, 25 Oct 1997 08:52:33 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA05256; Sat, 25 Oct 1997 08:41:15 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA05244; Sat, 25 Oct 1997 08:41:05 -0400 (EDT)
Received: from mordillo (p16.nas2.is2.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA01764; Sat, 25 Oct 97 13:41:26 BST
Date: Sat, 25 Oct 97 13:27:24 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Parkbench Workshop Talks - On line
To: Chuck Koelbel  <chk@cs.rice.edu>,
        Clemens Thole  <clemens-august.thole@gmd.de>,
        Graham Nudd  <Graham.Nudd@dcs.warwick.ac.uk>,
        Guy Robinson  <robinson@arsc.edu>,
        Klaus Stueben  <klaus.stueben@gmd.de>, parkbench-comm@CS.UTK.EDU,
        William Gropp  <gropp@mcs.anl.gov>
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 2 (High)
Message-Id: <Chameleon.877782734.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

I have put the talks received so far up at...

http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/abstracts.html

Could the speakers who have not yet passed their talks on to me
please do so.

Thanks in advance.

Regards

Mark


-------------------------------------
Dr Mark Baker
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 10/25/97 - Time: 13:27:25
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Fri Oct 31 08:22:47 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA19412; Fri, 31 Oct 1997 08:22:46 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA15140; Fri, 31 Oct 1997 07:44:09 -0500 (EST)
Received: from post.mail.demon.net (post-20.mail.demon.net [194.217.242.27]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA15133; Fri, 31 Oct 1997 07:44:05 -0500 (EST)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa2017784; 31 Oct 97 12:25 GMT
Message-ID: <uFBOiBAJ2cW0EwdA@minnow.demon.co.uk>
Date: Fri, 31 Oct 1997 12:22:33 +0000
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Announcing PICT2
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>

                         ANNOUNCING PICT2
                         ++++++++++++++++

The prototype Parkbench Interactive Curve Fitting Tool (PICT1) that was
demonstrated at the Southampton meeting of Parkbench in September was
difficult to use on small screens because the image was too large and
could not be reduced in size to suit the users' screen size. Sorry, I
had developed it on my own 1600x1200 display without realising that most
users considered 800x600 as large!

Well, the new version PICT2 that is now on my web page allows for the
full range of screen sizes: 640x480, 800x600, 1024x768, >=1600x1200, and
also allows the user to customise his own display by selecting a font
size and screen width and height. So the new version should be usable by
all -- I hope!

Another problem at Southampton was that the display workstation was very
old and too slow in MHz to do the job. I use a P133 Pentium and the
graph lines move around instantly, but if you only have a 20MHz machine,
for example, the response will probably be too slow to be useful for real
interactive curve fitting. There is nothing I can do about this except
to suggest that you use the need to use PICT as an excuse (I mean
justification) to upgrade your equipment.

PICT2 still relies on the use of New COMMS1 to compute the least-squares
2-para fit and the 3-point fit for the 3-para fit. The next step will be
to put these features into PICT, but that is a fair amount of code to get
right and I thought it best to solve the screen-size problem first. But
remember, the key point about PICT is that it allows interactive manual
fitting and display that is not otherwise available.

To try out PICT2 turn your browser to:

             http://www.minnow.demon.co.uk/pict/source/pict2a.html

and follow the instructions. When you have a good PICT Frame displayed,
press the HELP button for a description of the button actions.

Please report problems, experiences (good and bad), suggestions to me
at:

             roger@minnow.demon.co.uk

I need feedback in order to improve the tool.  

Best wishes to you all

Roger
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Tue Nov 11 06:21:05 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA18373; Tue, 11 Nov 1997 06:21:05 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA27963; Tue, 11 Nov 1997 06:06:45 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA27930; Tue, 11 Nov 1997 06:06:15 -0500 (EST)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA23083; Tue, 11 Nov 97 11:07:22 GMT
Date: Tue, 11 Nov 97 11:00:36 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Couple of Announcements
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.879246493.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

A couple of announcements...

Firstly, the majority of the papers presented at the Fall ParkBench Workshop
on Thursday 11th / Friday 12th September 1997 at the University of Southampton
are now on-line and can be found at...

http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/abstracts.html

or from

http://hpc-journals.ecs.soton.ac.uk/PEMCS/ (click on News in the left frame)...

Secondly, the first full paper for the electronic journal Performance Evaluation
and Modelling of Computer Systems (PEMCS),

"PERFORM - A Fast Simulator For Estimating Program Execution Time" by Alistair
Dunlop and Tony Hey, Department of Electronics and Computer Science, University
of Southampton, Southampton, SO17 1BJ, U.K.,

can be found at...

http://hpc-journals.ecs.soton.ac.uk/PEMCS/Papers/vol1.html

See you all at the Parkbench BOF at SC'97...


Mark



-------------------------------------
Dr Mark Baker
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 11/11/97 - Time: 11:00:36
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-lowlevel@CS.UTK.EDU Wed Nov 12 21:30:42 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id VAA13985; Wed, 12 Nov 1997 21:30:42 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA06841; Wed, 12 Nov 1997 21:31:46 -0500 (EST)
Received: from rudolph.cs.utk.edu (RUDOLPH.CS.UTK.EDU [128.169.92.87]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id VAA06806; Wed, 12 Nov 1997 21:31:01 -0500 (EST)
Received: from localhost by rudolph.cs.utk.edu with SMTP (cf v2.11c-UTK)
          id VAA24812; Wed, 12 Nov 1997 21:31:01 -0500
Date: Wed, 12 Nov 1997 21:31:00 -0500 (EST)
From: Erich Strohmaier <erich@CS.UTK.EDU>
To: parkbench-hpf@CS.UTK.EDU, parkbench-lowlevel@CS.UTK.EDU,
        parkbench-comm@CS.UTK.EDU
Subject: ParkBench BOF session at the SC'97
Message-ID: <Pine.SUN.3.96.971112212856.24760D-100000@rudolph.cs.utk.edu>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear Colleague,

  The ParkBench (PARallel Kernels and BENCHmarks) committee has
organized a BOF session at SC'97 in San Jose.

   Room:   Convention Center Room C1 
   Time:   Wednesday  5:30pm              


We will talk about the latest release, new results available and future
plans.

                Tentative Agenda of the BOF

   - Introduction, background, WWW-Server 
   - Current Release of ParkBench
   - Low Level Performance Evaluation Tools
   - LinAlg Kernel Benchmarks
   - NAS Parallel Benchmarks,  including latest results
   - Plans for the next Release 
   - Electronic Journal of Performance Evaluation and Modeling
     for Computer Systems
   - Questions from the floor / discussion 


  Please mark your calendar and plan to attend.


Jack Dongarra
Tony Hey 
Erich Strohmaier




From owner-parkbench-lowlevel@CS.UTK.EDU Thu Nov 13 06:30:40 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA07097; Thu, 13 Nov 1997 06:30:40 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA01844; Thu, 13 Nov 1997 05:55:24 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA01835; Thu, 13 Nov 1997 05:55:18 -0500 (EST)
Received: from mordillo (p19.nas2.is2.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA18430; Thu, 13 Nov 97 10:56:11 GMT
Date: Thu, 13 Nov 97 10:48:53 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Fall 97 Parkbench Committee Meeting Minutes
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU,
        parkbench-lowlevel@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <Pine.SUN.3.96.971112212856.24760D-100000@rudolph.cs.utk.edu> 
Message-Id: <Chameleon.879418489.mab@mordillo>
Mime-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="mordillo:879418490:877:126:21579"

--mordillo:879418490:877:126:21579
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

Here are the minutes of the Parkbench committee meeting held at The County
Hotel in Southampton during the Fall 97 Parkbench Workshop.

For those of you with a MIME-compliant mail reader I've attached a formatted
Word 7 doc.

Regards

Mark

-----------------------------------------------------------------------------

Parkbench Committee Meeting
Held during the Fall Parkbench Workshop

The County Hotel
Southampton, UK

1515,  11th September 1997


Meeting Participation List:

Mark Baker - Univ. of Portsmouth (mab@sis.port.ac.uk)
Flavio Bergamaschi  - Univ of Southampton (fab@ecs.soton.ac.uk)
Jack Dongarra - Univ. of Tenn./ORNL (dongarra@cs.utk.edu)
Vladimir Getov  - Univ. of Westminster (getovv@wmin.ac.uk)
Charles Grassl - SGI/Cray (cmg@cray.com)
William Gropp - ANL (gropp@mcs.anl.gov)
Tony Hey - Univ. of Southampton (ajgh@ecs.soton.ac.uk)
Roger Hockney - Univ. of Westminster (roger@minnow.demon.co.uk)
Mark Papiani - Univ of Southampton (mp@ecs.soton.ac.uk)
Subhash Saini - NASA Ames (saini@nas.nasa.gov)
Dave Snelling - FECIT (snelling@fecit.co.uk)
Aad J. van der Steen  - RUU (steen@fys.ruu.nl)
Erich Strohmaier - Univ. of Tennessee (erich@cs.utk.edu)
Klaus Stueben - GMD  (klaus.stueben@gmd.de)

Meeting Activities and Actions

Tony Hey chaired the meeting.

Minutes from the last meeting were seven pages long and it was decided that only the actions from the
last meeting would be reviewed. The actions from the last meeting were reviewed - a short discussion
about each took place. A discussion about interaction with SPEC-HPG was initiated.

Comms Low-Level Benchmarks 

Vladimir Getov gave a short presentation on the current status of the Parkbench Comms benchmarks.
Charles Grassl was asked to explain how his new Comms programs worked and the rationale behind them.
A long discussion ensued.

Action - Create a formal proposal of alternatives or additions to the comms low-level benchmarks for
SC'97 - Charles Grassl.

Action - Members should look at the PALLAS version of the low-level benchmarks (based on 
Genesis/RAPS).

Action - Erich Strohmaier and Vladimir Getov will discuss the efforts needed to split up Parkbench
and add in the new Comms1 benchmark (with new curve-fitting routine).

NPB - Subhash Saini reported on the status of the NAS Parallel Benchmarks.

HPF - Mark Baker read Chuck Koelbel's email about CEWES HPCM HPF efforts.

Action - Subhash Saini will let Rice know that Gina should start off from the single NAS codes.

Electronic Journal - Mark Baker and Tony Hey reported on the electronic journal PEMCS and its Web
site. It was agreed that this would be discussed further informally.

Parkbench Report - Erich Strohmaier reported on the efforts of creating a new Parkbench report. A
short discussion about this ensued.

Action - Jack Dongarra / Tony Hey will talk to other members about the potential efforts that could
be put into a Parkbench report II by SC'97.

Funding Efforts

Jack Dongarra's recent benchmarking proposal was turned down. Tony Hey mentioned the possibility of
entering a proposal to the EU.
Possibility of a joint EU / NSF bid.

Mark Baker asked if SIO would be interested in being more closely involved. William Gropp reported
that SIO was actually winding down and so formal association was not really an option.

AOB

The participants were then invited by Tony to move to the University of Southampton (bldg. 16) for
the Parkbench demonstrations, which included:

-- Java Low-Level Benchmarks (Vladimir Getov)
-- BenchView: Java Tool for Visualization of Parallel Benchmark Results (Mark Papiani and Flavio
   Bergamaschi)
-- PICT: An Interactive Web-page Curve-fitting Tool (Roger Hockney)

Jack Dongarra informed the committee of the Parkbench BOF at SC'97 (Wednesday at 3.30PM).

The meeting was wound up by Tony Hey at 1630.

-----------------------------------------------------------------------------




-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 11/13/97 - Time: 10:48:53
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------

////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////UgBvAG8AdAAg
AEUAbgB0AHIAeQAAAGspDUphY2sgRG9uZ2FycmEgLSBVbml2LiBvZiBUZW5u
Li9PUk5MIChkbxYABQH//////////wEAAAAACQIAAAAAAMAAAAAAAABGAAAA
AKD5PUK9vrwBEKvSgiLwvAETAAAAQAMAAGdldG9XAG8AcgBkAEQAbwBjAHUA
bQBlAG4AdAAAAHNzbCAtIFNHSS9DcmF5IChjbWdAY3JheS5jb20pDVdpbGxp
YW0gGgACAQIAAAADAAAA/////3BwQG1jcy5hbmwuZ292KQ1Ub255IEhleSAt
IFVuaXYuIG9mIAAAAAAQHgAAdG9uIAEAQwBvAG0AcABPAGIAagAAAC51aykN
Um9nZXIgSG9ja25leSAtIFVuaXYuIG9mIFdlc3RtaW5pc3RlciAocm8SAAIB
////////////////LmNvLnVrKQ1NYXJrIFBhcGlhbmkAAAAAAAAAAAAAAAAA
AAAAAAAAAGoAAABtcEBlBQBTAHUAbQBtAGEAcgB5AEkAbgBmAG8AcgBtAGEA
dABpAG8AbgAAAHMgKHNhaW5pQG5hcy5uYXNhLmdvdikNRCgAAgH/////BAAA
AP////9FQ0lUIChzbmVsbGluZ0BmZWNpdAAAAAAAAAAAAAAAAAAAAAACAAAA
vAEAAHRlZW4BAAAA/v///wMAAAAEAAAABQAAAAYAAAAHAAAACAAAAP7///8K
AAAACwAAAAwAAAD+////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
/////////////////////////////////wEA/v8DCgAA/////wAJAgAAAAAA
wAAAAAAAAEYYAAAATWljcm9zb2Z0IFdvcmQgRG9jdW1lbnQACgAAAE1TV29y
ZERvYwAQAAAAV29yZC5Eb2N1bWVudC42APQ5snEAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAA/v8AAAQAAgAAAAAAAAAAAAAAAAAAAAAAAQAA
AOCFn/L5T2gQq5EIACsns9kwAAAAjAEAABIAAAABAAAAmAAAAAIAAACgAAAA
AwAAAKwAAAAEAAAAuAAAAAUAAADMAAAABgAAANgAAAAHAAAA5AAAAAgAAAD0
AAAACQAAAAgBAAASAAAAFAEAAAoAAAA8AQAACwAAAEgBAAAMAAAAVAEAAA0A
AABgAQAADgAAAGwBAAAPAAAAdAEAABAAAAB8AQAAEwAAAIQBAAACAAAA5AQA
AB4AAAABAAAAAAAGAB4AAAABAAAAAFdSTR4AAAAMAAAATWFyayAgQmFrZXIA
HgAAAAEAAAAAOmkQHgAAAAEAAAAAAAAAHgAAAAcAAABOb3JtYWwAYR4AAAAM
AAAATWFyayAgQmFrZXIAHgAAAAIAAAA3AAQAHgAAAB4AAABNaWNyb3NvZnQg
V29yZCBmb3IgV2luZG93cyA5NQAAAEAAAAAAKC3aDAAAAEAAAAAAAAAABQBE
AG8AYwB1AG0AZQBuAHQAUwB1AG0AbQBhAHIAeQBJAG4AZgBvAHIAbQBhAHQA
aQBvAG4AAAAAAAAAAAAAADgAAgD///////////////8AAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJAAAA6AAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAP///////////////wAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAA////////////////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/
//////////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAD+/wAABAACAAAAAAAAAAAAAAAAAAAAAAABAAAAAtXN
1ZwuGxCTlwgAKyz5rjAAAAC4AAAACAAAAAEAAABIAAAADwAAAFAAAAAEAAAA
dAAAAAUAAAB8AAAABgAAAIQAAAALAAAAjAAAABAAAACUAAAADAAAAJwAAAAC
AAAA5AQAAB4AAAAZAAAAVW5pdmVyc2l0eSBvZiBQb3J0c21vdXRoAAAAAAMA
AAAAOgAAAwAAABcAAAADAAAABQAAAAsAAAAAAAAACwAAAAAAAAAMEAAAAgAA
AB4AAAABAAAAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AABAAAAAADhSnMW+vAFAAAAAANR+ciLwvAEDAAAAAgAAAAMAAAD0AQAAAwAA
ACcLAAADAAAAAAAAAAAAAAD+/wAABAACAAAAAAAAAAAAAAAAAAAAAAABAAAA
AtXN1ZwuGxCTlwgAKyz5rjAAAAC4AAAACAAAAAEAAABIAAAADwAAAFAAAAAE
AAAAdAAAAAUAAAB8AAAABgAAAIQAAAALAAAAjAAAABAAAACUAAAADAAAAJwA
AAACAAAA5AQAAB4AAAAZAAAAVW5pdmVyc2l0eSBvZiBQb3J0c21vdXRoAAAA
AAMAAAAAOgAAAwAAABcAAAADAAAABQAAAAsAAAAAAAAACwAAAAAAAAAMEAAA
AgAAAB4AAAABAAAAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAA

--mordillo:879418490:877:126:21579--

From owner-parkbench-comm@CS.UTK.EDU Thu Nov 13 06:31:53 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA07105; Thu, 13 Nov 1997 06:31:52 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA01880; Thu, 13 Nov 1997 05:56:05 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA01835; Thu, 13 Nov 1997 05:55:18 -0500 (EST)
Received: from mordillo (p19.nas2.is2.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA18430; Thu, 13 Nov 97 10:56:11 GMT
Date: Thu, 13 Nov 97 10:48:53 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Fall 97 Parkbench Committee Meeting Minutes
To: parkbench-comm@CS.UTK.EDU, parkbench-hpf@CS.UTK.EDU,
        parkbench-lowlevel@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <Pine.SUN.3.96.971112212856.24760D-100000@rudolph.cs.utk.edu> 
Message-Id: <Chameleon.879418489.mab@mordillo>
Mime-Version: 1.0
Content-Type: MULTIPART/MIXED; BOUNDARY="mordillo:879418490:877:126:21579"

--mordillo:879418490:877:126:21579
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

Here are the minutes of the Parkbench committee meeting held at The County
Hotel in Southampton during the Fall 97 Parkbench Workshop.

For those of you with a MIME-compliant mail-reader I've attached a formatted
Word 7 document.

Regards

Mark

-----------------------------------------------------------------------------

Parkbench Committee Meeting
Held during the Fall Parkbench Workshop

The County Hotel
Southampton, UK

1515,  11th September 1997


Meeting Participation List:

Mark Baker - Univ. of Portsmouth (mab@sis.port.ac.uk)
Flavio Bergamaschi  - Univ of Southampton (fab@ecs.soton.ac.uk)
Jack Dongarra - Univ. of Tenn./ORNL (dongarra@cs.utk.edu)
Vladimir Getov  - Univ. of Westminster (getovv@wmin.ac.uk)
Charles Grassl - SGI/Cray (cmg@cray.com)
William Gropp - ANL (gropp@mcs.anl.gov)
Tony Hey - Univ. of Southampton (ajgh@ecs.soton.ac.uk)
Roger Hockney - Univ. of Westminster (roger@minnow.demon.co.uk)
Mark Papiani - Univ of Southampton (mp@ecs.soton.ac.uk)
Subhash Saini - NASA Ames (saini@nas.nasa.gov)
Dave Snelling - FECIT (snelling@fecit.co.uk)
Aad J. van der Steen  - RUU (steen@fys.ruu.nl)
Erich Strohmaier - Univ. of Tennessee (erich@cs.utk.edu)
Klaus Stueben - GMD  (klaus.stueben@gmd.de)

Meeting Activities and Actions

Tony Hey chaired the meeting.

Minutes from the last meeting were seven pages long, so it was decided that only
the actions from the last meeting would be reviewed. The actions were reviewed,
with a short discussion about each. A discussion about interaction with
SPEC-HPG was initiated.

Comms Low-Level Benchmarks 

Vladimir Getov gave a short presentation on the current status of the Parkbench
Comms benchmarks. Charles Grassl was asked to explain how his new Comms
programs work and the rationale behind them. A long discussion ensued.

Action - Create a formal proposal of alternatives or additions to the comms
low-level benchmarks for SC'97 - Charles Grassl.

Action - Members should look at the PALLAS version of the low-level benchmarks (based on 
Genesis/RAPS).

Action - Erich Strohmaier and Vladimir Getov will discuss the effort needed to
split up Parkbench and add in the new Comms1 benchmark (with the new
curve-fitting routine).

NPB - Subhash Saini reported on the status of the NAS Parallel Benchmarks.

HPF - Mark Baker read Chuck Koebel's email about CEWES HPCM HPF efforts.

Action - Subhash Saini will let RICE know that Gina should start off from the single NAS codes.

Electronic Journal - Mark Baker and Tony Hey reported on the electronic journal
PEMCS and its Web site. It was agreed that this would be discussed further
informally.

Parkbench Report - Erich Strohmaier reported on the effort of creating a new
Parkbench report. A short discussion about this ensued.

Action - Jack Dongarra / Tony Hey will talk to other members about the
potential effort that could be put into a Parkbench report II by SC'97.

Funding Efforts

Jack Dongarra's recent benchmarking proposal was turned down. Tony Hey
mentioned the possibility of entering a proposal to the EU, perhaps as a joint
EU / NSF bid.

Mark Baker asked if SIO would be interested in being more closely involved.
William Gropp reported that SIO was actually winding down, so formal
association was not really an option.

AOB

The participants were then invited by Tony to move to the University of
Southampton (bldg. 16) for the Parkbench demonstrations, which included:

-- Java Low-Level Benchmarks (Vladimir Getov)
-- BenchView: Java Tool for Visualization of Parallel Benchmark Results (Mark Papiani and Flavio
   Bergamaschi)
-- PICT: An Interactive Web-page Curve-fitting Tool (Roger Hockney)

Jack Dongarra informed the committee of the Parkbench BOF at SC'97 (Wednesday at 3.30PM).

The meeting was wound up by Tony Hey at 1630.

-----------------------------------------------------------------------------




-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 11/13/97 - Time: 10:48:53
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------

--mordillo:879418490:877:126:21579
Content-Type: APPLICATION/msword; name="minutes-fall-97.doc"
Content-Transfer-Encoding: BASE64
Content-Description: minutes-fall-97.doc

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AABAAAAAADhSnMW+vAFAAAAAANR+ciLwvAEDAAAAAgAAAAMAAAD0AQAAAwAA
ACcLAAADAAAAAAAAAAAAAAD+/wAABAACAAAAAAAAAAAAAAAAAAAAAAABAAAA
AtXN1ZwuGxCTlwgAKyz5rjAAAAC4AAAACAAAAAEAAABIAAAADwAAAFAAAAAE
AAAAdAAAAAUAAAB8AAAABgAAAIQAAAALAAAAjAAAABAAAACUAAAADAAAAJwA
AAACAAAA5AQAAB4AAAAZAAAAVW5pdmVyc2l0eSBvZiBQb3J0c21vdXRoAAAA
AAMAAAAAOgAAAwAAABcAAAADAAAABQAAAAsAAAAAAAAACwAAAAAAAAAMEAAA
AgAAAB4AAAABAAAAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAA

--mordillo:879418490:877:126:21579--

From owner-parkbench-comm@CS.UTK.EDU Mon Nov 17 08:32:09 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA28026; Mon, 17 Nov 1997 08:32:09 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA07698; Mon, 17 Nov 1997 07:58:13 -0500 (EST)
Received: from post.mail.demon.net (post-20.mail.demon.net [194.217.242.27]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA07665; Mon, 17 Nov 1997 07:57:54 -0500 (EST)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa2024828; 17 Nov 97 12:43 GMT
Message-ID: <06u4dCAfsDc0Ew8p@minnow.demon.co.uk>
Date: Mon, 17 Nov 1997 12:39:59 +0000
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: To the PARKBENCH97 BOF
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>

                 GREETINGS TO THE PARKBENCH 1997 BOF
                 -----------------------------------
I am not able to attend the Parkbench BOF this year but would like to
make the following input:

Chairman: Please express my apologies for absence to the meeting.


Agenda Item: Low-Level Performance Evaluation tools.
             --------------------------------------
The latest version of the Parkbench Interactive Curve Fitting Tool 
(PICT2) is on my Web page at:

      http://www.minnow.demon.co.uk/pict/source/pict2a.html

I believe that this solves the problem of displaying on screens of
different sizes. Please try it and give me feedback (I have had little
so far, so I don't know how worthwhile it is!). 

This plots and allows manual interactive curve fitting of data 
anywhere on the Web in raw-data, Original COMMS1, and New COMMS1 
format. However, it still relies on COMMS1 calculating the least 
squares 2-Para and 3-Point 3-Para fits. 
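For readers who have not seen the model behind these fits: the 2-para fit is an ordinary least-squares fit of the linear timing model t(n) = t0 + n/rinf. A minimal Python sketch (illustrative only; the actual COMMS1 code is Fortran, e.g. LSTSQ.f, and the sample numbers below are invented):

```python
# Illustrative sketch of the 2-parameter fit t(n) = t0 + n/rinf.
# Not the actual COMMS1/LSTSQ.f code; the sample data is synthetic.

def fit_two_param(lengths, times):
    """Ordinary least-squares fit of t(n) = t0 + n/rinf.

    Returns (t0, rinf, nhalf): startup time, asymptotic rate, and
    the half-performance message length nhalf = t0 * rinf.
    """
    m = len(lengths)
    sx, sy = sum(lengths), sum(times)
    sxx = sum(n * n for n in lengths)
    sxy = sum(n * t for n, t in zip(lengths, times))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    t0 = (sy - slope * sx) / m
    rinf = 1.0 / slope          # asymptotic bandwidth (bytes per time unit)
    return t0, rinf, t0 * rinf

# Synthetic timings obeying the model exactly: t0 = 20 us, rinf = 50 bytes/us
data_n = [0, 100, 1000, 10000]
data_t = [20.0 + n / 50.0 for n in data_n]
t0, rinf, nhalf = fit_two_param(data_n, data_t)   # -> 20.0, 50.0, 1000.0
```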

Agenda Item : Plans for the next Release.
              --------------------------
Just a reminder that New COMMS1, as announced in my email to the
committee of 16 Feb 1997, was designed as the minimum set of changes
to the existing release needed to solve the problems raised at the
beginning of the year. It involves new versions of 5 routines and
2 entirely new routines. In addition, the makefiles need the 2 new
routines added where appropriate. We have incorporated these changes
at Westminster in the existing release without trouble.

I believe that these changes should be incorporated in the next release.

In summary:

New COMMS1

In directory:

http://www.minnow.demon.co.uk/Pbench/comms1/

The 5 Changed Routines:

(1) File COMMS1_1.F replaces

        ParkBench/Low_Level/comms1/src_mpi/COMMS1.f

(2) File COMMS1_1.INC replaces

        ParkBench/Low_Level/comms1/src_mpi/comms1.inc

(3) File ESTCOM_1.F replaces

        ParkBench/Low_Level/comms1/src_mpi/ESTCOM.f

(4) File LSTSQ_1.F replaces

        ParkBench/lib/Low_Level/LSTSQ.f

(5) File CHECK_1.F replaces

        ParkBench/lib/Low_Level/CHECK.f

The 2 New Routines:

(6) File LINERR_1.F add as

        ParkBench/lib/Low_Level/LINERR.f

(7) File VPOWER_1.F add as

        ParkBench/lib/Low_Level/VPOWER.f


HAVE A NICE MEETING, and best wishes to you all,

Roger Hockney

-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Mon Dec  1 08:38:55 1997
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA05062; Mon, 1 Dec 1997 08:38:55 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA20432; Mon, 1 Dec 1997 08:03:34 -0500 (EST)
Received: from hermes.lsi.usp.br (hermes.lsi.usp.br [143.107.161.220]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id IAA20425; Mon, 1 Dec 1997 08:03:30 -0500 (EST)
Received: from cali.lsi.usp.br (cali.lsi.usp.br [10.0.161.7]) by hermes.lsi.usp.br (8.8.5/8.7.3) with SMTP id LAA05866; Mon, 1 Dec 1997 11:03:20 -0200 (BDB)
Message-ID: <34830ABD.487C@lsi.usp.br>
Date: Mon, 01 Dec 1997 11:06:37 -0800
From: Martha Torres <mxtd@lsi.usp.br>
Organization: LSI
X-Mailer: Mozilla 3.01Gold (Win95; I)
MIME-Version: 1.0
To: parkbench-comm@CS.UTK.EDU
CC: mxtd@lsi.usp.br
Subject: compiling ParkBench for MPICH
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

To the ParkBench Committee

Dear Sirs,

I am a Ph.D. student working on collective communication operations.
In particular, I am interested in quantifying the influence of
collective communication operations on the total execution time of
several MPI programs.

My platform is a cluster of 8 dual Pentium Pro nodes interconnected
by 100 Mb/s Fast Ethernet. I use MPICH version 1.1 with the fort77
and cc compilers.

I have downloaded ParkBench.tar from netlib and followed all the
instructions, but some programs did not work:

1. Low_Level/poly1, poly2, rinf1, tick1, tick2
These did not compile; the following error appears:
ParkBench/lib/LINUX/ParkBench_misc.a: No such file or directory.
How do I create this library?

2. Kernels/LU_solver, QR, TRD
These also did not compile; the following errors appear:
ParkBench/lib/LINUX/pblas_subset.a: In function 'pberror_':
undefined reference to 'blacs_gridinfo_'
undefined reference to 'blacs_abort_'

3. Comp_Apps/PSTSWM and Kernels/MATMUL
These compiled but did not run.

Thanks in advance, 

Best Regards
Martha Torres
Laboratorio de Sistema Integraveis
University of Sao Paulo
Sao Paulo - S.P. Brazil


From owner-parkbench-lowlevel@CS.UTK.EDU Wed Dec  3 02:22:07 1997
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id CAA13224; Wed, 3 Dec 1997 02:22:07 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id CAA11602; Wed, 3 Dec 1997 02:22:29 -0500 (EST)
Received: from soran.pacific.net.sg (soran.pacific.net.sg [203.120.90.76]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id CAA11594; Wed, 3 Dec 1997 02:22:26 -0500 (EST)
From: <htchng@honda.insasbhd.com>
Received: from pop1.pacific.net.sg (pop1.pacific.net.sg [203.120.90.85])
	by soran.pacific.net.sg with ESMTP
	id PAA08723 for <pbwg-compactapp@cs.utk.edu>; Wed, 3 Dec 1997 15:22:07 +0800 (SGT)
Received: from pacific.net.sg ([203.116.15.109])
        by pop1.pacific.net.sg with SMTP
        id PAA19445 for <pbwg-compactapp@cs.utk.edu>; Wed, 3 Dec 1997 15:22:19 +0800 (SGT)
Message-Id: <199712030722.PAA19445@pop1.pacific.net.sg>
To: pbwg-compactapp@CS.UTK.EDU
Date: Wed,  3 Dec 97 15:25:30 +0800
Subject: Seeking Importer for Blank CD-R and  Computer Parts
X-Mailer: Crescent Internet ToolPak OLE Mail Control v.1.0

Dear Sir,

I understand that you are a computer reseller/trader.
(If you are not, or are not interested in this message, do nothing, as we may have made a mistake.)
We respect your privacy.
As such, we will only follow up if you are interested and respond to our mail.

We are seeking importers for the following products, which we can supply
in bulk or in small quantities:
 
1.CD-R (Jewel Case)   
2.CD-R (Spindle)      
3.CD-R replicator (4pcs/hour, 50pcs tower)
4.Yamaha CDR400 (4x write, 6x read) recorder.
5.CD-RW as well as its recorder
6.PC Mother Board
7.PC RAMs
8.PC CPUs.
All products FOB Singapore.
Clients to specify freight forwarder.

 
Thank you very much. 
Have a nice day.

Best regards,

Manager,
Insas Networks.


From owner-parkbench-comm@CS.UTK.EDU Wed Jan  7 16:49:19 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA19963; Wed, 7 Jan 1998 16:49:19 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA17461; Wed, 7 Jan 1998 16:30:05 -0500 (EST)
Received: from timbuk.cray.com (timbuk-fddi.cray.com [128.162.8.102]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA17452; Wed, 7 Jan 1998 16:30:02 -0500 (EST)
Received: from ironwood.cray.com (root@ironwood-fddi.cray.com [128.162.21.36]) by timbuk.cray.com (8.8.7/CRI-gate-news-1.3) with ESMTP id PAA16817 for <parkbench-comm@CS.UTK.EDU>; Wed, 7 Jan 1998 15:30:03 -0600 (CST)
Received: from magnet.cray.com (magnet [128.162.173.162]) by ironwood.cray.com (8.8.4/CRI-ironwood-news-1.0) with ESMTP id PAA27253; Wed, 7 Jan 1998 15:30:00 -0600 (CST)
From: Charles Grassl <cmg@cray.com>
Received: by magnet.cray.com (8.8.0/btd-b3)
          id VAA26077; Wed, 7 Jan 1998 21:29:59 GMT
Message-Id: <199801072129.VAA26077@magnet.cray.com>
Subject: Low Level benchmarks
To: parkbench-comm@CS.UTK.EDU
Date: Wed, 7 Jan 1998 15:29:59 -0600 (CST)
X-Mailer: ELM [version 2.4 PL24-CRI-d]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit


-- 
Charles Grassl

From owner-parkbench-comm@CS.UTK.EDU Wed Jan  7 16:56:40 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA19981; Wed, 7 Jan 1998 16:56:40 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA17784; Wed, 7 Jan 1998 16:36:27 -0500 (EST)
Received: from timbuk.cray.com (timbuk-fddi.cray.com [128.162.8.102]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA17776; Wed, 7 Jan 1998 16:36:24 -0500 (EST)
Received: from ironwood.cray.com (root@ironwood-fddi.cray.com [128.162.21.36]) by timbuk.cray.com (8.8.7/CRI-gate-news-1.3) with ESMTP id PAA17087 for <parkbench-comm@cs.utk.edu>; Wed, 7 Jan 1998 15:36:24 -0600 (CST)
Received: from magnet.cray.com (magnet [128.162.173.162]) by ironwood.cray.com (8.8.4/CRI-ironwood-news-1.0) with ESMTP id PAA28449 for <parkbench-comm@cs.utk.edu>; Wed, 7 Jan 1998 15:36:22 -0600 (CST)
Received: from magnet by magnet.cray.com (8.8.0/btd-b3) via SMTP
          id VAA26107; Wed, 7 Jan 1998 21:36:21 GMT
Sender: cmg@cray.com
Message-ID: <34B3F553.167E@cray.com>
Date: Wed, 07 Jan 1998 15:36:19 -0600
From: Charles Grassl <cmg@cray.com>
Organization: Cray Research
X-Mailer: Mozilla 3.01SC-SGI (X11; I; IRIX 6.2 IP22)
MIME-Version: 1.0
To: parkbench-comm@CS.UTK.EDU
Subject: Low Level benchmark errors and differences
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

To:      Parkbench Low Level interests
From:    Charles Grassl

Subject: Low Level benchmark errors and differences

Date:    7 January, 1998


We should not produce or publish Parkbench Low level benchmark results
with the current suite of programs because the programs are inaccurate
and unreliable.  I ran the Low Level programs and compared the results
with the same metrics as recorded from other benchmark programs.
The differences range from less than 5% (acceptable) to a factor of 6,
which is unacceptable.

The differences, or "errors", are summarized in the table below.
They were obtained by comparing the metrics reported by the Parkbench
programs with the same metrics as measured by alternative programs.


       Table.  Differences in Low Level benchmark results
               for two systems.  System A is an Origin 2000.
               System B is a CRAY T3E.

                     System A          System B
                  Rinf  Startup    Rinf   Startup
        -----------------------------------------
        COMMS1    <10%     6x       <5%      6x
        COMMS2      2x     3x       <5%     <5%
        COMMS3     <5%              <5%
        POLY1      <5%    60%        2x     <5%
        POLY2      <5%    60%        2x     <5%
        POLY3       -      -         2x     80x


The Parkbench Low Level programs are occasionally requested for
benchmarking computer systems, but the results are usually rejected
because of their inaccuracy and unreliability.  If not rejected, they
cause confusion and consternation because the results do not agree
with other measurements of the same variables.  I emphasize that this
is not a case of seeking optimized or favorable results for a
computer system.  The problem is with the inaccuracy and unreliability
of the results.

The Low Level programs measure and report low level parameters.
Therefore their value is in accuracy and utility.  The programs do not
constitute definitions of the reported metrics and hence the results
should correlate with other measurements of the same variables.

The Low Level programs are obsolete and need to be replaced.  I have
written seven simple programs, with MPI and PVM versions, and offer them
as a replacement for the Low Level suite.

I strongly suggest that we delete or withdraw from distribution the
current Low Level suite.

From owner-parkbench-comm@CS.UTK.EDU Thu Jan  8 05:40:28 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id FAA01529; Thu, 8 Jan 1998 05:40:28 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id FAA00442; Thu, 8 Jan 1998 05:20:21 -0500 (EST)
Received: from sun1.ccrl-nece.technopark.gmd.de (sun1.ccrl-nece.technopark.gmd.de [193.175.160.67]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id FAA00380; Thu, 8 Jan 1998 05:20:13 -0500 (EST)
Received: from sgi7.ccrl-nece.technopark.gmd.de (sgi7.ccrl-nece.technopark.gmd.de [193.175.160.89]) by sun1.ccrl-nece.technopark.gmd.de (8.7/3.4W296021412) with SMTP id LAA28869; Thu, 8 Jan 1998 11:20:05 +0100 (MET)
Received: (from hempel@localhost) by sgi7.ccrl-nece.technopark.gmd.de (950413.SGI.8.6.12/950213.SGI.AUTOCF) id LAA24864; Thu, 8 Jan 1998 11:18:48 +0100
Date: Thu, 8 Jan 1998 11:18:48 +0100
From: hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)
Message-Id: <199801081018.LAA24864@sgi7.ccrl-nece.technopark.gmd.de>
To: parkbench-comm@CS.UTK.EDU
Subject: Low Level benchmark errors and differences
Cc: ritzdorf@ccrl-nece.technopark.gmd.de,
        zimmermann@ccrl-nece.technopark.gmd.de, clantwin@ess.nec.de,
        eckhard@ess.nec.de, lonsdale@ccrl-nece.technopark.gmd.de,
        tbeckers@ess.nec.de
Reply-To: hempel@ccrl-nece.technopark.gmd.de

To:		Parkbench Low Level interests
From:		Rolf Hempel

Subject:	Low Level benchmark errors and differences,
		Note from Charles Grassl of January 7th

Date:		8 January, 1998


Thank you, Charles, for your note on the Low Level benchmarks. It could
not have come at a better time, because at NEC we just recently ran into
problems with COMMS1.

This code had been specified by a customer as a test case in a current
procurement. When we ran COMMS1 with our current MPI library, the
results for rinfinity and latency were completely wrong. In particular,
the latency values were off by more than a factor of two, when compared
with other ping-pong test programs. The following turned out to be the
main reasons for the errors:

1. The performance model is completely inadequate. A linear dependency
   between time and message length, fitted to the measurements by
   least squares, is bound to fail in the presence of discontinuities
   caused by protocol changes. Most MPI implementations change
   protocols for different message lengths for an overall performance
   optimization.

2. To make things worse, the least-squares fit overweights the data points
   for very long messages, because the differences "model minus
   measurement" are largest there in absolute terms. The fitted line,
   therefore, more or less ignores the short message measurements.
   As a result, the latencies are completely up to chance.

3. The correction for internal measurement overhead (e.g., for
   subroutine calls) is programmed in a sloppy way, to say the least.
   We discovered several subroutine calls which were not
   taken into account, and the overhead is measured with low
   precision. For our implementation, this alone introduced a latency
   error of about 25%.
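Points 1 and 2 are easy to demonstrate numerically. The sketch below uses
synthetic numbers assumed for illustration (not the NEC measurements or the
COMMS1 code): timings that follow t = 10 + n/100 microseconds for short
messages, with a rendezvous-style protocol adding a fixed ~190 us above
1000 bytes. One unweighted straight-line fit over the whole range then
reports a startup time nowhere near the true 10 us:

```python
# Illustrative sketch (not COMMS1 itself): synthetic timings with a protocol
# switch at n = 1000 bytes show how an unweighted straight-line fit over the
# whole length range distorts the fitted startup time t0.

def ols_line(xs, ys):
    """Ordinary least-squares straight line y = a + b*x; returns (a, b)."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return (sy - b * sx) / m, b

lengths = [10, 100, 1000, 10_000, 100_000, 1_000_000]          # bytes
# True short-message behaviour: t = 10 + n/100 (microseconds);
# above 1000 bytes a protocol change adds ~190 us of fixed cost.
times = [10 + n / 100 if n <= 1000 else 200 + n / 100 for n in lengths]

t0_fitted, slope = ols_line(lengths, times)
# The long messages dominate the fit: t0_fitted comes out around 80 us,
# roughly eight times the true 10 us startup time.
```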
   
The result in our case was that, instead of the 13.5 usec latency
measured by the MPICH MPPTEST routine, COMMS1 initially reported some
28 usec. My colleague Hubert Ritzdorf then performed an interesting
experiment: he removed some optimization from our MPI library for
long messages, thus INCREASING the communication times for messages
longer than 128000 bytes, and not changing anything for shorter
messages. The resulting DROP in latency from 28 to under 22 usec
clearly shows how ridiculous the COMMS1 benchmark is.

Thus, I strongly agree with Charles in that the COMMS* benchmarks
must be removed from PARKBENCH. They don't help anybody, and they
only cause confusion on the side of customers and frustration on the
side of benchmarkers. Let's get rid of this long-standing nuisance as
quickly as possible.

Best regards,
 Rolf Hempel
------------------------------------------------------------------------
Rolf Hempel      (email: hempel@ccrl-nece.technopark.gmd.de)
Senior Research Staff Member
C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10,
53757 Sankt Augustin, Germany
Tel.: +49 (0) 2241 - 92 52 - 95
Fax:  +49 (0) 2241 - 92 52 - 99

From owner-parkbench-comm@CS.UTK.EDU Thu Jan  8 08:07:54 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA02383; Thu, 8 Jan 1998 08:07:53 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA05392; Thu, 8 Jan 1998 07:50:13 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA05383; Thu, 8 Jan 1998 07:50:03 -0500 (EST)
Received: from mordillo (p108.nas1.is4.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA03072; Thu, 8 Jan 98 12:48:32 GMT
Date: Thu,  8 Jan 98 12:10:55 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Re: Low Level benchmark errors and differences 
To: Charles Grassl  <cmg@cray.com>, parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <34B3F553.167E@cray.com> 
Message-Id: <Chameleon.884263257.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

I am in agreement with Charles and Rolf about the low-level codes.

We've known for some time that the codes are less than perfect, and
in some cases flawed. At the SC'97 Parkbench meeting it was mooted
that Parkbench should concentrate on producing, supporting, analysing
and recording Low-Level codes and results. If that is the case, then
we should certainly ensure that the codes we support are soundly
written and produce consistent and reliable results.

I certainly believe that a set of codes, akin to the low-level ones,
should be part of the Parkbench suite. Maybe this is a good time to
replace the current codes with those that Charles has produced !?

As a side issue, I think we should provide C versions of whatever
low-level codes we adopt.

Charles, I'd be interested in your thoughts on the codes that Pallas produce 
- ftp://ftp.pallas.de/pub/PALLAS/PMB/PMB10.tar.gz. These are C benchmark 
codes that run: 

PingPong - like comms1
PingPing - like comms2
Xover
Cshift
Exchange 
Allreduce
Bcast
Barrier - like synch1

Obviously, I wouldn't like to comment on how well written they are or
how reliable their results are, but I'm relatively impressed with them.
I also like the fact that they try to produce results for commonly used
MPI functions - cshift/exchange/etc. I've run the codes on NT boxes and
they appear to produce results close to what I would expect.

Regards

Mark


--- On Wed, 07 Jan 1998 15:36:19 -0600  Charles Grassl <cmg@cray.com> wrote:
> [Full text of Charles Grassl's message, reproduced above, omitted.]

---------------End of Original Message-----------------

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 01/08/98 - Time: 12:10:55
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Jan 12 16:02:28 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA26216; Mon, 12 Jan 1998 16:02:28 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA16631; Mon, 12 Jan 1998 15:38:05 -0500 (EST)
Received: from post.mail.demon.net (post-20.mail.demon.net [194.217.242.27]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA16588; Mon, 12 Jan 1998 15:37:38 -0500 (EST)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa2012292; 12 Jan 98 17:34 GMT
Message-ID: <X8YQ8DANPlu0Ewpp@minnow.demon.co.uk>
Date: Mon, 12 Jan 1998 17:33:01 +0000
To: hempel@ccrl-nece.technopark.gmd.de
Cc: parkbench-comm@CS.UTK.EDU, ritzdorf@ccrl-nece.technopark.gmd.de,
        zimmermann@ccrl-nece.technopark.gmd.de, clantwin@ess.nec.de,
        eckhard@ess.nec.de, lonsdale@ccrl-nece.technopark.gmd.de,
        tbeckers@ess.nec.de
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Re: Low Level benchmark errors and differences
In-Reply-To: <199801081018.LAA24864@sgi7.ccrl-nece.technopark.gmd.de>
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>

To: Rolf, Charles, Mark and others,

From: Roger 

I too am distressed to see the original COMMS1 code (written and tested
for message lengths only up to 10^4) is still being issued by Parkbench
and being used well outside its range of proven validity (message
lengths now typically up to 10^7 or even 10^8).

These problems were pointed out about one year ago by Charles and Ron,
and as a result I worked on the code and issued to the committee a
minimum set of changes to the current release that would solve many of
the problems. These involve replacing five existing routines and adding
two to the existing release. The routines involved have been
downloadable from my Web site since about 12 March 1997 and have been
used successfully at Westminster University in our work. 

The New COMMS1, as I called it, was the subject of two printed reports
to the May 1997 meeting of Parkbench and  further results were shown at
the Sept 1997 meeting. There were also extensive discussions in this
email group during 1997.

Unfortunately my simple fixes were not inserted into the Parkbench
release and as a result we are still getting a bad press from
benchmarkers. After all the effort I put into solving this problem a
year ago, I feel rather let down that my work was never used. If my
changes had been incorporated into the Parkbench benchmarks when they were
offered at least as an interim measure, I believe we could have avoided
much of the current bad publicity. 

I emphasise that the New COMMS1 was written as a minimum patch to the
existing release to solve an urgent problem in the simplest way. I am
not against a complete rethink of the low level benchmarks and now that
MPI has become a recognised standard, benchmarks timing the principal
software primitives of MPI would seem to be the most useful. Quite
possibly Charles's or Mucci's codes could be used.

However, I am still firmly convinced of the value of approximate
parametric representation of all the benchmark measurements based on a
simple performance model. Most of the existing low-level benchmarks were
written primarily to determine such parameters and hence include both
raw measurements and least squares curve fitting to obtain the
parameters. I have yet to see data that cannot be satisfactorily fitted
by 2 or 3 parameters, or two sets of 2-paras. And remember that I am
talking here about fitting ALL the measured data by some simple
formulae.    

After the decision of the May 1997 meeting to separate the raw
measurements from the parametric curve fitting, the curve fitting will
eventually become part of the "Parkbench Interactive Curve Fitting Tool"
(PICT). At present this applet can be used to produce a manual curve
fit, but eventually I will put up on my Web site a version in which the
least squares and 3-point buttons are active. But PICT as it is can now
be used manually to see how good or bad the 2-para and 3-para fits are.
Turn your browser to:

       http://www.minnow.demon.co.uk/pict/source/pict2a.html

and insert your raw data. I would be very interested to see what the NEC
data looks like.  

To answer some of Rolf's points: 

Rolf Hempel <hempel@ccrl-nece.technopark.gmd.de> writes
>
>1. The performance model is completely inadequate. A linear dependency
>   between time and message length, fitted to the measurements by
>   least squares, is bound to fail in the presence of discontinuities
>   caused by protocol changes. Most MPI implementations change
>   protocols for different message lengths for an overall performance
>   optimization.
>
Note that the original COMMS1 that you are using allows you to insert
one break point to take account of one major discontinuity. Have you
tried this?

In any case, it is sensible ALWAYS to make a breakpoint at, say, 100 or
1000 Byte; the short-message t_0 is then a good measure of startup, and
the long-message t_0 is of no interest and should be ignored. In this
way one is using the straight-line fit over a short range of lengths,
and the resulting t_0 should be a better estimate of latency because it
is derived from several measurements rather than from a single one
(e.g. the time for the shortest message) -- surely a better experimental
procedure. I emphasise that this procedure can be used now with the
original COMMS1 to get sensible results.
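
The breakpoint procedure can be sketched as follows (Python rather than
the suite's Fortran; the message lengths, times and the breakpoint are
invented for illustration):

```python
# Sketch of the breakpoint procedure: fit t = t0 + n/rinf separately
# below and above a chosen breakpoint, and take the short-message t0
# as the startup (latency) estimate.  All data values are synthetic.
def fit_linear(lengths, times):
    """Ordinary least-squares fit of t = t0 + n/rinf; returns (t0, rinf)."""
    k = len(lengths)
    mean_n = sum(lengths) / k
    mean_t = sum(times) / k
    slope = (sum((n - mean_n) * (t - mean_t) for n, t in zip(lengths, times))
             / sum((n - mean_n) ** 2 for n in lengths))
    return mean_t - slope * mean_n, 1.0 / slope

# Synthetic ping-pong data with a protocol change at the breakpoint:
short_n = [8, 64, 256, 512]                            # bytes
short_t = [20e-6 + n / 50e6 for n in short_n]          # 20 us startup, 50 MB/s
long_n = [4096, 65536, 1048576]
long_t = [100e-6 + n / 200e6 for n in long_n]          # different regime

t0_short, rinf_short = fit_linear(short_n, short_t)    # startup estimate
t0_long, rinf_long = fit_linear(long_n, long_t)        # intercept: ignore it
```

Here t0_short recovers the 20 microsecond startup from several
short-message timings at once, while t0_long is merely the intercept of
the second regime and says nothing about startup.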

If there are many small discontinuities or changes of protocol then I
expect your data is rather like that shown by Charles this time last year
and used as an example in PICT. In this case the 3-para fit may give
good results for your data as it did for Charles's.

>2. To make things worse, the least square fit overweighs the data points
>   for very long messages, because the differences "model minus
>   measurement" are largest there in absolute terms. The fitted line,
>   therefore, more or less ignores the short message measurements.
>   As a result, the latencies are completely up to chance.
>
This is absolutely true and was discovered to be the problem one year
ago. My solution, used in the New COMMS1, was and is to minimise the sum
of the squares of the relative (rather than absolute) error. If this is
done the values for short messages are not ignored in the way described,
and t_0 is held much closer to the time for the smallest message length.
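
The difference between the two error measures can be sketched as follows
(Python; the data values are invented, with a protocol change that pulls
the long messages off the short-message line; relative least squares is
implemented as weighted least squares with weights 1/t^2):

```python
# Minimise the sum of squared RELATIVE errors instead of absolute
# errors when fitting t = t0 + n/rinf.  Relative least squares is
# ordinary weighted least squares with weights 1/t_i**2.
def fit(lengths, times, relative=False):
    """Least-squares fit of t = t0 + n/rinf; returns (t0, rinf)."""
    w = [1.0 / t ** 2 if relative else 1.0 for t in times]
    Sw = sum(w)
    Sx = sum(wi * n for wi, n in zip(w, lengths))
    Sy = sum(wi * t for wi, t in zip(w, times))
    Sxx = sum(wi * n * n for wi, n in zip(w, lengths))
    Sxy = sum(wi * n * t for wi, n, t in zip(w, lengths, times))
    slope = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
    return (Sy - slope * Sx) / Sw, 1.0 / slope

lengths = [8, 64, 256, 1024, 8192, 65536, 1048576]     # bytes
times = [20e-6 + n / 100e6 if n < 8192 else            # 20 us startup
         200e-6 + n / 80e6 for n in lengths]           # slower long regime

t0_abs, _ = fit(lengths, times)                 # absolute errors (old COMMS1)
t0_rel, _ = fit(lengths, times, relative=True)  # relative errors (New COMMS1)
# t0_abs is dragged far from the 20 us startup by the long messages;
# t0_rel stays close to it.
```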

Note also that the 3-parameter fit provided by New COMMS1 can be fitted
exactly to the time for the shortest message, to the bandwidth for the
longest message, and to the bandwidth near the mid point. This is the
so-called 3-point fit, but it does require a third parameter.

Could you please email me the output file for the NEC from the original
COMMS1? I can then put this data through the New COMMS1 and see what two-
and three-parameter fits are produced.

Otherwise you could update your version of Parkbenchmarks with the 7
subroutines and rerun using New COMMS1. See the instructions at the end
of this email.
 
>28 usec. My colleague Hubert Ritzdorf then made an interesting
>experiment: he removed some optimization from our MPI library for
>long messages, thus INCREASING the communication times for messages
>longer than 128000 bytes, and not changing anything for shorter
>messages. The resulting DROP in latency from 28 to under 22 usec
>clearly shows how ridiculous the COMMS1 benchmark is.
>
Hubert's results are just what one would expect from minimising the
absolute error. I suspect you would not see this effect with New COMMS1
which does not over-emphasise the long message measurements.

Please remember that the t_0 reported by COMMS1 is not a measurement of
the time for any particular message length. It is the constant term in
the fitted curve:

                        t = t_0 + n/rinf

which is an approximation to ALL the measured data.

If you want to know the time, say for the smallest message length, then
that is listed in the table of lengths and times reported in the
benchmark output. If by latency you mean the time for the shortest
message (hopefully zero or 1 Byte), then the COMMS1 measurements of this
are in that table, not in t_0.

For those who missed my two earlier emailings on using the New COMMS1, I
copy my earlier email below:

Agenda Item : Plans for the next Release.
              --------------------------
Just a reminder that New COMMS1 as announced in my email to the 
committee of 16 Feb 1997, was designed as the minimum necessary 
changes to the existing release to solve the problems raised at
the beginning of the year. It involves new versions of 5 routines
and 2 new routines. In addition, the Makefiles need the 2 new 
routines added where appropriate. We have incorporated these 
changes at Westminster in the existing release without trouble.

I believe that these should be incorporated in the next release.

In summary:

New COMMS1

In directory:

http://www.minnow.demon.co.uk/Pbench/comms1/

The 5 Changed Routines:

(1) File COMMS1_1.F replaces the following file in the current release:

        ParkBench/Low_Level/comms1/src_mpi/COMMS1.f

(2) File COMMS1_1.INC replaces

        ParkBench/Low_Level/comms1/src_mpi/comms1.inc

(3) File ESTCOM_1.F replaces

        ParkBench/Low_Level/comms1/src_mpi/ESTCOM.f

(4) File LSTSQ_1.F replaces

        ParkBench/lib/Low_Level/LSTSQ.f

(5) File CHECK_1.F replaces

        Parkbench/lib/Low_Level/CHECK.f

The 2 New Routines:

(6) File LINERR_1.F add as

        ParkBench/lib/Low_Level/LINERR.f

(7) File VPOWER_1.F add as

        ParkBench/lib/Low_Level/VPOWER.f

Best wishes to you all

Roger
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Tue Jan 13 08:38:07 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA17513; Tue, 13 Jan 1998 08:38:07 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA03191; Tue, 13 Jan 1998 08:20:10 -0500 (EST)
Received: from sun1.ccrl-nece.technopark.gmd.de (sun1.ccrl-nece.technopark.gmd.de [193.175.160.67]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id IAA03184; Tue, 13 Jan 1998 08:20:07 -0500 (EST)
Received: from sgi7.ccrl-nece.technopark.gmd.de (sgi7.ccrl-nece.technopark.gmd.de [193.175.160.89]) by sun1.ccrl-nece.technopark.gmd.de (8.7/3.4W296021412) with SMTP id OAA04953; Tue, 13 Jan 1998 14:19:47 +0100 (MET)
Received: (from hempel@localhost) by sgi7.ccrl-nece.technopark.gmd.de (950413.SGI.8.6.12/950213.SGI.AUTOCF) id OAA02202; Tue, 13 Jan 1998 14:18:30 +0100
Date: Tue, 13 Jan 1998 14:18:30 +0100
From: hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)
Message-Id: <199801131318.OAA02202@sgi7.ccrl-nece.technopark.gmd.de>
To: roger@minnow.demon.co.uk
Subject: COMMS1 Benchmark
Cc: tbeckers@ess.nec.de, lonsdale@ccrl-nece.technopark.gmd.de,
        eckhard@ess.nec.de, clantwin@ess.nec.de, parkbench-comm@CS.UTK.EDU
Reply-To: hempel@ccrl-nece.technopark.gmd.de

Dear Roger,

thank you for your note on the COMMS1 benchmark. We have not yet tried
the New COMMS1 code with our MPI library, so I cannot comment on its
accuracy. I would just like to answer some of the issues you raised
in your mail.

Of course we have seen that in COMMS1 you can select a transition point
between a short and a long model. For this choice, however, you have
to be able to change the input data. In our case (a benchmark suite
used in a procurement) our customer had provided the input dataset,
and we were not allowed to change it. So, the only way for us to correct
the results was to tune our MPI library to fit the benchmark
program. I don't think that this is what you had in mind when you
wrote COMMS1.

You didn't comment on the inaccuracies we found in the raw measurements.
We have run several other ping-pong benchmarks, for example the
MPPTEST routine of MPICH, and they consistently give better latencies
for short messages (difference approx. 25%). As I explained in my
previous mail, we found the reason to be an improper correction for
measurement overheads in COMMS1. Thus, the raw data are flawed,
and this cannot be resolved by any parameter fitting. This is also the
reason that I hesitate to send you the raw data reported by COMMS1 on
our machine.

I agree with you that it would be nice to have a few parameters to
characterize the performance of any given system. The values for
"n1/2" and "rinfinity" have been quite successful for vector arithmetic
operations. The situation is, however, much more complicated for
communication operations.

As an example, let's take the famous ping-pong benchmark. We already
discussed the problem of discontinuities caused by protocol changes.
If you want to do a parameter fitting, the only reasonable solution
seems to me that your test program automatically detects such points
and handles the different protocols separately. If you leave the
selection to an input parameter, you will inevitably run into the
problem I discussed above.

Even if you solve this problem, there remain many others. In modern
(i.e. highly optimized) MPI implementations, the performance of a
ping-pong operation crucially depends on the status of the two processes
involved. Is the receiving process already waiting for the message?
In a ping-pong, it usually is. This can make a huge difference!
The performance can also depend on the global number of processes
active in the application. Not only do search lists in communication
progress engines become shorter if there are fewer processes, but some
implementers even went as far as writing special code for the case
where you just have two processes. Ping-pong codes such as COMMS1
almost always just use two communicating processes, so they measure the
best case. Another effect which is too often ignored is that messages
can interfere with each other (both at the hardware and software level)
if they are sent at the same time between different process pairs.
All those effects combined cause a substantial difference between
ping-pong results and measurements in real applications. In this
situation the apparent precision of performance parameters can be
quite misleading.

If I want to judge the quality of an MPI implementation, I don't
trust best-fit parameters so much. For the ping-pong code, I just
look at a graphical representation of time versus message length for
short messages, and another of bandwidth versus message length for
long messages. This way I can study discontinuities and other minor
effects in detail. And then I take real applications and measure the
communication times there. You will often find surprising results
which you have never seen in a ping-pong benchmark.

Best wishes,
  Rolf
------------------------------------------------------------------------
Rolf Hempel      (email: hempel@ccrl-nece.technopark.gmd.de)
Senior Research Staff Member
C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10,
53757 Sankt Augustin, Germany
Tel.: +49 (0) 2241 - 92 52 - 95
Fax:  +49 (0) 2241 - 92 52 - 99

From owner-parkbench-comm@CS.UTK.EDU Thu Jan 15 14:17:57 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id OAA00690; Thu, 15 Jan 1998 14:17:56 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA23858; Thu, 15 Jan 1998 13:55:08 -0500 (EST)
Received: from timbuk.cray.com (timbuk-fddi.cray.com [128.162.8.102]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id NAA23830; Thu, 15 Jan 1998 13:54:57 -0500 (EST)
Received: from ironwood.cray.com (root@ironwood-fddi.cray.com [128.162.21.36]) by timbuk.cray.com (8.8.7/CRI-gate-news-1.3) with ESMTP id LAA11159 for <parkbench-comm@cs.utk.edu>; Thu, 15 Jan 1998 11:11:42 -0600 (CST)
Received: from magnet.cray.com (magnet [128.162.173.162]) by ironwood.cray.com (8.8.4/CRI-ironwood-news-1.0) with ESMTP id LAA08650 for <parkbench-comm@cs.utk.edu>; Thu, 15 Jan 1998 11:11:41 -0600 (CST)
From: Charles Grassl <cmg@cray.com>
Received: by magnet.cray.com (8.8.0/btd-b3)
          id RAA07227; Thu, 15 Jan 1998 17:11:40 GMT
Message-Id: <199801151711.RAA07227@magnet.cray.com>
Subject: Low Level Benchmarks
To: parkbench-comm@CS.UTK.EDU
Date: Thu, 15 Jan 1998 11:11:39 -0600 (CST)
X-Mailer: ELM [version 2.4 PL24-CRI-d]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit


To:      Parkbench interests
From:    Charles Grassl
Subject: Low Level benchmarks

Date:    15 January, 1998


Mark, thank you for pointing us to the PMB benchmark.  It is well written
and coded, but has some discrepancies and shortcomings.  My comments
lead to suggestions and recommendations regarding low level communication
benchmarks.

First, in program PMB the PingPong tests are twice as fast (in time)
as the tests at corresponding message lengths in the PingPing tests (as
run on a CRAY T3E). The calculation of time and bandwidth is off by a
factor of two in one of the two programs.

This error can be fixed by recording, using and reporting the actual
time, the amount of data sent, and their ratio. That is, the time should
not be divided by two in order to correct for a round trip. The recorded
time is for a round-trip message, and is not precisely the time for
two messages. Half the round-trip message-passing time, as reported in
the PMB tests, is not the time for a single message and should not be
reported as such. This same erroneous technique is used in the COMMS1
and COMMS2 benchmarks. (Is Parkbench responsible for propagating
this incorrect methodology?)
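
The reporting convention suggested here might look as follows (a Python
sketch; the function name and the 20 ms round-trip figure are invented,
and no real message passing takes place):

```python
# Record and report the actual round-trip time, the amount of data
# moved, and their ratio, without dividing the time by two.
def report_roundtrip(nbytes, roundtrip_seconds):
    """Report exactly what was measured: one round trip of 2*nbytes total."""
    total_bytes = 2 * nbytes                     # out and back
    return {
        "message_bytes": nbytes,
        "roundtrip_seconds": roundtrip_seconds,  # NOT halved
        "total_bytes_moved": total_bytes,
        "roundtrip_bandwidth": total_bytes / roundtrip_seconds,
    }

r = report_roundtrip(1_000_000, 0.02)   # a 1 MB message, 20 ms round trip
```

Halving the round-trip time instead would silently double the apparent
one-way bandwidth, which is exactly the factor-of-two discrepancy
discussed above.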

In program PMB, the testing procedure performs a "warm up".  This
procedure is a poor testing methodology because it discards important
data.  Testing programs such as this should record all times and calculate
the variance and other statistics in order to perform error analysis.

Program PMB does not measure contention or allow extraction of network
contention data.  Tests "Allreduce" and "Bcast" and several others
stress the inter-PE communication network with multiple messages,
but it is not possible to extract information about the contention from
these tests.  The MPI routines for Allreduce and Bcast have algorithms
which change with respect to number of PEs and message lengths. Hence,
without detailed information about the specific algorithms used, we cannot
extract information about network performance or further characterize
the inter-PE network.

Basic measurements must be separated from algorithms.  Tests PingPong,
PingPing, Barrier, Xover, Cshift and Exchange are low level.  Tests
Allreduce and Bcast are algorithms.  The algorithms Allreduce and Bcast
need additional (algorithmic) information in order to be described in
terms of the basic level benchmarks.


With respect to low level testing, the round trip exchange of messages,
as per PingPing and PingPong in PMB or COMMS1 and COMMS2, is not
characteristic of the lowest level of communication.  This pattern
is actually rather rare in programming practice.  It is more common
for tasks to send single messages and/or to receive single messages.
In this scheme, messages do not make a round trip and there are not
necessarily caching or other coherency effects.

The single message passing is a distinctly different case from that
of round trip tests.  We should be worried that the round trip testing
might introduce artifacts not characteristic of actual (low level) usage.
We need a better test of basic bandwidth and latency in order to measure
and characterize message passing performance.


Here are suggestions and requirements, in an outline form, for a low
level benchmark design:



    I. Single and double (bidirectional) messages.

       A. Test single messages, not round trips.
         1. The round trip test is an algorithm and a pattern.  As
            such it should not be used as the basic low level test of
            bandwidth.
         2. Use direct measurements where possible (which is nearly
            always).  For experimental design, the simplest method is
            the most desirable.
         3. Do not perform least squares fits A PRIORI.  We know that
            the various message passing mechanisms are not linear or
            analytic because different mechanisms are used for different
            message sizes.  It is not necessarily known beforehand
            where this transition occurs.  Some computer systems have
            more than two regimes and their boundaries are dynamic.
         4. Our discussion of least squares fitting is losing track
            of experimental design versus modeling.  For example, the
            least squares parameter for t_0 from COMMS1 is not a better
            estimate of latency than actual measurements (assuming
            that the timer resolution is adequate).  A "better" way to
            measure latency is to perform additional DIRECT measurements,
            repetitions or otherwise, and hence decrease the statistical
            error.  The fitting as used in the COMMS programs SPREADS
            error.  It does not reduce error and hence it is not a
            good technique for measuring such an important parameter
            as latency.

       B. Do not test zero length messages.  Though valid, zero length
          messages are likely to take special paths through library
          routines.  This special case is not particularly interesting or
          important.
          1. In practice, the most common and important message size is 64
             bits (one word).  The time for this message is the starting
             point for bandwidth characterization.

       C. Record all times and use statistics to characterize the message
          passing time.  That is, do not prime or warm up caches
          or buffers.  Timings for unprimed caches and buffers give
          interesting and important bounds.  These timings are also the
          nearest to typical usage.  
          1. Characterize message rates by a minimum, maximum, average
             and standard deviation.

       D. Test inhomogeneity of the communication network.  The basic
          message test should be performed for all pairs of PEs.
   

   II. Contention.

       A. Measure network contention relative to all PEs sending and/or
          receiving messages.

       B. Do not use high level routines where the algorithm is not known.
          1. With high level algorithms, we cannot deduce which component
             of the timing is attributable to the "operation count"
             and which is attributable to the actual system (hardware)
             performance.


  III. Barrier.

       A. Simple test of barrier time for all numbers of processors.




Additionally, the suite should be easy to use.  C and Fortran programs
for direct measurements of message passing times are short and simple.
These simple tests are of order 100 lines of code and, at least in
Fortran 90, can be written in a portable and reliable manner.
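
Such a direct-measurement kernel might be sketched as follows (Python
rather than Fortran 90; a local memory copy stands in for a real message
transfer, so the numbers it produces are not communication times -- only
the methodology is the point):

```python
# Time every repetition, including the first (no warm-up discard), then
# summarise the whole sample with min, max, mean and standard deviation.
import statistics
import time

def measure(nbytes, reps=50):
    src = bytearray(nbytes)
    samples = []
    for _ in range(reps):                        # record ALL times
        start = time.perf_counter()
        dst = bytes(src)                         # stand-in for one message
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "max": max(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
        "samples": samples,                      # keep the raw data too
    }

stats = measure(8)                               # one 64-bit word
```

Keeping the first (unprimed) timing in the sample is deliberate: it
bounds the cold-start case, and comparing it with the later repetitions
is itself informative.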

The current Parkbench low level suite does not satisfy the above
requirements.  It is inaccurate, as pointed out by previous letters, and
uses questionable techniques and methodologies.  It is also difficult to
use, witness the proliferation of files, patches, directories, libraries
and the complexity and size of the Makefiles.

This Low Level suite is a burden for those who are expecting a tool to
evaluate and investigate computer performance.  The suite is becoming
a liability for our group.  As such, it should be withdrawn from
distribution.

I offer to write, test and submit a new set of programs which satisfy
most of the above requirements.


Charles Grassl
SGI/Cray Research
Eagan, Minnesota  USA

From owner-parkbench-comm@CS.UTK.EDU Fri Jan 16 09:12:18 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id JAA11774; Fri, 16 Jan 1998 09:12:18 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA16130; Fri, 16 Jan 1998 08:53:07 -0500 (EST)
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id IAA16123; Fri, 16 Jan 1998 08:53:06 -0500 (EST)
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id IAA01963; Fri, 16 Jan 1998 08:52:17 -0500 (EST)
Date: Fri, 16 Jan 1998 08:52:17 -0500 (EST)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199801161352.IAA01963@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Low Level Benchmarks
In-Reply-To: Mail from 'Charles Grassl <cmg@cray.com>'
      dated: Thu, 15 Jan 1998 11:11:39 -0600 (CST)
Cc: worley@haven.EPM.ORNL.GOV, ritzdorf@ccrl-nece.technopark.gmd.de,
        zimmermann@ccrl-nece.technopark.gmd.de, clantwin@ess.nec.de,
        eckhard@ess.nec.de, lonsdale@ccrl-nece.technopark.gmd.de,
        tbeckers@ess.nec.de

I have not been paying close attention to the current Low Level communication
suite discussions, having confidence in the capabilities and resolve of the
current participants, but I have decided to muddy the waters with a few
personal observations.

1) I do not use the Low Level suite in my own performance-related work. 
   I find that the interpretation of results is much easier if the
   experiments are designed to answer (my) specific performance questions.
   Producing numbers that are accurate enough and whose experiments are
   well-enough understood to be used to answer arbitrary performance
   questions is much more difficult. 

2) It may be time to revisit the goals of the Low Level suite. There are
   two obvious extremes.

   a) Determine some (hopefully representative) metrics of point-to-point
      communication performance, concentrating on making the measurements
      fair when comparing across platforms, but not requiring that the
      underlying architecture parameters be derivable from these numbers,
      or that they agree exactly with any other group's measurements.
      In this situation, a two (or more) parameter model fit to the data can
      be useful, if only as a shorthand for the raw data, but the model
      should not be expected to explain the data.

   b) Characterize the low level communication performance for each
      platform. Charles Grassl's latest recommendation is a first step in
      that direction. As a personal aside, I attempted such an exercise
      a few years ago (on the T3D, looking at the effect of common usage
      patterns on performance, not just ping-pong between nearest
      neighbors). I quickly became swamped by the amount of data and by the
      number of ways of presenting it (and the work was never written up). I
      realize now that my problem was trying to address too many evaluation
      questions simultaneously. 

      In addition to the large amount of data required, an accurate
      characterization is likely to require more platform-specific
      elements, and will continue to evolve as new machines are added, in
      order to be as fair to the new machines as it is to the old ones.
      (The two parameter models are very accurate for some of the previous
       generation of homogeneous message-passing platforms.)
    

    In case my sympathies are not clear, I prefer to revisit and fix the
    current suite, "dumbing it down", if only in presentation, making it clear
    what it does and does not measure. In my own work, the
    point-to-point measurements are only for establishing a general
    performance baseline. The important measures are the performance
    observed in the kernel and full application codes. The baseline
    measurements are simply to assess the "peak achievable" communication 
    performance.

    While a full characterization is an important thing to do, I do not
    believe that this group has the manpower, resources, or staying power to
    do it right. At one time in the past, we proposed to simply be a
    clearinghouse for the best of the performance measurement codes. If
    Charles wants to write and submit such an extensive low level suite, we
    can consider it, but in the meantime we should address the problems in
    the current suite, and not claim more than is appropriate. In particular,
    we should make sure that the customer does not become concerned when the
    vendor-stated latency and bandwidth do not match the PARKBENCH reported
    values. A discrepancy does not necessarily mean that someone is lying,
    simply that different aspects are being measured. But we should also be
    sure that intermachine comparisons using PARKBENCH measurements are
    valid; otherwise, they serve no purpose.

Pat Worley


PS.  - I may be on the fringe, but all my codes are written using variants
       of SWAP and SENDRECV, and most of the codes I see can be written in
       such a fashion (and could gain something from it). So, ping-pong and
       ping-ping are not irrelevant to me. 

PPS. - Of course the real reason for using ping-pong is the difficulty in
       measuring the time for one-way messaging. I was not aware that this
       was a solved problem, at least at the MPI or PVM level. Perhaps
       system instrumentation can answer it, but I didn't know that
       portable measurement codes could be guaranteed to do so across the
       different platforms.


From owner-parkbench-comm@CS.UTK.EDU Fri Jan 16 10:57:55 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA13381; Fri, 16 Jan 1998 10:57:55 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA20483; Fri, 16 Jan 1998 10:38:52 -0500 (EST)
Received: from sun1.ccrl-nece.technopark.gmd.de (sun1.ccrl-nece.technopark.gmd.de [193.175.160.67]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id KAA20468; Fri, 16 Jan 1998 10:38:45 -0500 (EST)
Received: from sgi7.ccrl-nece.technopark.gmd.de (sgi7.ccrl-nece.technopark.gmd.de [193.175.160.89]) by sun1.ccrl-nece.technopark.gmd.de (8.7/3.4W296021412) with SMTP id QAA09438; Fri, 16 Jan 1998 16:38:41 +0100 (MET)
Received: (from hempel@localhost) by sgi7.ccrl-nece.technopark.gmd.de (950413.SGI.8.6.12/950213.SGI.AUTOCF) id QAA04930; Fri, 16 Jan 1998 16:37:14 +0100
Date: Fri, 16 Jan 1998 16:37:14 +0100
From: hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)
Message-Id: <199801161537.QAA04930@sgi7.ccrl-nece.technopark.gmd.de>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Low Level Benchmarks
Cc: tbeckers@ess.nec.de, lonsdale@ccrl-nece.technopark.gmd.de,
        eckhard@ess.nec.de, clantwin@ess.nec.de,
        zimmermann@ccrl-nece.technopark.gmd.de,
        ritzdorf@ccrl-nece.technopark.gmd.de,
        hempel@ccrl-nece.technopark.gmd.de
Reply-To: hempel@ccrl-nece.technopark.gmd.de

I would like to offer some remarks on the notes by Charles Grassl and
Pat Worley on the problem of low-level communication benchmarks.

As Pat pointed out, the ping-pong benchmark was invented because
generally there is no global clock with which one could measure the time
for a single message. Everybody knows that this is not a perfect solution,
and in my previous mail I already explained some aspects of why
ping-pong results can differ substantially from times found in real
applications. So, I think we will have to use ping-pong tests in
the future, with the caveat that they only measure a very special case
of message-passing. If Charles knows a way to measure single messages,
I would like to learn about it.

In most other points I agree with Charles. I'm strongly convinced that
the COMMS* routines are obsolete and should be replaced with something
reasonable. In particular, the current routines are far too complicated
to use, and give completely meaningless results. Therefore, I think one
should not even try to correct the COMMS* routines, especially as there
are already better alternatives available. One example is the PMB suite
of PALLAS. It is relatively easy to use, but the documentation should
provide more information than the internal calling tree given in the
README file. What is missing is a precise definition of the underlying
measuring methodology.

I strongly prefer the output of timing tables (perhaps translated into
good graphical representations) over crude parametrizations like the
ones in the COMMS* benchmarks. Those can only frustrate the experts
and confuse all other people.

As to the definition of latency, Charles is right in saying that zero
byte messages are dangerous because they often use special algorithms.
The straightforward solution to use 1 byte messages instead is bad
because usually messages are sent as multiples of 4 or 8 bytes, and for
other message lengths some overhead by additional copying or even
subroutine calls may be introduced. Since the lengths of most real
messages are multiples of 4 or 8 bytes, I support Charles' proposal to
measure the time for an 8 byte message and call it the latency.

I think the warm-up phase before the actual benchmarking is important
in order not to smear out initialization overheads over some number of
messages. The time for the first ping-pong (or other operation), 
however, should be measured and compared with the time found for the
following operations.

I very much welcome Charles Grassl's kind offer to write a new benchmark
suite. Perhaps there are even other suites available which could also
be candidates for adoption by PARKBENCH. This forum has meanwhile become
quite well known, which gives it considerable responsibility.
PARKBENCH's choice of benchmark programs influences procurements of new
machines world-wide, and the availability of a good set of low level
benchmarks could give PARKBENCH a good reputation. I'm afraid that the
current set of routines has the opposite effect.

- Rolf Hempel
------------------------------------------------------------------------
Rolf Hempel      (email: hempel@ccrl-nece.technopark.gmd.de)
Senior Research Staff Member
C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10,
53757 Sankt Augustin, Germany
Tel.: +49 (0) 2241 - 92 52 - 95
Fax:  +49 (0) 2241 - 92 52 - 99

From owner-parkbench-comm@CS.UTK.EDU Fri Jan 16 12:46:04 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA14801; Fri, 16 Jan 1998 12:46:04 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA27007; Fri, 16 Jan 1998 12:29:03 -0500 (EST)
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id MAA27000; Fri, 16 Jan 1998 12:29:01 -0500 (EST)
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id MAA02149; Fri, 16 Jan 1998 12:29:01 -0500 (EST)
Date: Fri, 16 Jan 1998 12:29:01 -0500 (EST)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199801161729.MAA02149@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Low Level Benchmarks
In-Reply-To: Mail from 'hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)'
      dated: Fri, 16 Jan 1998 16:37:14 +0100
Cc: worley@haven.EPM.ORNL.GOV

	In most other points I agree with Charles. I'm strongly convinced that
	the COMMS* routines are obsolete and should be replaced with something
	reasonable. 

I have no problem with this. As I indicated, I have no experience with these.

	What is missing is a precise definition of the underlying measuring
        methodology. 

Perhaps this is the point that I was trying to make. Not only must the
codes be easy to use, but the results should be easy to interpret.
Every code should have a simple description of what it is measuring, what
the data can be used for (and what it shouldn't be used for), and how to use
the data. 

PARKBENCH needs to provide guidance on what data to collect, not just
carefully crafted benchmark codes. And we need to describe clearly what 
low level communication tests are good for. For example,
I have problems with low level contention tests. Understanding hotspots
is an interesting exercise, but the connection to "real" codes
is more subtle. Do we stress test, look at contention for given
algorithms/global operators (and which algorithms), use some standard
workload characterization as the background job, ...? For any given
performance question, what should be used may be clear, but it is
difficult to decide this a priori. A simultaneous send/receive stress
test may very well be
something interesting to present, but we also need to be able to explain
why (because it is typical in synchronous global communication operations?).

In summary, I would like to see a prioritized list of what low level
information is worth collecting, and why. We can then use this to choose or
generate codes to do the testing. I apologize for being lazy. This may have
already been laid out in the original ParkBench document, but I never worried
about the low level tests before and don't have a copy of the document in
front of me. 

Pat Worley





From owner-parkbench-comm@CS.UTK.EDU Fri Jan 16 13:45:53 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA15447; Fri, 16 Jan 1998 13:45:52 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA29375; Fri, 16 Jan 1998 13:15:58 -0500 (EST)
Received: from c3serve.c3.lanl.gov (root@c3serve-f0.c3.lanl.gov [128.165.20.100]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id NAA29368; Fri, 16 Jan 1998 13:15:55 -0500 (EST)
Received: from risc.c3.lanl.gov (risc.c3.lanl.gov [128.165.21.76]) by c3serve.c3.lanl.gov (8.8.5/1995112301) with ESMTP id LAA04436 for <parkbench-comm@cs.utk.edu>; Fri, 16 Jan 1998 11:16:08 -0700 (MST)
Received: from localhost (hoisie@localhost) by risc.c3.lanl.gov (950413.SGI.8.6.12/c93112801) with SMTP id LAA13115 for <parkbench-comm@CS.UTK.EDU>; Fri, 16 Jan 1998 11:14:30 -0700
Date: Fri, 16 Jan 1998 11:14:30 -0700 (MST)
From: Adolfy Hoisie <hoisie@c3serve.c3.lanl.gov>
To: parkbench-comm@CS.UTK.EDU
Subject: Low Level Benchmarks
Message-ID: <Pine.SGI.3.95.980116105256.11971B-100000@risc.c3.lanl.gov>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII


Just to amplify some of the numerous excellent points made by Pat and
Charles and Rolf, the emphasis of the Parkbench group, as I see it, should
be on defining the methodology for benchmarking at this level. A string of
numbers says very little about machine performance in the absence of a
solid, scientifically defined basis for the programs used for
benchmarking.
COMMS is obsolete in its methodology, its coding, and its generation and
analysis of results. I used it quite some time ago, only to reach the
conclusions above. Instead, I have always chosen to write my own
benchmarking programs in order to extract meaningful data for the
applications I was working on.
I would like to see the debate heading towards what it is that we need
to measure in a suite of general use that is applicable to machines of
interest. For example, very little or no attention is being paid to
benchmarking DSM architectures, where quite a few architectural
parameters become harder to define and subtler to interpret. This
includes, but is not limited to, message-passing characterization on
these architectures.

Adolfy    

======================================================================
Adolfy Hoisie                  \        Los Alamos National Laboratory
                                \Scientific Computing, CIC-19, MS B256
hoisie@lanl.gov                  \           Los Alamos, NM  87545 USA
                                  \                Phone: 505-667-5216
http://www.c3.lanl.gov/~hoisie/hoisie.html           FAX: 505-667-1126


From owner-parkbench-comm@CS.UTK.EDU Sun Jan 18 07:38:42 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id HAA20627; Sun, 18 Jan 1998 07:38:42 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA21662; Sun, 18 Jan 1998 07:28:22 -0500 (EST)
Received: from post.mail.demon.net (post-10.mail.demon.net [194.217.242.154]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA21655; Sun, 18 Jan 1998 07:28:20 -0500 (EST)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa1002926; 18 Jan 98 12:25 GMT
Message-ID: <IWo32HA0Rfw0Ew5i@minnow.demon.co.uk>
Date: Sun, 18 Jan 1998 12:24:20 +0000
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Low Level Benchmarks
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>


To: the low-level discussion group

From: Roger

I comment below on recent emailings on this topic which arrived on
16 Jan 1998.

Pat Worley writes:

>2) It may be time to revisit the goals of the Low Level suite. There
>   are two obvious extremes.
>
>   a) Determine some (hopefully representative) metrics of point-to-point
>      communication performance, concentrating on making the
>      measurements
>      SNIP
>      In this situation, a two (or more) parameter model fit to the 
>      data can be useful, if only as a shorthand for the raw data, 
>      but the model should not be expected to explain the data.

This is of course what COMMS1 sets out to do. But please, when judging 
this point, use the New COMMS1 revised code that DOES give much more 
sensible answers in difficult cases. Please do not base your opinions on 
results from the Original COMMS1 code that is still unfortunately being 
issued by Parkbench. Instructions for getting the new code were given in 
my email to this group on 12 Jan 1998. 

>     (The two parameter models are very accurate for some of the 
>      previous generation of homogeneous message-passing platforms.)

It is nice to have confirmation of this from an independent source.
In addition, the 3-parameter model is available in New COMMS1 for cases 
where the 2-parameter fit fails.

>    In case my sympathies are not clear, I prefer to revisit and fix 
>    the current suite, "dumbing it down", if only in presentation, 
>    making it clear what it does and does not measure. 
 
Again this was my objective in writing the New COMMS1 as a minimum fix
to the existing Original COMMS1. However, I don't think I would call
this "Dumbing Down".

In fact New COMMS1 is a "Smartening Up" of the benchmark because it
provides a 3-parameter fit for those cases for which the 2-parameter fit
fails. It also reports the key spot values: the time for the shortest
message (which Charles and Rolf want to call the latency) and the
bandwidth for the longest message (this could equally well be the
maximum measured bandwidth). It also compares the fitted values with
measured values at these key points. The fit formulae are also given in
the output for completeness.

Please note that COMMS1 has always reported ALL the measured lengths
and times in the output file as the basic data, and ALL spot bandwidths
were printed to the screen as measured, and could be captured in a file
if required. In New COMMS1 the spot bandwidths are more conveniently
included in the standard output file, as they should have been in the
first place.

Unfortunately the above additions make the new output file more complex
(which I am not happy about). An example of New COMMS1 output is
attached at the end of this email. 

>PPS. - Of course the real reason for using ping-pong is the difficulty 
>       in measuring the time for one-way messaging. I was not aware  
>       that this was a solved problem, at least at the MPI or PVM 
>       level. Perhaps system instrumentation can answer it, but I 
>       didn't know that portable measurement codes could be guaranteed
>       to do so across the different platforms.

Exactly so.

*******************************

Rolf Hempel writes:

>of message-passing. If Charles knows a way to measure single messages,
>I would like to learn about it.

Me too.

>In most other points I agree with Charles. I'm strongly convinced that
>the COMMS* routines are obsolete and should be replaced with something
>reasonable. In particular, the current routines are far too complicated
>to use, and give completely meaningless results. Therefore, I think one

Please base your judgement on the results from New COMMS1, which has a
much more satisfactory fitting procedure (see the examples in the PICT
tool mentioned below). I believe that the revised program New COMMS1
gives reasonable results and is not obsolete.

>README file. What is missing is a precise definition of the underlying
>measuring methodology.

In contrast, the methodology of the COMMS1 curve fitting is given in the
Parkbench Report and in detail in my book "The Science of Computer 
Benchmarking", see: 

              http://www.siam.org/catalog/mcc07/hockney.htm 

>I strongly prefer the output of timing tables (perhaps translated in
>good graphical representations) over crude parametrizations like the
>ones in the COMMS* benchmarks. Those can only frustrate the experts
>and confuse all other people.

You seem to have failed to notice that both the Original COMMS1 and the
New COMMS1 report the timing table as the FIRST part of their output
files.

Further, a good graphical representation is available using the database
tool from Southampton and my own PICT tool (see below).

The COMMS1 fitting procedure is not crude. On the contrary, it uses
least-squares fitting of a performance model that is quite satisfactory
for a lot of data. In minimising relative rather than absolute error,
New COMMS1 spreads the error in a much more satisfactory way and allows
the fitting to be used over a much longer range of message lengths.
Furthermore, where the 2-parameter model is unsuitable, New COMMS1
provides a 3-parameter model which fits the Cray T3E (Charles's data of
17 Dec 1996) very well. I don't think one can call all this crude.
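To see concretely what minimising relative rather than absolute error involves, here is a small sketch (in Python, written for this discussion with made-up names; New COMMS1 itself is Fortran). For the linear-time model t(n) = (n + nhalf)/rinf, minimising the summed squared relative error is ordinary weighted least squares with weights 1/t^2, so the parameters have a closed form:

```python
# Illustration only (not the Fortran in New COMMS1) of fitting
# t(n) = (n + nhalf)/rinf by minimising the summed squared RELATIVE error,
# i.e. weighted least squares with weights 1/t_i^2, solved in closed form.

def fit_two_parameter(lengths, times):
    """Return (rinf, nhalf) from t = a + b*n, where a = nhalf/rinf, b = 1/rinf."""
    w = [1.0 / t ** 2 for t in times]   # relative-error weights
    Sw = sum(w)
    Sx = sum(wi * n for wi, n in zip(w, lengths))
    Sy = sum(wi * t for wi, t in zip(w, times))
    Sxx = sum(wi * n * n for wi, n in zip(w, lengths))
    Sxy = sum(wi * n * t for wi, n, t in zip(w, lengths, times))
    det = Sw * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det     # intercept = nhalf/rinf
    b = (Sw * Sxy - Sx * Sy) / det      # slope = 1/rinf
    return 1.0 / b, a / b               # rinf in B/s, nhalf in B

# Noise-free synthetic data generated from rinf = 2.666E+08 B/s and
# nhalf = 4816 B should be recovered exactly by the fit.
lengths = [8.0, 1.0e2, 1.0e3, 1.0e4, 1.0e5, 1.0e6]
times = [(n + 4816.0) / 2.666e8 for n in lengths]
rinf, nhalf = fit_two_parameter(lengths, times)
```

With real timings the fit is of course not exact, and the RMS relative error reported alongside the parameters shows how well the model summarises the data.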

To see how well the 2- and 3-parameter fits produced by New COMMS1 match 
recent data, check out the examples on my Parkbench Interactive Curve 
Fitting Tool (PICT) at:

          http://www.minnow.demon.co.uk/pict/source/pict2a.html

For the most part these show that two parameters fit the data surprisingly 
well. The parameters are not meaningless and useless, but often a
rather good summary of the measurements.

The 3-parameter fit is described quite fully in my talk of 11 Sep
1997. I have finally written this up with pretty pictures for the PEMCS
Web Journal. Look at:

         http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/
                talks/Roger-Hockney/perfprof1.html

In truth we need to see a lot more data before judging the usefulness
of parametric fitting. That is why I would like to look at your NEC 
results. These need not be the timings from COMMS1, but any pingpong 
measurements that you regard as "good".

Please do not base your opinion on the results produced by the Original 
COMMS1 which is presently in the Parkbench suite. This will only produce 
satisfactory results for message lengths up to about 4*10^4 bytes. When 
used outside this range it may produce useless numbers.

>messages are multiples of 4 or 8 bytes, I support Charles' proposal to
>measure the time for an 8 byte message and call it the latency.

I am STRONGLY opposed to this. Latency is an ambiguous term that has 
different meanings to different people. If we wish to report the time 
for an 8-byte message, we should call it what it is, no more, no less, e.g.:

                     t(n=8B) = 45.6 us

To call this latency only leads to confusion and senseless 
misunderstanding and argument.

****************************************************************
EXAMPLE NEW COMMS1 OUTPUT FILE: T3E Results from
Grassl's 17 Dec 1996 email to Parkbench committee
****************************************************************
 

 =================================================
 ===                                           ===
 ===  GENESIS / ParkBench Parallel Benchmarks  ===
 ===                                           ===
 ===              comms1_mpi                   ===
 ===                                           ===
 =================================================

 Pingpong Benchmark:
 -------------------
 Measures time to send a message between two nodes on a multi-processor
 computer (MPP or network) as a function of the message length.

 It also characterises the time and corresponding bandwidth by both two
 and three performance parameters.

 Original code by Roger Hockney (1986/7), modified by Ian Glendinning
 and Ade Miller (1993/4), and by Roger Hockney and Ron Sercely (1997).

 -----------------------------------------------------------------------
 You are running the VERSION dated:    RWH-12-Mar-1997
 -----------------------------------------------------------------------

 The measurement time requested for each test case was  1.00E+00 seconds.

 No distinction was made between long and short messages.

 Zero length messages were not used in least squares fitting.

 -----------------------------------------------
 (1) PRIMARY MEASUREMENTS (BW=Bandwidth, B=Byte)
 -----------------------------------------------------------------------
           SPOT MEASURED VALUES        |   EVOLVING TWO-PARAMETER FIT
 --------------------------------------|--------------------------------
 POINT LENGTH(n)   TIME(t)   BW(r=n/t) |    rinf      nhalf     RMS rel
          B           s         B/s    |    B/s         B       error %
 *SPOT1*-------------------------------|--------------------------------
    1  8.000E+00  1.260E-05  6.349E+05 | 0.000E+00  0.000E+00  0.000E+00
    2  1.000E+01  1.348E-05  7.418E+05 | 2.273E+06  2.064E+01 -1.255E-06
    3  2.000E+01  1.380E-05  1.449E+06 | 1.237E+07  1.516E+02  2.277E+00
    4  3.000E+01  1.590E-05  1.887E+06 | 7.798E+06  9.157E+01  2.762E+00
    5  4.000E+01  1.561E-05  2.562E+06 | 1.020E+07  1.237E+02  3.267E+00
    6  5.000E+01  1.648E-05  3.034E+06 | 1.115E+07  1.366E+02  3.126E+00
    7  6.000E+01  1.618E-05  3.708E+06 | 1.364E+07  1.711E+02  3.796E+00
    8  7.000E+01  1.773E-05  3.948E+06 | 1.356E+07  1.699E+02  3.552E+00
    9  8.000E+01  1.694E-05  4.723E+06 | 1.562E+07  1.992E+02  4.072E+00
   10  9.000E+01  1.793E-05  5.020E+06 | 1.634E+07  2.095E+02  3.954E+00
   11  1.000E+02  1.802E-05  5.549E+06 | 1.741E+07  2.249E+02  3.983E+00
   12  1.100E+02  1.889E-05  5.823E+06 | 1.776E+07  2.300E+02  3.841E+00
   13  1.200E+02  1.780E-05  6.742E+06 | 1.983E+07  2.607E+02  4.483E+00
   14  1.300E+02  1.917E-05  6.781E+06 | 2.034E+07  2.682E+02  4.368E+00
   15  1.400E+02  1.902E-05  7.361E+06 | 2.131E+07  2.828E+02  4.405E+00
   16  1.500E+02  1.941E-05  7.728E+06 | 2.209E+07  2.946E+02  4.389E+00
   17  1.600E+02  1.896E-05  8.439E+06 | 2.353E+07  3.167E+02  4.644E+00
   18  1.700E+02  2.057E-05  8.264E+06 | 2.362E+07  3.179E+02  4.514E+00
   19  1.800E+02  1.911E-05  9.419E+06 | 2.526E+07  3.434E+02  4.887E+00
   20  1.900E+02  2.125E-05  8.941E+06 | 2.517E+07  3.420E+02  4.765E+00
   21  2.000E+02  1.894E-05  1.056E+07 | 2.730E+07  3.754E+02  5.382E+00
   22  2.100E+02  2.091E-05  1.004E+07 | 2.767E+07  3.812E+02  5.282E+00
   23  2.200E+02  2.011E-05  1.094E+07 | 2.885E+07  3.998E+02  5.393E+00
   24  2.300E+02  2.136E-05  1.077E+07 | 2.915E+07  4.047E+02  5.296E+00
   25  2.400E+02  2.015E-05  1.191E+07 | 3.053E+07  4.268E+02  5.496E+00
   26  2.500E+02  2.228E-05  1.122E+07 | 3.047E+07  4.258E+02  5.390E+00
   27  2.600E+02  2.144E-05  1.213E+07 | 3.110E+07  4.360E+02  5.365E+00
   28  2.700E+02  2.212E-05  1.221E+07 | 3.142E+07  4.412E+02  5.290E+00
   29  2.800E+02  2.111E-05  1.326E+07 | 3.249E+07  4.588E+02  5.417E+00
   30  2.900E+02  2.259E-05  1.284E+07 | 3.272E+07  4.626E+02  5.337E+00
   31  3.000E+02  2.284E-05  1.313E+07 | 3.294E+07  4.663E+02  5.262E+00
   32  4.000E+02  2.256E-05  1.773E+07 | 3.550E+07  5.098E+02  5.818E+00
   33  6.000E+02  2.549E-05  2.354E+07 | 4.022E+07  5.921E+02  6.632E+00
   34  8.000E+02  2.817E-05  2.840E+07 | 4.567E+07  6.883E+02  7.296E+00
   35  1.000E+03  3.253E-05  3.074E+07 | 4.887E+07  7.452E+02  7.451E+00
   36  2.000E+03  4.496E-05  4.448E+07 | 5.553E+07  8.657E+02  8.013E+00
   37  5.000E+03  6.135E-05  8.150E+07 | 7.983E+07  1.312E+03  1.090E+01
   38  1.000E+04  8.579E-05  1.166E+08 | 1.070E+08  1.814E+03  1.284E+01
   39  2.000E+04  1.294E-04  1.546E+08 | 1.339E+08  2.315E+03  1.426E+01
   40  3.000E+04  1.722E-04  1.742E+08 | 1.523E+08  2.659E+03  1.493E+01
   41  4.000E+04  2.161E-04  1.851E+08 | 1.647E+08  2.890E+03  1.524E+01
   42  5.000E+04  2.594E-04  1.928E+08 | 1.735E+08  3.056E+03  1.539E+01
   43  1.000E+05  4.534E-04  2.206E+08 | 1.847E+08  3.266E+03  1.575E+01
   44  2.000E+05  7.784E-04  2.569E+08 | 1.996E+08  3.548E+03  1.648E+01
   45  3.000E+05  1.110E-03  2.703E+08 | 2.123E+08  3.787E+03  1.701E+01
   46  5.000E+05  1.697E-03  2.946E+08 | 2.256E+08  4.039E+03  1.762E+01
   47  1.000E+06  3.276E-03  3.053E+08 | 2.370E+08  4.255E+03  1.806E+01
   48  2.000E+06  6.373E-03  3.138E+08 | 2.468E+08  4.440E+03  1.839E+01
   49  3.000E+06  9.489E-03  3.162E+08 | 2.547E+08  4.590E+03  1.858E+01
   50  5.000E+06  1.569E-02  3.187E+08 | 2.612E+08  4.714E+03  1.870E+01
   51  1.000E+07  3.134E-02  3.191E+08 | 2.666E+08  4.816E+03  1.874E+01
 *SPOT2*----------------------------------------------------------------

                            ------------------------
                            COMMS1: Message Pingpong
                            ------------------------
                                 Result Summary    
                                 --------------    

 -------------------
 (2) KEY SPOT VALUES
 -------------------
                                        ----------------------- 
 *KEY1* Shortest  n =   8.000E+00 B,   |  t =   1.260E-05 s    |  ******
                                       |                       |  ******
 *KEY2* Longest   n =   1.000E+07 B,   |  r =   3.191E+08 B/s  |  ******
                                        ----------------------- 


 -------------------------------------------------------------------------
 ------------------------------------------
 (3) BEST TWO-PARAMETER LINEAR-(t vs n) FIT
 ------------------------------------------
 (Minimises sum of squares of relative error at all points being fitted)

 Root Mean Square (RMS) Relative Error in time =    18.74 %
                Maximum Relative Error in time =    43.61 %  at POINT =  1

 This is a fit to ALL points. Even though different expressions are
 given for short and long messages, they are algebraically identical
 and either may be used for any message length in the full range.

 --------------
 Short Messages
 --------------
 Best expressions to use if nhalf > 0 and n <= nhalf = 4.816E+03 B

 Bandwidth fitted to:    r = pi0*n/(1+n/nhalf)

 Time fitted to:         t = t0*(1+n/nhalf)

           --------------------------------------------
 *LIN1*   |  pi0 = 5.536E+04 Hz,   nhalf= 4.816E+03 B  |   ******
          |                                            |   ******
 *LIN2*   |   t0 = 1/pi0 = 1.807E-05 s                 |   ******
           --------------------------------------------

 Spot comparison at POINT = 1, n = 8.000E+00 B

   t(fit) = 1.810E-05 s,  t(measured) = 1.260E-05 s,  relative error in time =   43.6 %

 -------------
 Long Messages
 -------------
 Best expressions to use if  n > nhalf = 4.816E+03 B, or nhalf=0

 Bandwidth fitted to:     r = rinf/(1+nhalf/n)

 Time fitted to:          t = (n+nhalf)/rinf

           -----------------------------------------------
 *LIN3*   |  rinf = 2.666E+08 B/s,  nhalf = 4.816E+03 B   |   ******
           -----------------------------------------------

 Spot comparison at POINT =  51, n = 1.000E+07 B

   r(fit) = 2.665E+08 B/s,  r(measured) = 3.191E+08 B/s,  relative error in B/W =  -16.5 %


 -------------------------------------------------------------------------
 ---------------------------------------
 (4) BEST 3-PARAMETER VARIABLE-POWER FIT
 ---------------------------------------

 Root Mean Square (RMS) Relative Error in B/W =    6.89 %
                Maximum Relative Error in B/W =  -13.41 %  at POINT = 39

 This fit is to ALL data points

 Bandwidth is fitted to:   rvp = rivp/(1+(navp/n)^gamvp)^(1/gamvp)

 Time is fitted to:        tvp = t0vp*(1+(n/navp)^gamvp)^(1/gamvp)

 where                    t0vp = navp/rivp  and  navp = t0vp*rivp

 When gamvp = 1.0, this form reduces to the linear-time form (3) above,
 navp becomes nhalf, and rivp becomes rinf.

 The three independent parameters are (t0vp is derived):

          -------------------------------------------------------------
 *VPWR1* | rivp = 3.475E+08 B/s, navp = 3.670E+03 B, gamvp = 4.190E-01 |
         |                                                             |
 *VPWR2* | t0vp = navp/rivp =  1.056E-05 s                             |  
          -------------------------------------------------------------
 This function is guaranteed to fit the first and last measured values  
 of time  and bandwidth. It also fits the (interpolated) time and  
 bandwidth at n = navp.
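The variable-power formulae in section (4) above can be checked directly. The following Python fragment (a standalone illustration, not part of any COMMS release; all names are invented) evaluates the model with the *VPWR1* parameters and confirms that bandwidth and time are consistent (r = n/t), and that gamvp = 1 reduces to the linear-time form of section (3):

```python
# Illustration of the variable-power model printed above, using the
# fitted T3E parameters from *VPWR1* as example inputs.

def t_vp(n, rivp, navp, gamvp):
    """Time model: tvp = t0vp*(1+(n/navp)^gamvp)^(1/gamvp), t0vp = navp/rivp."""
    t0vp = navp / rivp
    return t0vp * (1.0 + (n / navp) ** gamvp) ** (1.0 / gamvp)

def r_vp(n, rivp, navp, gamvp):
    """Bandwidth model: rvp = rivp/(1+(navp/n)^gamvp)^(1/gamvp); equals n/tvp."""
    return rivp / (1.0 + (navp / n) ** gamvp) ** (1.0 / gamvp)

rivp, navp, gamvp = 3.475e8, 3.670e3, 4.190e-1   # the *VPWR1* values above

t_example = t_vp(1.0e4, rivp, navp, gamvp)   # model time for a 10^4 B message
r_example = r_vp(1.0e4, rivp, navp, gamvp)   # model bandwidth at the same length
```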

-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Mon Jan 19 13:10:51 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA16306; Mon, 19 Jan 1998 13:10:50 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA21116; Mon, 19 Jan 1998 12:53:17 -0500 (EST)
Received: from haze.vcpc.univie.ac.at (haze.vcpc.univie.ac.at [131.130.186.138]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id MAA21105; Mon, 19 Jan 1998 12:53:14 -0500 (EST)
Received: (from smap@localhost)
	by haze.vcpc.univie.ac.at (8.8.6/8.8.6) id SAA21164
	for <parkbench-comm@CS.UTK.EDU>; Mon, 19 Jan 1998 18:53:11 +0100 (MET)
From: Ian Glendinning <Ian.Glendinning@vcpc.univie.ac.at>
Received: from fidelio(131.130.186.155) by haze via smap (V2.0beta)
	id xma021162; Mon, 19 Jan 98 18:52:48 +0100
Received: (from ian@localhost) by fidelio.vcpc.univie.ac.at (8.7.5/8.7.3) id SAA03411 for parkbench-comm@CS.UTK.EDU; Mon, 19 Jan 1998 18:52:48 +0100 (MET)
Date: Mon, 19 Jan 1998 18:52:48 +0100 (MET)
Message-Id: <199801191752.SAA03411@fidelio.vcpc.univie.ac.at>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Low Level benchmark errors and differences
X-Sun-Charset: US-ASCII

Dear parkbench-comm subscriber,

I have been following the discussions regarding the low-level ParkBench
benchmarks over the last couple of weeks with interest, but so far I have
been content to keep my head below the parapet, as most of the things I would
have said have been said by others anyway.  However, there is one thing that I
would like to point out.

On Wed Jan  7 22:56:04 1998, Charles Grassl wrote:

> The Low Level programs are obsolete and need to be replaced.

I agree that the existing code could use some improvement, though most of the
discussion seems to have revolved around the version in the "current release",
which as Roger has pointed out several times is very old, and he has written
an improved version.  Have people tried that version out?

> I have
> written seven simple programs, with MPI and PVM versions, and offer them
> as a replacement for the Low Level suite.

I have tried a version of Charles's "comms1" code that he sent me, on our CS-2
system, and found that it reported approximately half the expected asymptotic
bandwidth, so this code is not without its problems either!  By "expected", I
mean the bandwidth reported by various versions of (the ParkBench version of)
COMMS1 over the years, coded using first PARMACS, then PVM, and more recently
MPI, as a message-passing library.  This value corresponds closely to what one
would expect for the peak performance, given the performance figures for the
underlying hardware.  For an explanation of what I think is happening, please
read on...

On Thu Jan 15 20:20:36 1998, Charles Grassl wrote:

> This recorded
> time is for a round trip message, and is not precisely the time for
> two messages.  Half the round trip message passing time, as reported in
> the PMB tests, is not the time for a single message and should not be
> reported as such.  This same erroneous technique is used in the COMMS1
> and COMMS2 benchmarks.  (Is Parkbench responsible for propagating
> this incorrect methodology?)

As Pat Worley and Rolf Hempel pointed out, the ping-pong is used because of
the difficulty in measuring the time for one-way messages, and I believe that
this is illustrated in this instance, as it seems that Charles's attempt to
time one-way messages has caused the unexpectedly low asymptotic bandwidth
measurement.  Charles's code executes a send, and then as fast as
possible executes another one, without any concern as to whether the
data has left the sending processor or has arrived at the receiving
processor.  What I think is happening is that his code is queuing
requests to send before the previous messages have left the sending
processor, forcing the MPI implementation to buffer them at the cost of
an extra copy operation which would not otherwise have been necessary,
thus reducing the effective bandwidth!

> With respect to low level testing, the round trip exchange of messages,
> as per PingPing and PingPong in PMB or COMMS1 and COMMS2, is not
> characteristic of the lowest level of communication.  This pattern
> is actually rather rare in programming practice.  It is more common
> for tasks to send single messages and/or to receive single messages.

It seems to me that it is not very common programming practice to send a
sequence of messages to the same destination in rapid fire, without having
either done some intermediate processing, or waiting to get some response
back.  If you were trying to code efficiently, you would doubtless merge
the messages into one, and send the data all together in one message, if it
was all available already, which it must have been if you were able to
execute the sends so rapidly one after another! 

> The single message passing is a distinctly different case from that
> of round trip tests.  We should be worried that the round trip testing
> might introduce artifacts not characteristic of actual (low level) usage.
> We need a better test of basic bandwidth and latency in order to measure
> and characterize message passing performance.

Well, it seems that in this case the attempt to measure the single
message passing case has introduced an artifact.  To an extent it
depends what you are trying to measure, of course, but it has always
been my understanding that the COMMS1 benchmark was trying to measure
the peak performance that you could reasonably expect to obtain using a
portable message-passing library interface.  For a good implementation
of MPI, this ought to come close to the theoretical hardware limit,
which is precisely what the existing COMMS1 ping-pong code achieves on
our system.  I would therefore argue in favour of retaining the
ping-pong technique for obtaining timings.
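The ping-pong arithmetic itself is easy to sketch. The fragment below is purely illustrative (Python threads and queues standing in for an MPI send/receive pair, with invented names): time many round trips with the receiver already waiting, then take half the mean round-trip time as the one-way estimate.

```python
# Illustrative sketch of the ping-pong timing technique, using threads
# and queues as a stand-in for message passing. Not from COMMS1 or PMB.

import threading
import queue
import time

def pingpong_time(nbytes, reps=200):
    """Estimate one-way message time as half the mean round-trip time."""
    to_echo, back = queue.Queue(), queue.Queue()

    def echoer():
        # The "receiver already waiting" case: bounce every message back.
        for _ in range(reps):
            back.put(to_echo.get())

    t = threading.Thread(target=echoer)
    t.start()
    msg = bytes(nbytes)
    start = time.perf_counter()
    for _ in range(reps):
        to_echo.put(msg)   # ping
        back.get()         # pong
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / (2.0 * reps)   # half the mean round trip

one_way_8B = pingpong_time(8)   # one-way estimate for an 8-byte message
```

Note that halving the round trip is exactly the approximation under discussion: it is an estimate, not a direct measurement, of the one-way time.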
  Ian
--
Ian Glendinning         European Centre for Parallel Computing at Vienna (VCPC)
ian@vcpc.univie.ac.at   Liechtensteinstr. 22, A-1090 Vienna, Austria
Tel: +43 1 310 939612   WWW: http://www.vcpc.univie.ac.at/~ian/

From owner-parkbench-comm@CS.UTK.EDU Tue Jan 20 08:50:06 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id IAA06977; Tue, 20 Jan 1998 08:50:06 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id IAA01200; Tue, 20 Jan 1998 08:28:44 -0500 (EST)
Received: from sun1.ccrl-nece.technopark.gmd.de (sun1.ccrl-nece.technopark.gmd.de [193.175.160.67]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id IAA01193; Tue, 20 Jan 1998 08:28:39 -0500 (EST)
Received: from sgi7.ccrl-nece.technopark.gmd.de (sgi7.ccrl-nece.technopark.gmd.de [193.175.160.89]) by sun1.ccrl-nece.technopark.gmd.de (8.7/3.4W296021412) with SMTP id OAA12945; Tue, 20 Jan 1998 14:19:53 +0100 (MET)
Received: (from hempel@localhost) by sgi7.ccrl-nece.technopark.gmd.de (950413.SGI.8.6.12/950213.SGI.AUTOCF) id OAA09828; Tue, 20 Jan 1998 14:19:52 +0100
Date: Tue, 20 Jan 1998 14:19:52 +0100
From: hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)
Message-Id: <199801201319.OAA09828@sgi7.ccrl-nece.technopark.gmd.de>
To: cmg@cray.com
Subject: Re: Low Level Benchmarks
Cc: hempel@ccrl-nece.technopark.gmd.de, parkbench-comm@CS.UTK.EDU
Reply-To: hempel@ccrl-nece.technopark.gmd.de

Dear Charles,

thank you for your note, and for sending me your simple test program.
One thing I like about the program is that it's easy to install and run;
no complicated makefiles, include files and sophisticated driver
software. We had the code running in five minutes.

In many points I agree with Ian Glendinning, who has already reported on
his tests with your code on the Meiko system. When we ran the test on
our SX-4, however, the results were very similar to ping-pong figures.
With the particular MPI version I used for my measurements, the
classical ping-pong test as implemented in MPPTEST of the MPICH
distribution gives about 4 usec less time in latency and about 4%
higher throughput than your test program. The reason for the increase in
latency as reported by your code is fully explained by the fact that
you forgot to correct for the time spent in the timer routine (see
below). So, we would have no problem with adopting a corrected version
of your code as the basic communication test. However, I think that
this is not the point.

The question we have to answer is what communication pattern we want
to measure with our benchmark code. In my view the ping-pong technique,
with all its problems, is much closer to a typical application than
your program. Of course, the situation "receiver already waiting"
implemented by the ping-pong, is a special case which will not be
found for all messages in an application. In this situation,
the MPI implementation can use a more efficient protocol, which will
lead to a best case measurement of latency and throughput.

I agree with Ian that the rapid succession of messages in one direction
is very untypical. Only a stupid programmer would do it this way in
an application, and not aggregate the messages to a larger one. What
you really measure with this benchmark is how well the MPI library
can deal with this kind of congestion. As you see, our library is not
affected at all by this, but, as Ian reported, the Meiko shows very
different behaviour. In a sense, you measure a kind of worst-case
scenario, as opposed to the best-case one in the ping-pong.

One technical detail of your program: You time every send operation
separately, and then sum up the individual times. This requires a quite
accurate clock. I would expect that some machines could run into
trouble with this approach. Also, you don't correct for the time needed
for calling the timer twice for every send/receive. On machines with
highly optimized MPI libraries this is not at all negligible. On our
machine two timer calls require as much time as 25% of a complete
send-receive sequence!
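The correction described above can be sketched as follows (a hypothetical illustration in Python; a real benchmark would use the machine's own high-resolution timer, and the names here are invented): estimate the average cost of the pair of timer calls that bracket each measurement, then subtract it from every timed interval.

```python
# Sketch of the timer-call correction: measure the cost of a back-to-back
# start/stop pair of timer calls and remove it from each timed interval.
# Illustrative only; not from any benchmark source.

import time

def timer_pair_overhead(samples=10000):
    """Average cost of one start/stop pair of timer calls."""
    start = time.perf_counter()
    for _ in range(samples):
        time.perf_counter()   # the "start" call of a hypothetical measurement
        time.perf_counter()   # the "stop" call
    return (time.perf_counter() - start) / samples

def corrected_interval(raw_seconds, overhead):
    """Measured interval with the timer cost removed (floored at zero)."""
    return max(raw_seconds - overhead, 0.0)

ovh = timer_pair_overhead()
```

On a machine where an MPI send/receive takes only a few timer-call times, skipping this correction inflates the reported latency exactly as described above.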

As a summary, your basic communication program does not convince me as
a better alternative to ping-pong programs such as MPPTEST. The only
thing I really like about it is its simplicity.

Best regards,
  Rolf
------------------------------------------------------------------------
Rolf Hempel      (email: hempel@ccrl-nece.technopark.gmd.de)
Senior Research Staff Member
C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10,
53757 Sankt Augustin, Germany
Tel.: +49 (0) 2241 - 92 52 - 95
Fax:  +49 (0) 2241 - 92 52 - 99

From owner-parkbench-comm@CS.UTK.EDU Wed Jan 21 11:22:06 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id LAA27346; Wed, 21 Jan 1998 11:22:06 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA20207; Wed, 21 Jan 1998 10:55:58 -0500 (EST)
Received: from sun1.ccrl-nece.technopark.gmd.de (sun1.ccrl-nece.technopark.gmd.de [193.175.160.67]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id KAA20176; Wed, 21 Jan 1998 10:55:44 -0500 (EST)
Received: from sgi7.ccrl-nece.technopark.gmd.de (sgi7.ccrl-nece.technopark.gmd.de [193.175.160.89]) by sun1.ccrl-nece.technopark.gmd.de (8.7/3.4W296021412) with SMTP id QAA01123; Wed, 21 Jan 1998 16:50:13 +0100 (MET)
Received: (from hempel@localhost) by sgi7.ccrl-nece.technopark.gmd.de (950413.SGI.8.6.12/950213.SGI.AUTOCF) id QAA11663; Wed, 21 Jan 1998 16:54:00 +0100
Date: Wed, 21 Jan 1998 16:54:00 +0100
From: hempel@ccrl-nece.technopark.gmd.de (Rolf Hempel)
Message-Id: <199801211554.QAA11663@sgi7.ccrl-nece.technopark.gmd.de>
To: parkbench-comm@CS.UTK.EDU
Subject: NEW COMMS1 benchmark
Cc: eckhard@ess.nec.de, tbeckers@ess.nec.de,
        lonsdale@ccrl-nece.technopark.gmd.de,
        maciej@ccrl-nece.technopark.gmd.de,
        ritzdorf@ccrl-nece.technopark.gmd.de,
        zimmermann@ccrl-nece.technopark.gmd.de, springstubbe@gmd.de,
        hempel@ccrl-nece.technopark.gmd.de
Reply-To: hempel@ccrl-nece.technopark.gmd.de

In the recent discussion on the low-level benchmarks, Roger repeatedly
asked us to base our evaluation of the COMMS1 benchmark on his new
version, and not on the one which is still in the official PARKBENCH
distribution. At NEC we now have repeated the tests on the NEC SX-4
machine, and I would like to make a few comments on the results.

First of all, the raw data as reported by the table Primary Measurements
more closely match the figures given by other ping-pong tests than the
older version. The correction for overheads, however, is still
problematic for the following reasons:
1. In every loop iteration, the returned message is compared with the
   message sent. If one is concerned with the correctness of the MPI
   library, this could be checked in a separate loop before the timing
   loop. The check inside the timing loop, done only by the sender
   process, delays the sender and thus makes sure that the receiver
   is already waiting in the receive for the next message. This
   reinforces the "Receiver ready" situation which I discussed in an
   earlier mail.
2. The authors take great care in correcting for the overhead introduced
   by the do loop. This is done by the loop over the dummy routine
   before the main loop. On the other hand, the correction for the
   check routine call introduces an overhead of one timer call which
   is NOT taken into account. (Here I assume that the internal clock
   is read out at a fixed point in time during every call of
   DWALLTIME00().) I would argue that on most machines the loop overhead
   per iteration is negligible as compared to a function call. On our
   machine, MPI_Wtime calls a C function which in turn calls an
   assembly language routine. The time needed for this is about 10%
   of our message latency!

Another problem in the measuring procedure is that the test message
contains a single constant, repeated as many times as there are words in
the message. Did the authors never think about the possibility of
data compression in interconnect systems? I would not be surprised to
see bandwidths of Terabytes/sec on some Ethernet connection between
workstations.

Apart from this, the raw data are much better now than they were before,
and once the above points are fixed, the resulting table will be
satisfactory. The interesting question is, however, how much added value
we get from the parameter fitting. In my earlier note, I called the
fitting procedure in the earlier COMMS1 benchmark "crude". I cannot
find a more appropriate word for a model which in some cases deviates from
the measured values by more than 100%.

So, how much improvement do we get from the revised COMMS1 version?
As Roger said himself, the increase in modeling sophistication led
to a more complicated output file. Results are now given for two
models, the first one using two parameters, and the second one three.
As could be expected, the two-parameter model works no better than it did
in the previous version. For our machine, the latency is over-estimated
by 18.9 percent, and the bandwidth at the last data point is off by
27%. Since a linear model is just too simple to be applied to modern
message-passing libraries, I wonder why these results are still in the
output file at all.
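For reference, the two-parameter model under discussion is the straight line t(n) = t0 + n/r_inf. A least-squares fit of it can be sketched in C as follows; the function and variable names are illustrative, not the COMMS1 routines.

```c
#include <stddef.h>

/* Ordinary least-squares fit of t(n) = t0 + n/rinf, the two-parameter
 * linear model.  n[] holds message lengths, t[] the measured times;
 * outputs are the latency estimate *t0 and asymptotic rate *rinf.
 * Returns 0 on success, -1 for degenerate or non-physical data. */
int fit_two_param(const double *n, const double *t, size_t m,
                  double *t0, double *rinf)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < m; i++) {
        sx  += n[i];        sy  += t[i];
        sxx += n[i] * n[i]; sxy += n[i] * t[i];
    }
    double denom = (double)m * sxx - sx * sx;
    if (denom == 0.0) return -1;                         /* all n equal */
    double slope = ((double)m * sxy - sx * sy) / denom;  /* slope = 1/rinf */
    if (slope <= 0.0) return -1;                         /* no positive rate */
    *t0 = (sy - slope * sx) / (double)m;
    *rinf = 1.0 / slope;
    return 0;
}
```

Real timing curves are piecewise (different protocols for different message sizes), which is exactly why a single straight line fitted through all the points misses both ends, as the 18.9% and 27% deviations show.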

The three-parameter fit is better than the two-parameter one. The
major advantage is that it exactly matches the first data point in
time, and the last data point in bandwidth. That is what people would
look at, if there were no parameter fitting at all. So, the reported
latency is the time measured for a zero-byte message, and is as good or
as bad as this measurement. For our MPI library, the RMS fitting
error for the whole data set is 14.04%, and the maximum relative error
is 33.4%. We can now discuss the meaning of the word "crude" (and I
apologize if, as a non-native speaker, I don't use the right word here),
but I would at least call it unsatisfactory. Given those differences
between model and measurements, I was not surprised to see the
projected RINFINITY as being too high. The 7.65 GBytes/s are well
beyond a memcpy operation in our shared memory, and measured rates never
exceeded 7.1 GBytes/s.

To summarize, in my opinion there is no added value given by the
parameter fitting. The latency value is the first entry in the raw
data table, and the asymptotic bandwidth is easy to figure out by just
looking at the bandwidths as measured for very long messages. As
explained above, the extrapolation by the parametrized model does not
add any precision as compared with a guess based on the long-message
table entries. For message lengths in between, what good is a model to
me if it deviates from the measurements by up to 33%? So, my conclusion
would be to drop the whole parameter fitting from the PARKBENCH
low-level routines.

In a separate mail I will send the COMMS1 benchmark output, as produced
with our MPI library, to Roger. I don't want to swamp the whole
PARKBENCH forum with the detailed data.

Best regards,
 Rolf
------------------------------------------------------------------------
Rolf Hempel      (email: hempel@ccrl-nece.technopark.gmd.de)
Senior Research Staff Member
C&C Research Laboratories, NEC Europe Ltd., Rathausallee 10,
53757 Sankt Augustin, Germany
Tel.: +49 (0) 2241 - 92 52 - 95
Fax:  +49 (0) 2241 - 92 52 - 99

From owner-parkbench-comm@CS.UTK.EDU Fri Jan 23 12:24:12 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA07290; Fri, 23 Jan 1998 12:24:11 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA06737; Fri, 23 Jan 1998 12:04:42 -0500 (EST)
Received: from post.mail.demon.net (post-10.mail.demon.net [194.217.242.154]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA06686; Fri, 23 Jan 1998 12:04:23 -0500 (EST)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa1003594; 23 Jan 98 16:49 GMT
Message-ID: <1GgxMFAgVMy0EwfI@minnow.demon.co.uk>
Date: Fri, 23 Jan 1998 16:29:20 +0000
To: hempel@ccrl-nece.technopark.gmd.de
Cc: parkbench-comm@CS.UTK.EDU, eckhard@ess.nec.de, tbeckers@ess.nec.de,
        lonsdale@ccrl-nece.technopark.gmd.de,
        maciej@ccrl-nece.technopark.gmd.de,
        ritzdorf@ccrl-nece.technopark.gmd.de,
        zimmermann@ccrl-nece.technopark.gmd.de, springstubbe@gmd.de
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Re: NEW COMMS1 benchmark
In-Reply-To: <199801211554.QAA11663@sgi7.ccrl-nece.technopark.gmd.de>
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>

To: The Parkbench discussion group

From: Roger Hockney


First the 3-parameter fit that is produced by New COMMS1 and discussed
by Rolf can be found in the html version of this reply at:

              www.minnow.demon.co.uk/Pbench/emails/hempel1.htm

Or by bringing up the PICT tool on your browser at:

              www.minnow.demon.co.uk/pict/source/pict2a.html
Then:

(1) select a suitable frame size for the PICT display

(2) change the data URL at top from .../data/t3e.res to .../data/sx4.res

(3) press the "GET DATA at URL" button, and the data should download.

(4) press the 3-PARA button then the APPLY3 button, and the 3-para curve
should be drawn.

************************************************************************
Rolf has especially asked me to point out that the results that he has
supplied are for the SX4 using Release 7.2 MPI software, which will
soon be replaced by a newer version with significantly better latency
and bandwidth. These data therefore do not represent the best that can
be achieved on the SX4.
************************************************************************

I now reply to specific points in Rolf Hempel's email to the group on 21 Jan
1998.
 
>In the recent discussion on the low-level benchmarks, Roger repeatedly
>asked us to base our evaluation of the COMMS1 benchmark on his new
>version, and not on the one which is still in the official PARKBENCH
>distribution. At NEC we now have repeated the tests on the NEC SX-4
>machine, and I would like to make a few comments on the results.
>
Thank you, Rolf, for taking the trouble to install New COMMS1 and
sending me the results. I discuss the results below. In answer to your
other points: 

>First of all, the raw data as reported by the table Primary Measurements
>more closely match the figures given by other ping-pong tests than the
>older version. The correction for overheads, however, is still

The two points you raise could easily be incorporated in the code. I was
reluctant to tamper with the measurement part of the COMMS1 code because
it would introduce systematic differences in the measurements and make
comparison with older measurements invalid. But of course this has to be
done from time to time. My changes were deliberately kept to a minimum
and confined largely to the parameter fitting part which was causing the
main problems being reported.

>Another problem in the measuring procedure is that the test message
>contains a single constant, repeated as many times as there are words in
>the message. Did the authors never think about the possibility of
>data compression in interconnect systems? I would not be surprised to
>see bandwidths of Terabytes/sec on some Ethernet connection between
>workstations.

Yes, I did think about this, but decided I did not know enough about
compression to devise a way to prevent it. Compression algorithms are so
clever now that this may be impossible to do. Anyway, this is not yet a
problem, so I suggest we leave it until it becomes one. Perhaps software
should get credit in its performance numbers for the use of compression,
but then we need something more difficult than a sequence of constants
to use as a standard test.
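One simple way to make the test message hard to compress, sketched here in C with illustrative names (not part of COMMS1), is to fill the buffer from a cheap pseudo-random generator instead of repeating one constant:

```c
#include <stdint.h>
#include <stddef.h>

/* Fill a test message with a pseudo-random word pattern so that a
 * link-level compressor cannot shrink it.  A 64-bit linear congruential
 * generator (Knuth's MMIX constants) is deterministic and cheap, so the
 * receiver can regenerate the same sequence to verify the message. */
void fill_incompressible(uint64_t *buf, size_t nwords, uint64_t seed)
{
    uint64_t x = seed ? seed : 0x9E3779B97F4A7C15ULL;
    for (size_t i = 0; i < nwords; i++) {
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
        buf[i] = x;
    }
}
```

Because the fill is deterministic in the seed, the correctness check outside the timing loop stays as easy as it is with a constant message.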
>
>Apart from this, the raw data are much better now than they were before,
>and when the above points were fixed, the resulting table would be
>satisfactory. 

I would have no objection to this.

>The interesting question is, however, how much added value
>we get from the parameter fitting. In my earlier note, I called the

The added value provided in the case of the NEC SX4 results is that the
3-parameter fit (see graph) gives a satisfactory fit to ALL the data.
This reduces 112 numbers to 3 numbers and an analytic formula that can
be manipulated. This is called "Performance Characterisation" and
provides very useful data compression. Furthermore the parameters
themselves can be interpreted as characterising various aspects of the
shape and asymptotes of the performance curve. 
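As a sketch of what such a characterisation looks like: in the classic (r_inf, n_half) form, t(n) = (n + n_half)/r_inf, so the achieved rate is r(n) = n/t(n) = r_inf/(1 + n_half/n), and by construction r(n_half) = r_inf/2. The C below evaluates only this two-parameter form; the exact three-parameter formula used by New COMMS1 is not reproduced here.

```c
/* Classic (r_inf, n_half) performance characterisation.
 * rinf  = asymptotic bandwidth, nhalf = message length achieving
 * half of rinf.  Two-parameter sketch only; the New COMMS1
 * three-parameter model adds a further shape parameter. */

double model_time(double n, double rinf, double nhalf)
{
    return (n + nhalf) / rinf;          /* time for an n-byte message */
}

double model_bandwidth(double n, double rinf, double nhalf)
{
    return n / model_time(n, rinf, nhalf);  /* = rinf/(1 + nhalf/n) */
}
```

The analytic form is what makes the compression useful: once (r_inf, n_half) are known, the time or rate at any message length follows from two numbers instead of a table lookup.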

In contrast, reporting just the first time and the last performance value,
and calling them the Latency and Bandwidth, only tells us about these two
points. Further, the choice of which message lengths to use for this type
of definition is entirely arbitrary and open to much argument at both
ends. However, New COMMS1 does provide this type of output in the lines
marked KEY SPOT VALUES, but I deliberately avoided calling them values of
Latency and Bandwidth in order to avoid senseless argument.

Some people are very interested in the parametric representations,
others not. One is not obliged to use or look at the parametric
representations, but they are there for those who want them. For those
interested just in the raw data, they are reported first in the output
file of New COMMS1.


>As could be expected, the two-parameter model does not work better than
>in the previous version. For our machine, latency is over-estimated
>by 18.9 percent, and the bandwidth at the last data point is off by
>27%. Since a linear model is just too simple to be applied to modern
>message-passing libraries, I wonder why these results are still in the
>output file at all.

The 2-PARA results are reported just so that one can see that they are
unsatisfactory, and that therefore one must lose simplicity and consider
a 3-para fit. Actually there is a switch that can be set in the
comms1.inc file to suppress reporting of output if the errors exceed
specified values. Every time I have used this, however, I have tended to
rerun with the output on, in order to see just what the 2-para gave. If
the 2-para can be accepted it is much preferable to the 3-para because
of its simplicity and clearer interpretation of the significance of the
parameters.

>as bad as this measurement. For our MPI library, the RMS fitting
>error for the whole data set is 14.04%, and the maximum relative error
>is 33.4%. We now can discuss the meaning of the word "crude" (and I

If you look at the graph itself (see above), I think you will find the
agreement much more satisfactory than is apparent from the reported
errors. You may also expect more from parametric fitting than it can
reasonably provide, especially for data with discontinuities.

In my experience, agreement in RMS error is rarely better than 7%, and
anything up to 30% is probably still useful. A maximum error of 30% is
not bad at all, and may be due to a single rogue point or an isolated
discontinuity. Although error numbers are reported in the output, one
really has to look at the graph of all the data before drawing conclusions.

>Given those differences
>between model and measurements, I was not surprised to see the
>projected RINFINITY as being too high. The 7.65 GBytes/s are well
>beyond a memcpy operation in our shared memory, and measured rates never
>exceeded 7.1 GBytes/s.

Actually 7.65 differs from 7.1 by 8%, which is very good agreement
indeed. 

>To summarize, in my opinion there is no added value given by the
>parameter fitting. The latency value is the first entry in the raw
>data table, and the asymptotic bandwidth is easy to figure out by just
>looking at the bandwidths as measured for very long messages. As

Your definitions of Latency and Bandwidth will have to be more precise
than the above. What does "looking at the B/W for very long messages"
actually mean? What are "very long messages"? What message length
should the first entry in the raw data table be for? ... etc.

>explained above, the extrapolation by the parametrized model does not
>add any precision as compared with a guess based on the long-message
>table entries. 

Strictly speaking, it is invalid to extrapolate the fitted curve outside
the range of measured values. However we will always do this, and in
this case the fit predicts the known hardware limit as well as can be
reasonably expected.

>For message lengths in between, what good is a model to
>me if it deviates from the measurements by up to 33%? So, my conclusion
>would be to drop the whole parameter fitting from the PARKBENCH
>low-level routines.

I think the graph of the results and the 3-para fit shows remarkably
good and useful agreement. But this is a subjective personal opinion.
What do others think?

Best wishes

Roger
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk
University of   and link to my new book: "The Science of Computer Benchmarking"
Westminster UK  suggestions welcome. Know any fish movies or suitable links?

From owner-parkbench-comm@CS.UTK.EDU Mon Jan 26 06:39:21 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA02920; Mon, 26 Jan 1998 06:39:21 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA09063; Mon, 26 Jan 1998 06:22:47 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA09055; Mon, 26 Jan 1998 06:22:36 -0500 (EST)
Received: from mordillo (p112.nas1.is3.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA12226; Mon, 26 Jan 98 11:19:15 GMT
Date: Mon, 26 Jan 98 10:14:38 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Re: Low Level Benchmarks 
To: Charles Grassl  <cmg@cray.com>, parkbench-comm@CS.UTK.EDU
Cc: solchenbach@pallas.de
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <199801151711.RAA07227@magnet.cray.com> 
Message-Id: <Chameleon.885813075.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Charles,

Thanks for your thoughts and experiences with the Pallas PMB codes -
I will forward them to the authors...  The main points in favour of
the PMB codes are that they are in C and potentially produce results
for a variety of MPI calls... Obviously if the results they produce are
flawed...

Regarding new low-level codes, I would be in favour of taking up your
kind offer of writing a set of codes in C/Fortran. I guess the main
problem is reaching a consensus on the methodology and measurements
that are used with these codes.

Maybe we can decide that a number of actions should be undertaken...

1) It seems clear that no one is 100% happy with the current version
   of the low-level codes. So, this implies that they need to be 
   replaced !?

2) If we are going to replace the codes we can go down a couple of routes;
   start from scratch, replace with Roger's new codes or some combination of
   both...

3) I would be happy to see us start from scratch and  create 
   C/Fortran codes where the methodology and design of each can be
   "hammered out" by discussion first and then implemented 
   (and iterated as necessary).

4) Assuming that we want to go down this route, I suggest we make a starting
   point of Charles' "suggestions and requirements for the low level 
   benchmark design" - towards the end of this email. I am happy to
   put these words on the web and update/change them as our discussions
   evolve...

5) Charles has offered his services to help write/design/test these new codes -
   I'm willing to offer my services in a similar fashion. I'm sure that others
   interested in the low-level codes could contribute something here as well. 

Overall, it seems clear to me that we have enough energy and manpower to
produce a new set of low-level codes whose methodology and design are correct
and relevant to today's systems...

I look forward to your comments...

Regards

Mark 






--- On Thu, 15 Jan 1998 11:11:39 -0600 (CST)  Charles Grassl <cmg@cray.com> wrote:
> 
> To:      Parkbench interests
> From:    Charles Grassl
> Subject: Low Level benchmarks
> 
> Date:    15 January, 1998
> 
> 
> Mark, thank you for pointing us to the PMB benchmark.  It is well written
> and coded, but has some discrepancies and shortcomings.  My comments
> lead to suggestions and recommendation regarding low level communication
> benchmarks.
> 
> First, in program PMB the PingPong tests are twice as fast (in time)
> as the corresponding message length tests in the PingPing tests (as run
> on a CRAY T3E).  The calculation of the time and bandwidth is incorrect
> by a factor of 100% in one of the programs.
> 
> This error can be fixed by recording, using and reporting the actual
> time, amount of data sent and their ratio.  That is, the time should not
> be divided by two in order to correct for a round trip.  This recorded
> time is for a round trip message, and is not precisely the time for
> two messages.  Half the round trip message passing time, as reported in
> the PMB tests, is not the time for a single message and should not be
> reported and such.  This same erroneous technique is used in the COMMS1
> and COMMS2 two benchmarks.  (Is Parkbench is responsible for propagating
> this incorrect methodology.)
> 
> In program PMB, the testing procedure performs a "warm up".  This
> procedure is a poor testing methodology because it discards important
> data.  Testing programs such as this should record all times and calculate
> the variance and other statistics in order to perform error analysis.
> 
> Program PMB does not measure contention or allow extraction of network
> contention data.  Tests "Allreduce" and "Bcast" and several others
> stress the inter-PE communication network with multiple messages,
> but it is not possible to extract information about the contention from
> these tests.  The MPI routines for Allreduce and Bcast have algorithms
> which change with respect to number of PEs and message lengths.  Hence,
> without detailed information about the specific algorithms used, we cannot
> extract information about network performance or further characterize
> the inter-PE network.
> 
> Basic measurements must be separated from algorithms.  Tests PingPong,
> PingPing, Barrier, Xover, Cshift and Exchange are low level.  Tests
> Allreduce and Bcast are algorithms.  The algorithms Allreduce and Bcast
> need additional (algorithmic) information in order to be described in
> terms of the basic level benchmarks.
> 
> 
> With respect to low level testing, the round trip exchange of messages,
> as per PingPing and PingPong in PMB or COMMS1 and COMMS2, is not
> characteristic of the lowest level of communication.  This pattern
> is actually rather rare in programming practice.  It is more common
> for tasks to send single messages and/or to receive single messages.
> In this scheme, messages do not make a round trip and there is not
> necessarily caching or other coherency effects.
> 
> The single message passing is a distinctly different case from that
> of round trip tests.  We should be worried that the round trip testing
> might introduce artifacts not characteristic of actual (low level) usage.
> We need a better test of basic bandwidth and latency in order to measure
> and characterize message passing performance.
> 
> 
> Here are suggestions and requirements, in an outline form, for a low
> level benchmark design:
> 
> 
> 
>     I. Single and double (bidirectional) messages.
> 
>        A. Test single messages, not round trips.
>          1. The round trip test is an algorithm and a pattern.  As
>             such it should not be used as the basic low level test of
>             bandwidth.
>          2. Use direct measurements where possible (which is nearly
>             always).  For experimental design, the simplest method is
>             the most desirable and best.
>          3. Do not perform least squares fits A PRIORI.  We know that
>             the various message passing mechanisms are not linear or
>             analytic because different mechanisms are used for different
>             message sizes.  It is not necessarily known before hand
>             where this transition occurs.  Some computer systems have
>             more than two regimes and their boundaries are dynamic.
>          4. Our discussion of least squares fitting is losing track
>             of experimental design versus modeling.  For example, the
>             least squares parameter for t_0 from COMMS1 is not a better
>             estimate of latency than actual measurements (assuming
>             that the timer resolution is adequate).  A "better" way to
>             measure latency is to perform additional DIRECT measurements,
>             repetitions or otherwise, and hence decrease the statistical
>             error.  The fitting as used in the COMMS programs SPREADS
>             error.  It does not reduce error and hence it is not a
>             good technique for measuring such an important parameter
>             as latency.
> 
>        B. Do not test zero length messages.  Though valid, zero length
>           messages are likely to take special paths through library
>           routines.  This special case is not particularly interesting or
>           important.
>           1. In practice, the most common and important message size is 64
>              bits (one word).  The time for this message is the starting
>              point for bandwidth characterization.
> 
>        C. Record all times and use statistics to characterize the message
>           passing time.  That is, do not prime or warm up caches
>           or buffers.  Timings for unprimed caches and buffers give
>           interesting and important bounds.  These timings are also the
>           nearest to typical usage.  
>           1. Characterize message rates by a minimum, maximum, average
>              and standard deviation.
> 
>        D. Test inhomogeneity of the communication network.  The basic
>           message test should be performed for all pairs of PEs.
>    
> 
>    II. Contention.
> 
>        A. Measure network contention relative to all PEs sending and/or
>           receiving messages.
> 
>        B. Do not use high level routines where the algorithm is not known.
>           1. With high level algorithms, we cannot deduce which component
>              of the timing is attributable to the "operation count"
>              and which is attributable to the actual system (hardware)
>              performance.
> 
> 
>   III. Barrier.
> 
>        A. Simple test of barrier time for all numbers of processors.
> 
> 
> 
> 
> Additionally, the suite should be easy to use.  C and Fortran programs
> for direct measurements of message passing times are short and simple.
> These simple tests are of order 100 lines of code and, at least in
> Fortran 90, can be written in a portable and reliable manner.
> 
> The current Parkbench low level suite does not satisfy the above
> requirements.  It is inaccurate, as pointed out by previous letters, and
> uses questionable techniques and methodologies.  It is also difficult to
> use, witness the proliferation of files, patches, directories, libraries
> and the complexity and size of the Makefiles.
> 
> This Low Level suite is a burden for those who are expecting a tool to
> evaluate and investigate computer performance.  The suite is becoming
> a liability for our group.  As such, it should be withdrawn from
> distribution.
> 
> I offer to write, test and submit a new set of programs which satisfy
> most of the above requirements.
> 
> 
> Charles Grassl
> SGI/Cray Research
> Eagan, Minnesota  USA
> 

---------------End of Original Message-----------------

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 01/26/98 - Time: 10:14:38
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Jan 26 11:54:37 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id LAA07118; Mon, 26 Jan 1998 11:54:37 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id LAA18845; Mon, 26 Jan 1998 11:21:36 -0500 (EST)
Received: from timbuk.cray.com (timbuk-fddi.cray.com [128.162.8.102]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id LAA18837; Mon, 26 Jan 1998 11:21:33 -0500 (EST)
Received: from ironwood.cray.com (root@ironwood-fddi.cray.com [128.162.21.36]) by timbuk.cray.com (8.8.7/CRI-gate-news-1.3) with ESMTP id KAA23428 for <parkbench-comm@cs.utk.edu>; Mon, 26 Jan 1998 10:21:26 -0600 (CST)
Received: from magnet.cray.com (magnet [128.162.173.162]) by ironwood.cray.com (8.8.4/CRI-ironwood-news-1.0) with ESMTP id KAA29079 for <parkbench-comm@cs.utk.edu>; Mon, 26 Jan 1998 10:21:24 -0600 (CST)
From: Charles Grassl <cmg@cray.com>
Received: by magnet.cray.com (8.8.0/btd-b3)
          id QAA29329; Mon, 26 Jan 1998 16:21:23 GMT
Message-Id: <199801261621.QAA29329@magnet.cray.com>
Subject: Low Level Benchmarks
To: parkbench-comm@CS.UTK.EDU
Date: Mon, 26 Jan 1998 10:21:23 -0600 (CST)
X-Mailer: ELM [version 2.4 PL24-CRI-d]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit


To:      Parkbench interests
From:    Charles Grassl
Subject: Low Level benchmarks

Date:    26 January, 1998


A short review of where we have been and decided:

Last year we agreed (via email exchanges) that the Parkbench Low Level
benchmark suite is not intended to be an -MPI- -test- suite.  There was a
consensus that we intended to measure low level performance, not algorithm
design or implementation.  This is why the Pallas benchmark, though
useful for testing the performance of several important MPI functions,
is not the basic low level test which we desire.  (I believe that the
performance measurement of the MPI functions is a worthwhile project for
this group, but it needs to be separate from the low level benchmarks.)

At the May, 1997 Parkbench meeting in Knoxville, TN, we unanimously
decided that the measurement and analysis (fitting) portions of the
COMMS programs would be made into a separate program.  This is from Michael
Berry's minutes (23 May 1997):

  After more discussion, the following COMMS changes/outputs were
  unanimously agreed upon:

	  1.  Maximum bandwidth with corresp. message size.
	  2.  Minimum message-passing time with corresp. message
	      size.
	  3.  Time for minimum message length (could be 0, 1, 8,
	      or 32 bytes but must be specified).
	  4.  The software will be split into two programs: one
	      to report the spot measurements and the other for
	      the analysis.

Some of the objections to the Parkbench Low Level codes are that they
are difficult to build, run and analyze.  This is attributable to their
organization and design.  Separating the analysis would greatly simplify
the programs, but the programs still need to be rewritten.



I include in this email message a simple replacement code for COMMS1.
It uses the "back and forth" methodology, reports maximum and minimum
times with corresponding sizes and does not include "analysis".
It is equivalent to the measurement portion of COMMS1, though it is much
simpler and easier to use.

I will comment on the experimental methodology used in this program.

 - The reported times in standard out are actual round trip times.  It is
   a poor experimental practice to modify raw measurements too early.
   We should not mix measured times with derived times.  The practice
   leads to confusion and errors (witness the Pallas benchmark code
   and an earlier version of Parkbench).  If we desire to divide the
   times by two (because of the round trip), then this should be done
   in an analysis portion.  Otherwise we misrepresent round trip times
   as actual single trip times, which they are not.

 - All times are saved and written to unit 7.  The reported times
   in standard out are the first and the last measurements for each
   message size.  The experimental principle is that no data should be
   discarded without analysis.  We can use statistical analysis or
   graphics or fitting routines to analyze the raw output.  (I favor
   graphics and statistical analysis.)  If we look at the raw output,
   we will see interesting features, such as the actual "warm up" count
   (usually five or less repetitions) and the distribution of times
   (not Gaussian!).

 - Each repetition is individually timed.  If the timer does not have
   adequate resolution, then the times for a number of repetitions, from
   two to all, can be aggregated and used.  This aggregation can be done
   in the analysis phase.  (Most computers should be able to time and
   resolve single round trip messages.)  This aggregation should not be
   done before adequate analysis or evidence that it needs to be done.

 - Each message size is tested with the same number of repetitions.  We
   prefer to keep this number constant so that the experimental sampling
   error (proportional to 1/sqrt[repetitions]) is the same for each
   message size.  Also, it is difficult to cleanly and simply adjust
   the repetition count relative to the message size.
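The analysis phase implied by these principles can be sketched in a few
lines.  The Python below is illustrative only (the function name and the
timer-resolution threshold are my own choices, not part of COMMS1): the
raw round-trip times are kept intact, and halving, aggregation and
statistics are all derived afterwards.

```python
import math

def analyze(round_trip_times, n_mess=2, timer_resolution=1e-6):
    """Derive statistics from raw round-trip times (seconds).

    All derivation -- halving, aggregation, statistics -- happens in
    this analysis phase, never in the measurement program itself.
    """
    times = list(round_trip_times)
    # Aggregate pairs of repetitions only if the timer resolution is
    # inadequate to resolve a single round trip (threshold is a guess).
    if min(times) < 10.0 * timer_resolution:
        k = 2
        times = [sum(times[i:i + k]) / k
                 for i in range(0, len(times) - len(times) % k, k)]
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / (n - 1)
    return {
        "min": min(times),
        "max": max(times),
        "mean": mean,
        "stddev": math.sqrt(var),
        # sampling error of the mean falls as 1/sqrt(repetitions)
        "std_error": math.sqrt(var / n),
        # per-message time is a derived quantity: round trip / n_mess
        "per_message_mean": mean / n_mess,
    }

# made-up raw data: note the slower first ("warm up") repetition
stats = analyze([2.1e-5, 1.3e-5, 1.2e-5, 1.2e-5, 1.3e-5])
```

Run over the unit 7 output, such a phase exposes exactly the features
mentioned above: the warm-up count and the (non-Gaussian) distribution.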

I also have one replacement program for both COMMS2 and COMMS3 (note
that the COMMS2 measurement is a subset of COMMS3 measurements).
More on that later.

Charles Grassl
SGI/Cray Research
Eagan, Minnesota USA


-----------------------------------------------------------------------

      program       Single ! Compile: f90 file.f -l mpi
      character*40  Title
      data          Title/' Single Messages --- MPI'/

      integer       log2nmax,nmax,n_repetitions
      parameter     (log2nmax=18,nmax=2**log2nmax,n_repetitions=50)
      integer       n_starts,n_mess
      parameter     (n_starts=2,n_mess=2)
      include       'mpif.h'
      integer       ier,status(MPI_STATUS_SIZE)
      integer       my_pe,npes
      integer       log2n,n,nrep,i
      real*8        radian
      real*8        t_call,timer,tf(0:n_repetitions)
      real*8        A(0:nmax-1)
      save          A

      call mpi_init( ier )
      call mpi_comm_rank(MPI_COMM_WORLD, my_pe, ier)
      call mpi_comm_size(MPI_COMM_WORLD, npes, ier)

      radian=1
      do i=0,nmax-1
        A(i) = acos(radian)*i
      end do
        
      tf(0) = timer()
      do nrep=1,n_repetitions
        tf(nrep) = timer()
      end do
      t_call=(tf(n_repetitions)-tf(0))/n_repetitions

      if (my_pe.eq.0) then
        call table_top(Title,npes,n_starts,n_mess,n_repetitions,t_call)
      end if

      do log2n=0,log2nmax
        n = 2**log2n
        call mpi_barrier( MPI_COMM_WORLD, ier )
        tf(0) = timer()
        do nrep=1,n_repetitions
          if (my_pe.eq.1) then
            call MPI_SEND(A,8*n,MPI_BYTE,0,10,MPI_COMM_WORLD,ier)
            call MPI_RECV(A,8*n,MPI_BYTE,0,20,MPI_COMM_WORLD,status,ier)
          end if
          if (my_pe.eq.0) then
            call MPI_RECV(A,8*n,MPI_BYTE,1,10,MPI_COMM_WORLD,status,ier)
            call MPI_SEND(A,8*n,MPI_BYTE,1,20,MPI_COMM_WORLD,ier)
          end if
          tf(nrep) = timer()
        end do
        if (my_pe.eq.0) then
          call table_body(8*n,n_mess,n_repetitions,tf,t_call)
        end if
      end do

      call mpi_finalize(ier)
      end
      subroutine table_top( Title,npes,
     .     n_starts,n_mess,n_repetitions,t_call)
      integer       M
      parameter     (M = 1 000 000)
      character*40  Title
      integer       npes,n_starts,n_mess,n_repetitions
      real*8        t_call

      write(6,9010) Title,npes,n_starts,n_mess,n_repetitions,t_call*M

      return
9010  format(//a40,
     .  // ' Number of PEs:  ',i8
     .  // ' Starts:         ',i8,
     .   / ' Messages:       ',i8,
     .   / ' Repetitions:    ',i8,
     .   / ' Timer overhead: ',f8.3,' microsecond',
     .  // 8x,'            First      ',
     .        '              Last     ',
     .   /'  Length',2x,2('      Time       Rate ',1x),
     .   /' [Bytes]',2x,2(' [Microsec.] [Mbyte/s]',1x),
     .   /' ',8('-'),2x,2(21('-'),2x))
      end
      subroutine table_body(n_byte,n_mess,n_repetitions,tf,t_call)
      integer       M
      parameter     (M = 1 000 000)
      integer   n_byte,n_mess,n_repetitions,i
      real*8    tf(0:n_repetitions)
      real*8    t_call
      real*8    t_first,t_last

      t_first = (tf(1)-tf(0))-t_call
      t_last  = (tf(n_repetitions)-tf(n_repetitions-1))-t_call
      write(6,9020) n_byte,t_first*M,n_mess*n_byte/(t_first*M),
     .                      t_last *M,n_mess*n_byte/(t_last *M)
      write(7) n_byte,n_repetitions,n_mess
      write(7) ((tf(i)-tf(i-1))-t_call,i=1,n_repetitions)

      return
9020  format(i8,     2x,2(f10.1,1x,f10.0,2x))
      end

From owner-parkbench-comm@CS.UTK.EDU Mon Jan 26 13:06:36 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id NAA08767; Mon, 26 Jan 1998 13:06:36 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA23400; Mon, 26 Jan 1998 12:31:16 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA23166; Mon, 26 Jan 1998 12:30:05 -0500 (EST)
Received: from mordillo ([195.102.195.125]) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA15447; Mon, 26 Jan 98 17:31:15 GMT
Date: Mon, 26 Jan 98 17:23:08 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Fw: Re: Low Level Benchmarks 
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <34CCB99F.2B3C3D63@cumbria.eng.sun.com> 
Message-Id: <Chameleon.885835450.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

This came directly to me... The rest of Parkbench are probably interested
in Bodo's comments.

Mark

--- On Mon, 26 Jan 1998 08:28:15 -0800  Bodo Parady - SMCC Performance Development 
<Bodo.Parady@Eng.Sun.COM> wrote:
> The key items to find are:
> 
>         Lock time (defined as time to release a lock remotely)
>             Example would be reader spinning on memory, waiting
>             for change in memory word, or receipt of interrupt.
>             This is the effective ping-pong half time.  Sadly
>             subroutine and library call overhead can render
>             this result meaningless.
> 
>             Measuring one way rates is no good here since the response
>             time must be factored in.  This is a two-way transfer.
> 
>         Channel rate (defined as large block transfer rate).
> 
>         Block size at half channel rate.
> 
>         Block size at twice lock time latency.
> 
>         Full curve, stepping at 1, 2, 4, 8, 16, ..., 2^n byte block sizes
>                 at full issue rate.  This is probably the least important
>                 since it involves coalescence of transmitted data.
> 
> The fear is that, given the limitations of MPI/PVM, and to some degree
> of C and Fortran, accurate measures of these quantities may
> not be practical.
> 
> Regards.
> 
>                 Bodo Parady
> 
> Mark Baker wrote:
> 
> > Charles,
> >
> > Thanks for your thoughts and experiences with the Pallas PMB codes -
> > I will  forward them to the authors...  The main points in favour of
> > the PMB codes are that they are in C and potentially produce results
> > for a variety of MPI calls... Obviously if the results they produce are
> > flawed...
> >
> > Regarding new low-level codes I would be in favour of taking up your
> > kind offer of writing a set of codes in C/Fortran. I guess the main
> > problem is getting a consensus on the methodology and measurements
> > that are used with these codes.
> >
> > Maybe we can decide that a number of actions should be undertaken...
> >
> > 1) It seems clear that no one is 100% happy with the current version
> >    of the low-level codes. So, this implies that they need to be
> >    replaced !?
> >
> > 2) If we are going to replace the codes we can go down a couple of routes;
> >    start from scratch, replace with Roger's new codes or some combination of
> >    both...
> >
> > 3) I would be happy to see us start from scratch and  create
> >    C/Fortran codes where the methodology and design of each can be
> >    "hammered out" by discussion first and then implemented
> >    (and iterated as necessary).
> >
> > 4) Assuming that we want to go down this route, I suggest we make a starting
> >    point of Charles' "suggestions and requirements for the low level
> >    benchmark design" - towards the end of this email. I am happy to
> >    put these words on the web and update/change them as our discussions
> >    evolve...
> >
> > 5) Charles has offered his services to help write/design/test these new codes -
> >    I'm willing to offer my services in a similar fashion. I'm sure that others
> >    interested in the low-level codes could contribute something here as well.
> >
> > Overall, it seems clear to me that we have enough energy and manpower to
> > produce a new set of low-level codes whose methodology and design are
> > correct and relevant to today's systems...
> >
> > I look forward to your comments...
> >
> > Regards
> >
> > Mark
> >
> > --- On Thu, 15 Jan 1998 11:11:39 -0600 (CST)  Charles Grassl <cmg@cray.com> wrote:
> > >
> > > To:      Parkbench interests
> > > From:    Charles Grassl
> > > Subject: Low Level benchmarks
> > >
> > > Date:    15 January, 1998
> > >
> > >
> > > Mark, thank you for pointing us to the PMB benchmark.  It is well written
> > > and coded, but has some discrepancies and shortcomings.  My comments
> > > lead to suggestions and recommendation regarding low level communication
> > > benchmarks.
> > >
> > > First, in program PMB the PingPong tests are twice as fast (in time)
> > > as the corresponding message length tests in the PingPing tests (as run
> > > on a CRAY T3E).  The calculation of the time and bandwidth is incorrect
> > > by a factor of two in one of the programs.
> > >
> > > This error can be fixed by recording, using and reporting the actual
> > > time, amount of data sent and their ratio.  That is, the time should not
> > > be divided by two in order to correct for a round trip.  This recorded
> > > time is for a round trip message, and is not precisely the time for
> > > two messages.  Half the round trip message passing time, as reported in
> > > the PMB tests, is not the time for a single message and should not be
> > > reported as such.  This same erroneous technique is used in the COMMS1
> > > and COMMS2 benchmarks.  (Parkbench is thus responsible for propagating
> > > this incorrect methodology.)
> > >
> > > In program PMB, the testing procedure performs a "warm up".  This
> > > procedure is poor testing methodology because it discards important
> > > data.  Testing programs such as this should record all times and calculate
> > > the variance and other statistics in order to perform error analysis.
> > >
> > > Program PMB does not measure contention or allow extraction of network
> > > contention data.  Tests "Allreduce" and "Bcast" and several others
> > > stress the inter-PE communication network with multiple messages,
> > > but it is not possible to extract information about the contention from
> > > these tests.  The MPI routines for Allreduce and Bcast have algorithms
> > > which change with respect to the number of PEs and message lengths.  Hence,
> > > without detailed information about the specific algorithms used, we cannot
> > > extract information about network performance or further characterize
> > > the inter-PE network.
> > >
> > > Basic measurements must be separated from algorithms.  Tests PingPong,
> > > PingPing, Barrier, Xover, Cshift and Exchange are low level.  Tests
> > > Allreduce and Bcast are algorithms.  The algorithms Allreduce and Bcast
> > > need additional (algorithmic) information in order to be described in
> > > terms of the basic level benchmarks.
> > >
> > >
> > > With respect to low level testing, the round trip exchange of messages,
> > > as per PingPing and PingPong in PMB or COMMS1 and COMMS2, is not
> > > characteristic of the lowest level of communication.  This pattern
> > > is actually rather rare in programming practice.  It is more common
> > > for tasks to send single messages and/or to receive single messages.
> > > In this scheme, messages do not make a round trip and there is not
> > > necessarily caching or other coherency effects.
> > >
> > > The single message passing is a distinctly different case from that
> > > of round trip tests.  We should be worried that the round trip testing
> > > might introduce artifacts not characteristic of actual (low level) usage.
> > > We need a better test of basic bandwidth and latency in order to measure
> > > and characterize message passing performance.
> > >
> > >
> > > Here are suggestions and requirements, in an outline form, for a low
> > > level benchmark design:
> > >
> > >
> > >
> > >     I. Single and double (bidirectional) messages.
> > >
> > >        A. Test single messages, not round trips.
> > >          1. The round trip test is an algorithm and a pattern.  As
> > >             such it should not be used as the basic low level test of
> > >             bandwidth.
> > >          2. Use direct measurements where possible (which is nearly
> > >             always).  For experimental design, the simplest method is
> > >             the most desirable and best.
> > >          3. Do not perform least squares fits A PRIORI.  We know that
> > >             the various message passing mechanisms are not linear or
> > >             analytic because different mechanisms are used for different
> > >             message sizes.  It is not necessarily known beforehand
> > >             where this transition occurs.  Some computer systems have
> > >             more than two regimes and their boundaries are dynamic.
> > >          4. Our discussion of least squares fitting is losing track
> > >             of experimental design versus modeling.  For example, the
> > >             least squares parameter for t_0 from COMMS1 is not a better
> > >             estimate of latency than actual measurements (assuming
> > >             that the timer resolution is adequate).  A "better" way to
> > >             measure latency is to perform additional DIRECT measurements,
> > >             repetitions or otherwise, and hence decrease the statistical
> > >             error.  The fitting as used in the COMMS programs SPREADS
> > >             error.  It does not reduce error and hence it is not a
> > >             good technique for measuring such an important parameter
> > >             as latency.
> > >
> > >        B. Do not test zero length messages.  Though valid, zero length
> > >           messages are likely to take special paths through library
> > >           routines.  This special case is not particularly interesting or
> > >           important.
> > >           1. In practice, the most common and important message size is 64
> > >              bits (one word).  The time for this message is the starting
> > >              point for bandwidth characterization.
> > >
> > >        C. Record all times and use statistics to characterize the message
> > >           passing time.  That is, do not prime or warm up caches
> > >           or buffers.  Timings for unprimed caches and buffers give
> > >           interesting and important bounds.  These timings are also the
> > >           nearest to typical usage.
> > >           1. Characterize message rates by a minimum, maximum, average
> > >              and standard deviation.
> > >
> > >        D. Test inhomogeneity of the communication network.  The basic
> > >           message test should be performed for all pairs of PEs.
> > >
> > >
> > >    II. Contention.
> > >
> > >        A. Measure network contention relative to all PEs sending and/or
> > >           receiving messages.
> > >
> > >        B. Do not use high level routines where the algorithm is not known.
> > >           1. With high level algorithms, we cannot deduce which component
> > >              of the timing is attributable to the "operation count"
> > >              and which is attributable to the actual system (hardware)
> > >              performance.
> > >
> > >
> > >   III. Barrier.
> > >
> > >        A. Simple test of barrier time for all numbers of processors.
> > >
> > >
> > >
> > >
> > > Additionally, the suite should be easy to use.  C and Fortran programs
> > > for direct measurements of message passing times are short and simple.
> > > These simple tests are of order 100 lines of code and, at least in
> > > Fortran 90, can be written in a portable and reliable manner.
> > >
> > > The current Parkbench low level suite does not satisfy the above
> > > requirements.  It is inaccurate, as pointed out by previous letters, and
> > > uses questionable techniques and methodologies.  It is also difficult to
> > > use, witness the proliferation of files, patches, directories, libraries
> > > and the complexity and size of the Makefiles.
> > >
> > > This Low Level suite is a burden for those who are expecting a tool to
> > > evaluate and investigate computer performance.  The suite is becoming
> > > a liability for our group.  As such, it should be withdrawn from
> > > distribution.
> > >
> > > I offer to write, test and submit a new set of programs which satisfy
> > > most of the above requirements.
> > >
> > >
> > > Charles Grassl
> > > SGI/Cray Research
> > > Eagan, Minnesota  USA
> > >
> >
> > ---------------End of Original Message-----------------
> >
> > -------------------------------------
> > CSM, University of Portsmouth, Hants, UK
> > Tel: +44 1705 844285    Fax: +44 1705 844006
> > E-mail: mab@sis.port.ac.uk
> > Date: 01/26/98 - Time: 10:14:38
> > URL http://www.sis.port.ac.uk/~mab/
> > -------------------------------------
> 
> 
> 
> 

---------------End of Original Message-----------------

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 01/26/98 - Time: 17:23:08
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Jan 26 14:08:38 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id OAA11289; Mon, 26 Jan 1998 14:08:37 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id NAA02837; Mon, 26 Jan 1998 13:52:54 -0500 (EST)
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id NAA02817; Mon, 26 Jan 1998 13:52:50 -0500 (EST)
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id NAA11755; Mon, 26 Jan 1998 13:52:49 -0500 (EST)
Date: Mon, 26 Jan 1998 13:52:49 -0500 (EST)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199801261852.NAA11755@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Fw: Re: Low Level Benchmarks 
In-Reply-To: Mail from 'Mark Baker  <mab@sis.port.ac.uk>'
      dated: Mon, 26 Jan 98 17:23:08 GMT
Cc: worley@haven.EPM.ORNL.GOV

(From Charles Grassl)

> Last year we agreed (via email exchanges) that the Parkbench Low Level
> benchmark suite is not intended to be an -MPI- -test- suite.  There was a
> consensus that we intended to measure low level performance, not algorithm
> design or implementation. 

(From Bodo Parady via Mark Baker)

> The fear is that, given the limitations of MPI/PVM, and to some degree
> of C and Fortran, accurate measures of these quantities may
> not be practical.
> 

I have a problem with attempting to determine low level communication
performance parameters independent of the communication library when it

a) is such a difficult task (I doubt that any portable program will be
   "accurate enough" across all the interesting platforms.) 

b) does not reflect what users would see in practice (since they will be
   using MPI or PVM in C or Fortran).

Am I missing something? The primary utility (for me) of the low level
benchmarks is to help explain the performance observed in the Parkbench
kernels and compact applications, or in my own codes. What level of accuracy
is required for such an application? Are more accurate or detailed
measurements useful or doable? 

Upon reflection, such low(er) level performance data would be useful to the
developer of a communication library, to help evaluate its performance, but
that appears to require system-specific measurements (and system-specific
interpretation). Is this really something we want to attempt?

Pat Worley










From owner-parkbench-comm@CS.UTK.EDU Thu Jan 29 16:29:33 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id QAA19023; Thu, 29 Jan 1998 16:29:33 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id QAA09768; Thu, 29 Jan 1998 16:18:48 -0500 (EST)
Received: from haven.EPM.ORNL.GOV (haven.epm.ornl.gov [134.167.12.69]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id QAA09756; Thu, 29 Jan 1998 16:18:45 -0500 (EST)
Received: (from worley@localhost) by haven.EPM.ORNL.GOV (8.8.3/8.8.3) id QAA01325; Thu, 29 Jan 1998 16:18:43 -0500 (EST)
Date: Thu, 29 Jan 1998 16:18:43 -0500 (EST)
From: Pat Worley <worley@haven.EPM.ORNL.GOV>
Message-Id: <199801292118.QAA01325@haven.EPM.ORNL.GOV>
To: parkbench-comm@CS.UTK.EDU
Subject: Re: Fw: Re: Low Level Benchmarks 
In-Reply-To: Mail from 'Mark Baker  <mab@sis.port.ac.uk>'
      dated: Mon, 26 Jan 98 17:23:08 GMT
Cc: worley@haven.EPM.ORNL.GOV

In a private exchange, Charles Grassl made a comment that he may come to
regret: 

" We need more input, such as yours, as to what are the important parameters
  and what accuracy is needed. "

so here are some random comments. 

I have been organizing my own performance data over the last couple of weeks.
I never paid too much attention to the detailed output of my own ping-ping
and ping-pong tests because it was not the end product of the research. It
has been enlightening to look at it now. The entry point is

  http://www.epm.ornl.gov/~worley/studies/pt2pt.html

I tried a couple of different fitting techniques, but decided that fits told
me nothing that I was interested in. What I have found mildly interesting is
to measure statistics of the data, and try to build a performance model using
those. The difference is that the interpretation and value of the statistics 
(maximum observed bandwidth, time to send 0 length message, etc.)
are not functions of any model error. The problem with fitting the data is
that, no matter how often I tell myself that it is simply a compact
representation of the data, I keep wanting to assign meaning to the model
parameters and use them in interplatform comparisons. 

In summary, I have changed my mind. I no longer support even simple fits
to the data unless well-defined statistical measures of the data are also
included (and emphasized).
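For example, such statistics are direct functions of the measurements,
so their meaning does not depend on any fitted model.  A sketch (the
numbers below are made up for illustration, not taken from the study
above):

```python
# (message size in bytes, measured time in seconds): made-up data.
data = [(8, 12e-6), (1024, 20e-6), (65536, 320e-6), (1048576, 4.2e-3)]

# Time for the shortest message actually sent: a direct measurement,
# not an extrapolated intercept from a least-squares fit.
t_shortest = data[0][1]

# Maximum observed bandwidth (Mbyte/s), taken over the raw data.
max_bandwidth = max(n / t for n, t in data) / 1e6
```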

Pat Worley




From owner-parkbench-comm@CS.UTK.EDU Mon Feb  9 05:05:12 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id FAA29859; Mon, 9 Feb 1998 05:05:11 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id EAA10483; Mon, 9 Feb 1998 04:57:14 -0500 (EST)
Received: from gatekeeper.pallas.de (gatekeeper.pallas.de [194.45.33.1]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id EAA10476; Mon, 9 Feb 1998 04:57:07 -0500 (EST)
Received: from mailhost.pallas.de (gatekeeper [194.45.33.1])
	by gatekeeper.pallas.de (SMI-8.6/SMI-SVR4) with SMTP id KAA18803;
	Mon, 9 Feb 1998 10:50:10 +0100
Received: from schubert.pallas.de by mailhost.pallas.de (SMI-8.6/SMI-SVR4)
	id KAA03909; Mon, 9 Feb 1998 10:50:07 +0100
Received: from localhost by schubert.pallas.de (SMI-8.6/SMI-SVR4)
	id KAA11268; Mon, 9 Feb 1998 10:46:57 +0100
Date: Mon, 9 Feb 1998 10:46:45 +0100 (MET)
From: Hans Plum <hans@pallas.de>
X-Sender: hans@schubert
Reply-To: Hans Plum <hans@pallas.de>
To: cmg@cray.com, mab@sis.port.ac.uk, parkbench-comm@CS.UTK.EDU
cc: snelling@fecit.co.uk
Subject: Re: Low Level Benchmarks  (fwd)
In-Reply-To: <Pine.SOL.3.95q.980127105513.4334A-100000@sun5>
Message-ID: <Pine.SOL.3.95q.980209103111.10381D-100000@schubert>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="MimeMultipartBoundary"

--MimeMultipartBoundary
Content-Type: TEXT/PLAIN; charset=US-ASCII

Hi,

I am the "PMB person" at PALLAS GmbH.  I have heard about your discussions.
First note that there is a new version, PMB 1.2; see

http://www.pallas.de/pages/pmb.htm

Also look at PMB1.2_doc.ps.gz, where we try to give the reasoning for
all decisions made in PMB.  We think nothing has been designed sloppily.

PMB has been developed from the point of view of an application developer,
which I am.  Of course a single person's view is limited, but for myself
the information given by PMB provides a solid base for algorithmic
estimates and decisions.  That is exactly what we wanted: something EASY
(and not COMPLETE) that covers maybe 80% of realistic situations.

-------------------------------------------------------------   ---/---
Dr Hans-Joachim Plum        phone      : +49-2232-1896-0          / /
PALLAS GmbH                 direct line: +49-2232-1896-18        / / /
Hermuelheimer Strasse 10    fax        : +49-2232-1896-29       / / / /
D-50321 Bruehl              email      : plum@pallas.de          / / /  
Germany                     URL        : http://www.pallas.de     / / PALLAS
-------------------------------------------------------------   ---/---


--MimeMultipartBoundary--

From owner-parkbench-comm@CS.UTK.EDU Wed Apr 22 07:43:42 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id HAA03238; Wed, 22 Apr 1998 07:43:41 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA03111; Wed, 22 Apr 1998 07:05:23 -0400 (EDT)
Received: from post.mail.demon.net (post-10.mail.demon.net [194.217.242.39]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA03104; Wed, 22 Apr 1998 07:05:21 -0400 (EDT)
Received: from minnow.demon.co.uk ([158.152.73.63]) by post.mail.demon.net
           id aa1028865; 22 Apr 98 11:00 GMT
Message-ID: <XmM5ECAn2cP1EwUB@minnow.demon.co.uk>
Date: Wed, 22 Apr 1998 11:59:51 +0100
To: parkbench-comm@CS.UTK.EDU
From: Roger Hockney <roger@minnow.demon.co.uk>
Subject: Announcing PICT2.1 - Now fully Operational
MIME-Version: 1.0
X-Mailer: Turnpike Version 3.03a <kRL7V2isFfDmnKSZb08I5Tyfx$>


To: the Parkbench discussion group

From: Roger

                   ANNOUNCING PICT 2.1 (1 Mar 1998)
                   --------------------------------

I am pleased to announce the first fully-functional version of the 
Parkbench Interactive Curve-Fitting Tool (PICT).

Provision is made for a wide range of screen sizes in pixels by
allowing the user to make a suitable choice in the opening HTML page.

All buttons now work. In particular Jack can have his least-squares 
fitting of the 2 parameters directly from the tool, and this can be 
performed over partial ranges of the data as required. The same 
applies to the three-point fitting procedure used to obtain the 
3-parameter fits.
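The 2-parameter fit is of the linear timing model t(n) = t0 + n/r_inf,
with latency t0 and asymptotic bandwidth r_inf, so the least-squares
fit of the two parameters is ordinary linear regression of time against
message length.  A minimal sketch, with made-up data lying exactly on
the model:

```python
# Hockney's 2-parameter timing model:  t(n) = t0 + n / r_inf.
# Fitting t0 and 1/r_inf by least squares is linear regression of
# time (seconds) against message length (bytes).  Made-up data:
data = [(0, 10e-6), (1000, 14e-6), (4000, 26e-6), (16000, 74e-6)]

m = len(data)
sx = sum(n for n, _ in data)
sy = sum(t for _, t in data)
sxx = sum(n * n for n, _ in data)
sxy = sum(n * t for n, t in data)

slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # = 1 / r_inf
t0 = (sy - slope * sx) / m                         # latency estimate
r_inf = 1.0 / slope                                # bytes/second
n_half = r_inf * t0   # message length achieving half of r_inf
```

The tool performs the same fit interactively, over whatever data range
is selected.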

There is also a nice "Temperature Gauge" feature that helps you 
minimise the error during manual fitting.

The results of these fits can be assembled in a results file and 
annotated using the SAVE buttons. Under MSIE I find I am able to 
store these results in my local disk file system using SAVE as ...

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The methodology of the 2-parameter curve fitting is given in detail 
in my book "The Science of Computer Benchmarking", see: 

              http://www.siam.org/catalog/mcc07/hockney.htm 

The 3-parameter fit was described quite fully in my talk to the 
11 Sep 1997 Parkbench meeting. I have finally written this up with 
pretty pictures for the PEMCS Web Journal. Look at:

    http://hpc-journals.ecs.soton.ac.uk/Workshops/PEMCS/fall-97/
           talks/Roger-Hockney/perfprof1.html

To try out PICT 2.1 please first try my own Demon Web space which 
has a counter from which I can judge usage: 

       http://www.minnow.demon.co.uk/pict/source/pict2a.html

If this gives problems, it is also mounted on the University of 
Westminster server:

   http://perun.hscs.wmin.ac.uk/LocalInfo/pict/source/pict2a.html

We expect soon to make it available on the Southampton server.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

PICT 2.1 has been tested by a small number of friends. Most 
problems and frustrations arise from slowness of either the 
server or the user's computer.

If the download from Demon is slow or appears to hang, try the other
server or try Demon later. Please do not conclude that the applet is 
broken. I am confident it is not. A 10 to 20 second wait is 
normal when bringing up the requested graphical window/frame 
even on a good day.

Once the graphical window is on your computer and the applet is 
running, the speed is determined by the speed of your computer.

You may even disconnect from the Web at this stage and continue 
curve fitting with the applet on the data already displayed. If you 
want new data, you must, of course, reconnect to the Web and 
use the GET DATA at URL button.

Experience shows that the PICT applet will not respond 
satisfactorily on a computer slower than a 100 MHz clock. 
This is because many complex calculations must be performed 
as you drag the curves around the data. MSIE seems to work
noticeably faster than Netscape on my Win95 PC. There is no
cure for this except to use a faster computer. But again,
please do not think the applet is broken.

Please report experiences good or bad to:

              roger@minnow.demon.co.uk

Constructive suggestions for improvement are also welcome. 
-- 
Roger Hockney.  Checkout my new Web page at URL   http://www.minnow.demon.co.uk

From owner-parkbench-comm@CS.UTK.EDU Sun Jun 21 10:02:47 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA22167; Sun, 21 Jun 1998 10:02:47 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA06272; Sun, 21 Jun 1998 09:47:56 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA06265; Sun, 21 Jun 1998 09:47:54 -0400 (EDT)
Received: from mordillo (p4.nas1.is5.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA17767; Sun, 21 Jun 98 14:50:05 BST
Date: Sun, 21 Jun 98 14:43:42 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: New PEMCS papers
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.898436744.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear All,

Two new papers have just been published by the PEMCS journal...

3. Comparing the Performance of MPI on the Cray T3E-900, the Cray
   Origin 2000 and the IBM P2SC, by Glenn R. Luecke and James J. Coyle,
   Iowa State University, Ames, Iowa 50011-2251, USA.
4. EuroBen Experiences with the SGI Origin 2000 and the Cray T3E, by
   A.J. van der Steen, Computational Physics, Utrecht University,
   Holland.

See http://hpc-journals.ecs.soton.ac.uk/PEMCS/Papers/

Regards

Mark

-------------------------------------
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 06/21/98 - Time: 14:43:42
URL http://www.sis.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Fri Sep 11 12:05:18 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA19578; Fri, 11 Sep 1998 12:05:18 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id LAA20703; Fri, 11 Sep 1998 11:54:29 -0400 (EDT)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id LAA20636; Fri, 11 Sep 1998 11:53:20 -0400 (EDT)
Received: from mordillo (p36.nas1.is5.u-net.net) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA11111; Fri, 11 Sep 98 16:48:17 BST
Date: Fri, 11 Sep 98 14:38:08 +0000
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: CPE - Call for papers - Message Passing Interface-based Parallel Programming with Java
To: javagrandeforum@npac.syr.edu,
        "'mpi-nt-users@erc.msstate.edu'"  <mpi-nt-users@ERC.MsState.EDU>,
        "Dr. Kenneth A. Williams"  <williams@ncat.edu>,
        "Stephen L. Scott"  <sscott@msr.epm.ornl.gov>,
        "Aad J. van der Steen"  <steen@fys.ruu.nl>,
        Advanced Java  <advanced-java@xcf.berkeley.edu>,
        Alexander Reinefeld  <ar@uni-paderborn.de>,
        Andy Grant  <andyg@manchester.sgi.com>,
        Anne Trefethen  <anne@nag.co.uk>, Bryan Capenter  <dbc@npac.syr.edu>,
        Charles Grassl  <cmg@cray.com>, Dave Beckett  <D.J.Beckett@ukc.ac.uk>,
        David Snelling  <snelling@fecit.co.uk>,
        DIS Everyone  <dis-all@sis.port.ac.uk>, fagg@CS.UTK.EDU,
        gentzsch@genias.de, Guy Robinson  <robinson@arsc.edu>,
        Hon W Yau  <hwyau@epcc.ed.ac.uk>, hpvm@cs.uiuc.edu,
        Jack Dongarra  <dongarra@CS.UTK.EDU>, java-for-cse@npac.syr.edu,
        Joao Gabriel Silva  <jgabriel@dei.uc.pt>,
        jtap-club-clusters@mailbase.ac.uk,
        Ken Hawick  <khawick@cs.adelaide.edu.au>,
        Mike Berry  <berry@CS.UTK.EDU>, mpijava-users@npac.syr.edu,
        owner-grounds@mail.software.ibm.com, parkbench-comm@CS.UTK.EDU,
        partners@globus.org, Paul Messina  <messina@cacr.caltech.edu>,
        Roland Wismueller  <wismuell@informatik.tu-muenchen.de>,
        Steve Larkin - AVS  <stevel@avsuk.com>,
        Terri Canzian  <kea@cacr.caltech.edu>,
        Tony Hey  <ajgh@ecs.soton.ac.uk>, topic@mcc.ac.uk,
        Vaidy Sunderam  <vss@mathcs.emory.edu>,
        Vladimir Getov  <vsg@ecs.soto.ac.uk>,
        William Gropp  <gropp@mcs.anl.gov>
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
Message-Id: <Chameleon.905526764.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Dear Colleague,

Firstly, I apologise for any cross-posting of this email.

If this CFP is outside your field, we would appreciate your forwarding it
to colleagues who may be interested.

This CFP can be found at http://hpc-journals.ecs.soton.ac.uk/CPE/Special/MPI-Java/

Regards

Dr Mark Baker
University of Portsmouth, UK

----------------------------------------------------------------------------
Call For Papers

Special Issue of Concurrency: Practice and Experience

Message Passing Interface-based Parallel Programming with Java

Guest Editors
Anthony Skjellum (MPI Software Technology, Inc.)
Mark Baker (University of Portsmouth)

A special issue of Concurrency: Practice and Experience (CPE) is being
planned for Fall 1999. Papers submitted and accepted for this issue
will be published by John Wiley & Sons Ltd. in the CPE Journal and in
addition will be made available electronically via the WWW.

Background 
  
Recently there has been a great deal of interest in the idea that Java
may be a good language for scientific and engineering computation, and
in particular for parallel computing. The claims made on behalf of
Java - that it is simple, efficient and platform-neutral, a natural
language for network programming - make it potentially attractive to
scientific programmers hoping to harness the collective computational
power of networks of workstations and PCs, or even of the Internet.

A basic prerequisite for parallel programming is a good communication
API. Java comes with various ready-made packages for communication,
notably an easy-to-use interface to BSD sockets, and the Remote Method
Invocation (RMI) mechanism. Interesting as these interfaces are, it is
questionable whether parallel programmers will find them especially
convenient. Sockets and remote procedure calls have been around for
about as long as parallel computing has been fashionable, and neither
of them has been popular in that field. Both communication models are
optimized for client-server programming, whereas the parallel computing
world is mainly concerned with "symmetric" communication, occurring in
groups of interacting peers.

This symmetric model of communication is captured in the successful
Message Passing Interface standard (MPI), established a few years ago.
MPI directly supports the Single Program Multiple Data (SPMD) model of
parallel computing, wherein a group of processes cooperate by executing
identical program images on local data values. Reliable point-to-point
communication is provided through a shared, group-wide communicator,
instead of socket pairs. MPI allows numerous blocking, non-blocking,
buffered or synchronous communication modes. It also provides a library
of true collective operations (broadcast is the most trivial example).
An extended standard, MPI 2, allows for dynamic process creation and
access to memory in remote processes.
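The SPMD model described above - identical program images working on local data, with results combined by a collective operation - can be emulated without an MPI installation. A minimal Python sketch, using multiprocessing in place of an MPI communicator (all function and variable names here are illustrative, not part of any MPI binding; this shows the programming model only):

```python
# SPMD-style sketch: each "rank" runs the same function on its own
# slice of the data; a queue stands in for an MPI-style reduce.
# This emulates the model only -- it is not an MPI implementation.
from multiprocessing import Process, Queue

def worker(rank, size, data, results):
    # Cyclic decomposition: each rank owns every size-th element.
    chunk = data[rank::size]              # local data values
    local_sum = sum(x * x for x in chunk)
    results.put(local_sum)                # contribution to the "reduce"

def spmd_sum_of_squares(data, size=4):
    results = Queue()
    procs = [Process(target=worker, args=(r, size, data, results))
             for r in range(size)]
    for p in procs:
        p.start()
    total = sum(results.get() for _ in range(size))  # root combines
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(spmd_sum_of_squares(list(range(10))))  # 285 = 0^2 + ... + 9^2
```

In real MPI the decomposition and the reduction would be expressed with a communicator and a collective call rather than a shared queue; the point here is only that every process executes the same program on different local data.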

Call For Papers 

Papers on designs, experience, and results concerning the use of the
Message Passing Interface (MPI) with Java are sought for this special
issue of Concurrency: Practice and Experience. Developing a clear
understanding of the opportunities, challenges, and state of the art in
scalable, peer-oriented messaging with Java is of interest and value to
both the distributed computing and high performance computing
communities.

Topics of interest for this special issue include but are not limited to: 

-- Practical systems that use MPI and Java to solve real distributed
   high performance computing problems.
-- Designs of systems for combining MPI-type functionality with Java.
-- Approaches to APIs for object-oriented, group-oriented message passing with Java.
-- Efforts to combine MPI with CORBA in a Java environment.
-- Efforts to utilize aspects of the emerging MPI/RT standard in the
   Java context.
-- Efforts to achieve MPI interoperability (IMPI) using Java.
-- Issues, and both tactical and strategic solutions, concerning the
   MPI-1 and MPI-2 standards and features in conjunction with Java.
-- Performance results and performance-enhancing techniques for such systems.
-- Flexible frameworks and techniques for enabling high-performance
   communication in Java.

Timescales for Submission

The deadline for submitted papers is 15th December 1998. Publication is
currently scheduled for the third quarter of 1999.
  
Activity 		Deadline
Call For Papers 	1st September 1998
Paper Submission	15th December 1998
Papers Returned		15th March 1999
Papers Approved		1st April 1999
Publication		Q3 1999

Further details about this special issue can be found at:

http://hpc-journals.ecs.soton.ac.uk/CPE/Special/MPI-Java/

----------------------------------------------------------------------------


-------------------------------------
Dr Mark Baker
CSM, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 09/11/98 - Time: 14:38:08
URL http://www.dcs.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Tue Sep 15 22:24:43 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id WAA13441; Tue, 15 Sep 1998 22:24:43 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id WAA25353; Tue, 15 Sep 1998 22:23:26 -0400 (EDT)
Received: from octane11.nas.nasa.gov (octane11.nas.nasa.gov [129.99.34.116]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id WAA25343; Tue, 15 Sep 1998 22:23:08 -0400 (EDT)
Received: (from saini@localhost)
	by octane11.nas.nasa.gov (8.8.7/NAS8.8.7) id TAA24915;
	Tue, 15 Sep 1998 19:17:45 -0700 (PDT)
From: "Subhash Saini" <saini@nas.nasa.gov>
Message-Id: <9809151917.ZM24910@octane11.nas.nasa.gov>
Date: Tue, 15 Sep 1998 19:17:44 -0700
In-Reply-To: Mark Baker  <mab@sis.port.ac.uk>
        "CPE - Call for papers - Message Passing Interface-based Parallel Programming with Java" (Sep 11,  2:38pm)
References: <Chameleon.905526764.mab@mordillo>
X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail)
To: "'mpi-nt-users@erc.msstate.edu'"  <mpi-nt-users@ERC.MsState.EDU>,
        "Aad J. van der Steen"  <steen@fys.ruu.nl>,
        "Dr. Kenneth A. Williams"  <williams@ncat.edu>,
        "Stephen L. Scott"  <sscott@msr.epm.ornl.gov>,
        Advanced Java  <advanced-java@xcf.berkeley.edu>,
        Alexander Reinefeld  <ar@uni-paderborn.de>,
        Andy Grant  <andyg@manchester.sgi.com>,
        Anne Trefethen  <anne@nag.co.uk>, Bryan Capenter  <dbc@npac.syr.edu>,
        Charles Grassl  <cmg@cray.com>, DIS Everyone  <dis-all@sis.port.ac.uk>,
        Dave Beckett  <D.J.Beckett@ukc.ac.uk>,
        David Snelling  <snelling@fecit.co.uk>,
        Guy Robinson  <robinson@arsc.edu>, Hon W Yau  <hwyau@epcc.ed.ac.uk>,
        Jack Dongarra  <dongarra@CS.UTK.EDU>,
        Joao Gabriel Silva  <jgabriel@dei.uc.pt>,
        Ken Hawick  <khawick@cs.adelaide.edu.au>,
        Mark Baker  <mab@sis.port.ac.uk>, Mike Berry  <berry@CS.UTK.EDU>,
        Paul Messina  <messina@cacr.caltech.edu>,
        Roland Wismueller  <wismuell@informatik.tu-muenchen.de>,
        Steve Larkin - AVS  <stevel@avsuk.com>,
        Terri Canzian  <kea@cacr.caltech.edu>,
        Tony Hey  <ajgh@ecs.soton.ac.uk>,
        Vaidy Sunderam  <vss@mathcs.emory.edu>,
        Vladimir Getov  <vsg@ecs.soto.ac.uk>,
        William Gropp  <gropp@mcs.anl.gov>, fagg@CS.UTK.EDU,
        gentzsch@genias.de, hpvm@cs.uiuc.edu, java-for-cse@npac.syr.edu,
        javagrandeforum@npac.syr.edu, jtap-club-clusters@mailbase.ac.uk,
        mpijava-users@npac.syr.edu, owner-grounds@mail.software.ibm.com,
        parkbench-comm@CS.UTK.EDU, partners@globus.org, topic@mcc.ac.uk
Subject: AD _ Workshop 
Cc: mab@sis.port.ac.uk, saini@octane11.nas.nasa.gov
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii

You are invited to attend the workshop (see below).

Best regards, subhash

==============================================================================

                           ***** REGISTER NOW *****

                         *** NO REGISTRATION FEE ***

              **** Last Day to Register is Sept. 23, 1998 ****



     "First NASA Workshop on Performance-Engineered Information Systems"
      -----------------------------------------------------------------

			     Sponsored by

               Numerical Aerospace Simulation Systems Division
                         NASA Ames Research Center
                      Moffett Field, California, USA

			  September 28-29, 1998

                   Workshop Chairman: Dr. Subhash Saini

                http://science.nas.nasa.gov/Services/Training

Invited  Speakers:
------------------
Adve, Vikram (Rice University)
Aida, K. (Tokyo Institute of Technology, JAPAN)
Bagrodia, Rajive (University of California, Los Angeles)
Becker, Monique (Institut National des Telecommunications, FRANCE)
Berman, Francine (University of California, San Diego)
Browne, James C. (University of Texas)
Darema, Frederica (U.S. National Science Foundation-CISE)
Dongarra, Jack (Oak Ridge National Laboratory)
Feiereisen, Bill (NASA Ames Research Center)
Fox, Geoffrey (Syracuse University)
Gannon, Dennis (Indiana University)
Gerasoulis, Apostolos (Rutgers University)
Gunther, Neil J. (Performance Dynamics Company)
Hey, Tony (University of Southampton UK)
Hollingsworth, Jeff (University of Maryland)
Jain, Raj (Ohio State University)
Keahy, Kate (Los Alamos National Laboratory)
Mackenzie, Lewis M. (University of Glasgow, Scotland UK)
McCalpin, John (Silicon Graphics)
Menasce, Daniel A. (George Mason University)
Nudd, Graham (University of Warwick UK)
Reed, Dan (University of Illinois)
Saltz, Joel (University of Maryland)
Simmons, Margaret (San Diego Supercomputer Center)
Vernon, Mary (University of Wisconsin)

Topics include:
--------------

  - Performance-by-design techniques for high-performance distributed
    information systems
  - Large transients in packet-switched and circuit-switched networks
  - Workload characterization techniques
  - Integrated performance measurement, analysis, and prediction
  - Performance measurement and modeling in IPG
  - Performance models for threads and distributed objects
  - Application emulators and simulation models
  - Performance prediction engineering of Information Systems including IPG
  - Performance characterization of scientific and engineering applications
    of interest to NASA, DoE, DoD, and industry
  - Scheduling tools for performance prediction of parallel programs
  - Multi-resolution simulations for large-scale I/O-intensive applications
  - Capacity planning for Web performance: metrics, models, and methods


Contact: Marcia Redmond, redmond@nas.nasa.gov, (650) 604-4373

Registration: Advance registration is required.

Registration Fee: NONE.

Registration Deadline: September 23, 1998

There will be no onsite registration.

Contact: Send registration information and direct questions to
	 Marcia Redmond, redmond@nas.nasa.gov, (650) 604-4373.

DESCRIPTION:

The basic goal of performance modeling is to predict and understand the
performance of a computer program or set of programs on a computer
system. The applications of performance modeling are numerous,
including evaluation of algorithms, optimization of code
implementations, parallel library development, comparison of system
architectures, parallel system design, and procurement of new systems.
The most reliable technique for determining the performance of a
program on a computer system is to run and time the program multiple
times, but this can be very expensive and it rarely leads to any deep
understanding of the performance issues. Nor does it show how
performance will change under different circumstances, for example when
scaling the problem or system parameters or when porting to a different
machine. The complexity of new parallel supercomputer systems presents
a daunting challenge to the application scientists who must understand
a system's behavior to achieve a reasonable fraction of its peak
performance. The NAS Parallel Benchmarks (NPB) have exposed a large
difference between peak and achievable performance. Such dismal
performance is not surprising, considering the complexity of these
parallel distributed-memory systems. At present, performance modeling,
measurement, and analysis tools are inadequate for distributed/networked
systems such as the Information Power Grid (IPG). The purpose of
performance-based engineering is to develop new methods and tools that
will enable these information systems to be developed faster, better,
and cheaper.
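The run-and-time-it-repeatedly methodology mentioned above can be sketched in a few lines. A minimal Python example, with a toy kernel standing in for a real application (the function names are invented for illustration; reporting both the minimum and the mean exposes run-to-run variability that a single timing hides):

```python
# Repeat-and-time methodology: run the same kernel several times and
# report minimum and mean, since a single timing is rarely reliable.
import time

def kernel(n=100_000):
    # Toy stand-in for an application kernel.
    return sum(i * i for i in range(n))

def time_kernel(repeats=5):
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        kernel()
        samples.append(time.perf_counter() - t0)
    return min(samples), sum(samples) / len(samples)

if __name__ == "__main__":
    best, mean = time_kernel()
    print(f"best {best:.6f}s  mean {mean:.6f}s")
```

As the workshop description notes, this tells you nothing about how the timing scales with problem or system parameters; that is where modeling, rather than repeated measurement, comes in.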

================================================================================
                             Registration

     "First NASA Workshop on Performance-Engineered Information Systems"

Send the following information to redmond@nas.nasa.gov

Name _____________________________________________

Organization _____________________________________

Street Address ___________________________________

City ____________________ State __________________

Zip/Mail Code ___________ Country ________________

Phone ___________________ Fax ____________________

Email address ____________________________________

U.S. Citizen __________ Permanent Resident with Green Card ________

*******************************************************************************

Foreign National ________

(Non-U.S. citizens must complete the following information:)

Passport number ______________________

Name as it appears on passport _______________________________________

Date issued _____________

Date expires _________________

Country of citizenship____________________________



From owner-parkbench-lowlevel@CS.UTK.EDU Wed Oct 21 02:28:56 1998
Return-Path: <owner-parkbench-lowlevel@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id CAA13270; Wed, 21 Oct 1998 02:28:55 -0400
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id CAA23157; Wed, 21 Oct 1998 02:24:45 -0400 (EDT)
Received: from mail2.one.net (mail2.one.net [206.112.192.100]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id CAA23150; Wed, 21 Oct 1998 02:24:43 -0400 (EDT)
Received: from port-29-44.access.one.net ([206.112.210.106] HELO aol.com ident: IDENT-NOT-QUERIED [port 22788]) by mail2.one.net with SMTP id <17237-27384>; Wed, 21 Oct 1998 02:10:48 -0400
From: Online@nj.com
To: Online@nj.com
Subject:  Advertise with Bulk Email!
Message-Id: <19981021061048Z17237-27384+1398@mail2.one.net>
Date: 	Wed, 21 Oct 1998 02:10:42 -0400

___________________________________________________________

Announcing a Bulk-Friendly ISP!

We Bulk Email!
 Are you tired of getting kicked offline for bulk emailing?
Well, now you can bulk email without getting kicked offline. Call Online Direct, a bulk-friendly ISP.
513 874 7437

For only $125 a month plus a $50 setup fee we will send out 35,000 emails a week for you,
plus provide you with a bullet-proof POP3 email account so you can receive all of your mail.
Ask about our special offers of up to 100,000 emails per day!
 Any type of bulk advertising! We Do It Right!
Advertise Smart - Bulk Email Today!
Call Online Direct at 513 874 7437
We can also provide bullet-proof POP3 email accounts!
CALL TODAY!

513 874 7437

If you wish to be removed from this list, please type "remove" in the reply box.


From owner-parkbench-comm@CS.UTK.EDU Sun Oct 25 12:41:55 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id MAA29754; Sun, 25 Oct 1998 12:41:54 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA29327; Sun, 25 Oct 1998 12:36:00 -0500 (EST)
Received: from pan.ch.intel.com (pan.ch.intel.com [143.182.246.24]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id MAA29319; Sun, 25 Oct 1998 12:35:58 -0500 (EST)
Received: from sedona.intel.com (sedona.ch.intel.com [143.182.218.21])
	by pan.ch.intel.com (8.8.6/8.8.5) with ESMTP id RAA16591;
	Sun, 25 Oct 1998 17:35:56 GMT
Received: from ccm.intel.com ([143.182.69.127]) by sedona.intel.com (8.9.1a/8.9.1a-chandler01) with ESMTP id KAA27181; Sun, 25 Oct 1998 10:35:54 -0700 (MST)
Message-ID: <36336126.B26DEE2C@ccm.intel.com>
Date: Sun, 25 Oct 1998 10:34:30 -0700
From: Anjaneya Chagam <Anjaneya.Chagam@mail.intel.com>
X-Mailer: Mozilla 4.05 [en] (Win95; I)
MIME-Version: 1.0
To: parkbench-comm@CS.UTK.EDU, Anjaneya.Chagam@intel.com
Subject: Question on parkbench source code in c
X-Priority: 1 (Highest)
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi:
    I am looking for the Parkbench benchmarking programs' source code in C
to do a benchmarking comparison of Chime and PVM on the NT platform at
Arizona State University. Could you please let me know whether the Parkbench
programs have been ported to C, and if so, where I can find them?

Thanks a million.

Name: Anjaneya R. Chagam
Email: Anjaneya.Chagam@intel.com


From owner-parkbench-comm@CS.UTK.EDU Mon Oct 26 06:36:26 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id GAA11147; Mon, 26 Oct 1998 06:36:25 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA07390; Mon, 26 Oct 1998 06:27:13 -0500 (EST)
Received: from osiris.sis.port.ac.uk (root@osiris.sis.port.ac.uk [148.197.100.10]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id GAA07383; Mon, 26 Oct 1998 06:27:10 -0500 (EST)
Received: from mordillo (pc297.sis.port.ac.uk) by osiris.sis.port.ac.uk (4.1/SMI-4.1)
	id AA15115; Mon, 26 Oct 98 11:29:54 GMT
Date: Mon, 26 Oct 98 11:14:06 GMT
From: Mark Baker  <mab@sis.port.ac.uk>
Subject: Re: Question on parkbench source code in c 
To: Anjaneya.Chagam@intel.com, parkbench-comm@CS.UTK.EDU
X-Mailer: Chameleon ATX 6.0.1, Standards Based IntraNet Solutions, NetManage Inc.
X-Priority: 3 (Normal)
References: <36336126.B26DEE2C@ccm.intel.com> 
Message-Id: <Chameleon.909400696.mab@mordillo>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

Anjaneya,

The official Parkbench codes are only available in Fortran 77.

I vaguely remember hearing, some time back, about a graduate student's
attempt to "port" some of the low-level codes to C.

Charles Grassl (Cray) and I did a little work on some simple C
PingPong codes.

You can check these out at...

http://www.sis.port.ac.uk/~mab/TOPIC/
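For readers without access to those codes, the PingPong pattern itself is easy to emulate: two processes bounce a message back and forth, and half the round-trip time estimates one-way latency. A rough Python sketch, using a multiprocessing pipe in place of the MPI point-to-point calls of the real benchmark (all names are illustrative, and the numbers reflect the pipe, not a real interconnect):

```python
# PingPong sketch: process A sends a message to B, B echoes it back,
# and half the round-trip time estimates one-way latency. A pipe
# stands in for the MPI send/receive of the real low-level benchmark.
import time
from multiprocessing import Process, Pipe

def echo(conn, reps):
    for _ in range(reps):
        conn.send_bytes(conn.recv_bytes())   # bounce the message back
    conn.close()

def pingpong(nbytes=1024, reps=100):
    parent, child = Pipe()
    p = Process(target=echo, args=(child, reps))
    p.start()
    msg = b"x" * nbytes
    t0 = time.perf_counter()
    for _ in range(reps):
        parent.send_bytes(msg)
        reply = parent.recv_bytes()
    elapsed = time.perf_counter() - t0
    p.join()
    assert reply == msg                      # message survived intact
    return elapsed / (2 * reps)              # estimated one-way latency

if __name__ == "__main__":
    print(f"~{pingpong() * 1e6:.1f} us one-way (pipe, not a network)")
```

Repeating the measurement over a range of message sizes, as the Parkbench low-level codes do, separates the fixed latency from the per-byte bandwidth cost.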

Regards

Mark

--- On Sun, 25 Oct 1998 10:34:30 -0700  Anjaneya Chagam <Anjaneya.Chagam@mail.intel.com> 
wrote:
> Hi:
>     I am looking for the Parkbench benchmarking programs' source code in C
> to do a benchmarking comparison of Chime and PVM on the NT platform at
> Arizona State University. Could you please let me know whether the Parkbench
> programs have been ported to C, and if so, where I can find them?
> 
> Thanks a million.
> 
> Name: Anjaneya R. Chagam
> Email: Anjaneya.Chagam@intel.com
> 
> 

---------------End of Original Message-----------------

-------------------------------------
DCS, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: mab@sis.port.ac.uk
Date: 10/26/98 - Time: 11:14:07
URL:  http://www.dcs.port.ac.uk/~mab/
-------------------------------------


From owner-parkbench-comm@CS.UTK.EDU Mon Nov 16 10:06:23 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id KAA11375; Mon, 16 Nov 1998 10:06:23 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id JAA08949; Mon, 16 Nov 1998 09:01:33 -0500 (EST)
Received: from del2.vsnl.net.in (del2.vsnl.net.in [202.54.15.30]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id JAA08936; Mon, 16 Nov 1998 09:01:28 -0500 (EST)
Received: from sameer.myasa.com ([202.54.106.39])
	by del2.vsnl.net.in (8.9.1a/8.9.1) with SMTP id TAA13392
	for <parkbench-comm@cs.utk.edu>; Mon, 16 Nov 1998 19:30:37 -0500 (GMT)
From: "Kashmir Kessar Mart" <kkmartsgr@hotmail.com>
To: <parkbench-comm@CS.UTK.EDU>
Subject: Information
Date: Mon, 16 Nov 1998 19:30:48 +0530
Message-ID: <01be1169$838dd020$276a36ca@sameer.myasa.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_006D_01BE1197.9D460C20"
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 4.71.1712.3
X-MimeOLE: Produced By Microsoft MimeOLE V4.71.1712.3

This is a multi-part message in MIME format.

------=_NextPart_000_006D_01BE1197.9D460C20
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Dear Sir,
             I have seen your Web Site but could not understand what
your company is.
Please let me know if you can provide me with information regarding
Walnut Kernels.

Regards
Azad.

------=_NextPart_000_006D_01BE1197.9D460C20--


From owner-parkbench-comm@CS.UTK.EDU Fri Dec  4 15:44:53 1998
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	id PAA21941; Fri, 4 Dec 1998 15:44:53 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA21231; Fri, 4 Dec 1998 15:18:56 -0500 (EST)
Received: from gimli.genias.de (qmailr@GIMLI.genias.de [192.129.37.12]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id PAA21223; Fri, 4 Dec 1998 15:18:51 -0500 (EST)
From: <info@genias.de>
Received: (qmail 14706 invoked by uid 233); 4 Dec 1998 20:14:46 -0000
Date: 4 Dec 1998 20:14:46 -0000
Message-ID: <19981204201446.14705.qmail@gimli.genias.de>
Reply-to: majordomo@genias.de
To: parkbench-comm@CS.UTK.EDU
Subject: Newsletter on Distributed and Parallel Computing

Dear Colleague,

As announced a few weeks ago, this is the very first issue of
our bi-monthly electronic Newsletter on Distributed and Parallel Computing,
DPC NEWS.

!! If you want to receive DPC NEWS regularly, please just return this  !!
!! e-mail to majordomo@genias.de with                                  !!
!!                                                                     !!
!!  subscribe newsletter    or    subscribe newsletter <email address> !!
!!  end                           end                                  !!
!!                                                                     !!
!! in the first two lines of the email-body (text area).               !!

This newsletter is a FREE service to the Distributed and Parallel
Computing (DPC) community. It regularly reports on new developments and
results in DPC, e.g. conferences, important web links, new books and
other relevant news in distributed and parallel computing.

We also keep all the information in the special newsletter section of
our webpage ( http://www.genias.de/dpcnews/ ) to provide a wealth of
information for the DPC community.

If you have any information which might fit these DPC subjects, please
send it to me, together with the corresponding web link, for publication
in DPC NEWS. We aim to reach a very broad community with this DPC
Newsletter.

With  Season's Greetings from GENIAS
Wolfgang Gentzsch, CEO and President

=====================================================================
      DPC NEWSletter on Distributed and Parallel Computing
                GENIAS Software, December 1998
                ------------------------------
                http://www.genias.de/dpcnews/

GENIAS NEWS:

EASTMAN CHEMICAL USES CODINE FOR MOLECULAR MODELING
Eastman Chemical uses commercial quantum chemistry programs, like Gaussian,
Jaguar, and Cerius2, to model chemical products, intermediates, catalysts,
etc. The simulation jobs take between 1 hour and 6 days to complete. 
Queuing software is an important part of keeping the processors working at 
full utilization, without being overloaded. Since October, with the new
CODINE release 4.2, Eastman has maintained over 95% CPU utilization on 
the available systems:   http://www.genias.de/dpcnews/

BMW USES CODINE AND GRD FOR CRASH-SIMULATION
At the BMW crash department, very complex compute-intensive PAM-CRASH
simulations are performed on a cluster of 11 compute servers and more than
100 workstations, altogether over 370 CPUs. CODINE and GRD have optimized
the utilization of this big cluster by distributing the load equally,
dynamically and in an application oriented way, transparent to the 45
users:   http://www.genias.de/dpcnews/

GRD MANAGES ACADEMIC COMPUTER CENTER
http://www.genias.de/dpcnews/

QUEUING UP FOR GRD AT ARL ARMY RESEARCH LAB
http://www.genias.de/dpcnews/

GENIAS ADDS DYNAMIC RESOURCE & POLICY MGMT TO LINUX
http://www.genias.de/dpcnews/

GRD STOPS FLOODING OF THE SYSTEM WITH MANY JOBS
http://www.genias.de/dpcnews/

PaTENT MPI ACCELERATES MARC K7.3 FE ANALYSIS CODE
http://www.genias.de/dpcnews/  +  http://www.marc.com/Techniques/


CONFERENCES on DPC, Dec'98 - March'99:

- Workshop on Performance Evaluation with Realistic Applications (sponsored
  by SPEC), San Jose, CA USA, Jan 25 1999:
  http://www.spec.org/news/specworkshop.html
- ACPC99, 4th Int. Conf. on Parallel Computation, ACPC Salzburg, Austria,
  February 16-18 1999: http://www.coma.sbg.ac.at/acpc99/index.html
- MPIDC'99, Message Passing Interface Developer's and User's Conference,
  Atlanta, Georgia USA, March 10-12 1999: http://www.mpidc.org
- 9th SIAM Conf. on Parallel Processing for Scientific Computing, San
  Antonio, Texas USA, March 22-24 1999: http://www.siam.org/meetings/pp99/
- 25th Speedup Workshop, Lugano, Switzerland, March 25-26 1999:
  http://www.speedup.ch/Workshops/Workshop25Ann.html
- CC99, 2nd German Workshop on Cluster Computing, Karlsruhe, Germany, March
  25-26 1999: http://www.tu-chemnitz.de/informati/RA/CC99
More on the GENIAS webpage: http://www.genias.de/dpcnews/


NEW DPC BOOKS:

- Parallel Computing Using Optimal Interconnections, Kequin Li, Yi Pan, Si
  Qing Zheng. Kluwer Publ 1998: http://www.mcs.newpaltz.edu/~li/pcuoi.html
- High-Performance Computing, Contributions to Society,T.Tabor(Ed.),1998: 
  http://www.tgc.com
- Special Issue on Metacomputing, W. Nagel, R. Williams (Eds.), Int. J.
  Parallel Computing, Vol. 24, No. 12-13, Elsevier Science 1998:
  http://www.elsevier.nl/locate/parco
More books on DPC on GENIAS Webpage:   http://www.genias.de/dpcnews/


DPC WEBPAGES:

- PRIMEUR: HPC electronic news magazine: http://www.hoise.com
- PRIMEUR List of ESPRIT Projects:
  http://www.hoise.com/CECupdate/contentscecdec98.html
- HPCwire, Email Newsletter: http://www.tgc.com/hpcwire.html/
- EuroTools, European HPCN Tools Working Group
  http://www.irisa.fr/eurotools
- PTOOLS, Parallel Tools Consortium: http://www.ptools.org
- TOP500: 500 fastest supercomputers: http://www.top500.org
- PROSOMA: Technology fair describing hundreds of European CEC funded
  projects: http://www.prosoma.lu/
- Links to Linux Cluster Projects: http://www.linux-magazin.de/cluster/
More DPC WebLinks:   http://www.genias.de/dpcnews/


NEWS ON HPC BENCHMARKS:

- STREAM, Memory Performance Benchmark from John McCalpin:
  http://www.cs.virginia.edu/stream/


GENIAS JOBS:

- For our CODINE/GRD Devel.Team: Software engineer with experience in GUI 
  development under OSF/Motif, Java and Windows, distributed computing,
  resource mgnt systems under Unix and NT:  http://www.genias.de/jobs/


CALL FOR PAPERS in upcoming Journals:

- Message Passing Interface-based Parallel Programming with Java: deadline
  Dec. 15 1998:   http://hpc-journals.ecs.soton.ac.uk/CPE/Special/MPI-Java

End of DPC Newsletter
==========================================================================



From owner-parkbench-comm@CS.UTK.EDU Sun Jan 24 12:15:02 1999
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU (CS.UTK.EDU [128.169.94.1]) 
           by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	   id MAA26189; Sun, 24 Jan 1999 12:15:01 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA08151; Sun, 24 Jan 1999 12:08:47 -0500 (EST)
Received: from serv1.is4.u-net.net ([195.102.240.252]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id MAA08144; Sun, 24 Jan 1999 12:08:44 -0500 (EST)
Received: from mordillo [195.102.198.114] 
	by serv1.is4.u-net.net with smtp (Exim 1.73 #1)
	id 104T1E-0003IJ-00; Sun, 24 Jan 1999 17:08:17 +0000
Date: Sun, 24 Jan 1999 17:05:53 +0000
From: Mark Baker  <Mark.Baker@port.ac.uk>
Subject: New PEMCS paper.
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Z-Mail Pro 6.2, NetManage Inc. [ZM62_16H]
X-Priority: 3 (Normal)
Message-ID: <Chameleon.917197705.mab@mordillo>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; CHARSET=ISO-8859-1

A new PEMCS paper has just been accepted and published...

"Comparing the Scalability of the Cray T3E-600 and the Cray Origin
2000 Using SHMEM Routines", by Glenn R. Luecke, Bruno Raffin and James
J. Coyle, Iowa State University, Ames, Iowa USA 

Check out...

http://hpc-journals.ecs.soton.ac.uk/PEMCS/Papers/

Regards

Mark      

-------------------------------------
DCS, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: Mark.Baker@port.ac.uk
Date: 01/24/1999 - Time: 17:05:53
URL:  http://www.dcs.port.ac.uk/~mab/
-------------------------------------

From owner-parkbench-comm@CS.UTK.EDU Tue Feb  2 08:17:19 1999
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU (CS.UTK.EDU [128.169.94.1]) 
           by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	   id IAA08459; Tue, 2 Feb 1999 08:17:19 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id HAA01393; Tue, 2 Feb 1999 07:42:18 -0500 (EST)
Received: from serv1.is1.u-net.net (serv1.is1.u-net.net [195.102.240.129]) 
        by CS.UTK.EDU with ESMTP (cf v2.9s-UTK)
	id HAA01386; Tue, 2 Feb 1999 07:42:16 -0500 (EST)
Received: from [148.197.205.63] (helo=mordillo)
	by serv1.is1.u-net.net with smtp (Exim 2.00 #2)
	for parkbench-comm@cs.utk.edu
	id 107f7u-0005uS-00; Tue, 2 Feb 1999 12:40:22 +0000
Date: Tue,  2 Feb 1999 12:40:29 +0000
From: Mark Baker  <Mark.Baker@port.ac.uk>
Subject: New PEMCS Paper - resend...
To: parkbench-comm@CS.UTK.EDU
X-Mailer: Z-Mail Pro 6.2, NetManage Inc. [ZM62_16H]
X-Priority: 3 (Normal)
Message-ID: <Chameleon.917959331.mab@mordillo>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; CHARSET=ISO-8859-1

Apologies for the resend - I think this email got lost when I sent it
a couple of weeks back.

---------------------------------------------------------------------------


A new PEMCS paper has just been accepted and published...

"Comparing the Scalability of the Cray T3E-600 and the Cray Origin
2000 Using SHMEM Routines", by Glenn R. Luecke, Bruno Raffin and James
J. Coyle, Iowa State University, Ames, Iowa USA 

Check out...

http://hpc-journals.ecs.soton.ac.uk/PEMCS/Papers/

Regards

Mark      

-------------------------------------
DCS, University of Portsmouth, Hants, UK
Tel: +44 1705 844285	Fax: +44 1705 844006
E-mail: Mark.Baker@port.ac.uk
Date: 02/02/1999 - Time: 12:40:29
URL:  http://www.dcs.port.ac.uk/~mab/
-------------------------------------

From owner-parkbench-comm@CS.UTK.EDU Tue Mar  2 10:35:47 1999
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU (CS.UTK.EDU [128.169.94.1]) 
           by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	   id KAA18531; Tue, 2 Mar 1999 10:35:46 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA01804; Tue, 2 Mar 1999 10:18:56 -0500 (EST)
Received: from gimli.genias.de (qmailr@GIMLI.genias.de [192.129.37.12]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id KAA01781; Tue, 2 Mar 1999 10:18:49 -0500 (EST)
Received: (qmail 8905 invoked from network); 2 Mar 1999 15:19:10 -0000
Received: from fangorn.genias.de (192.129.37.74)
  by gimli.genias.de with SMTP; 2 Mar 1999 15:19:10 -0000
Received: (from daemon@localhost)
	by FANGORN.genias.de (8.8.8/8.8.8) id QAA13715;
	Tue, 2 Mar 1999 16:19:05 +0100
Date: Tue, 2 Mar 1999 16:19:05 +0100
Message-Id: <199903021519.QAA13715@FANGORN.genias.de>
To: parkbench-comm@CS.UTK.EDU
From: Majordomo@genias.de
Subject: Welcome to newsletter
Reply-To: Majordomo@genias.de

--

Welcome to the newsletter mailing list!

Please save this message for future reference.  Thank you.

If you ever want to remove yourself from this mailing list,
send the following command in email to
<newsletter-request@FANGORN.genias.de>:

    unsubscribe

Or you can send mail to <Majordomo@FANGORN.genias.de> with the following
command in the body of your email message:

    unsubscribe newsletter

or from another account, besides parkbench-comm@CS.UTK.EDU:

    unsubscribe newsletter parkbench-comm@CS.UTK.EDU

If you ever need to get in contact with the owner of the list,
(if you have trouble unsubscribing, or have questions about the
list itself) send email to <owner-newsletter@FANGORN.genias.de> .
This is the general rule for most mailing lists when you need
to contact a human.
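[Archive note: the Majordomo commands above could be scripted as a small
shell sketch. The `sendmail` invocation and a working local MTA are
assumptions, so the actual send line is left commented out.]

```shell
#!/bin/sh
# Compose the Majordomo unsubscribe command as a one-line message body.
# The command goes in the body, not the subject line.
BODY="unsubscribe newsletter parkbench-comm@CS.UTK.EDU"

# Show what would be sent.
printf '%s\n' "$BODY"

# To actually send it (assumes sendmail is installed and configured):
#   printf '%s\n' "$BODY" | sendmail Majordomo@FANGORN.genias.de
```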

 Here's the general information for the list you've subscribed to,
 in case you don't already have it:

The GENIAS Newsletter keeps you informed about new products,
services, and other news in High Performance Computing.
It serves as an addition to our printed newsletter, which is
distributed to our customers. To see the printed version,
just visit our web site

	http://www.genias.de

and follow the link 'newsletter'.



From owner-parkbench-comm@CS.UTK.EDU Wed Mar  3 02:34:35 1999
Return-Path: <owner-parkbench-comm@CS.UTK.EDU>
Received: from CS.UTK.EDU (CS.UTK.EDU [128.169.94.1]) 
           by netlib2.cs.utk.edu with ESMTP (cf v2.9t-netlib)
	   id CAA01271; Wed, 3 Mar 1999 02:34:35 -0500
Received: from localhost (root@localhost) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id CAA00679; Wed, 3 Mar 1999 02:32:28 -0500 (EST)
Received: from gimli.genias.de (qmailr@GIMLI.genias.de [192.129.37.12]) 
        by CS.UTK.EDU with SMTP (cf v2.9s-UTK)
	id CAA00668; Wed, 3 Mar 1999 02:32:25 -0500 (EST)
Received: (qmail 10306 invoked from network); 3 Mar 1999 07:32:58 -0000
Received: from gandalf.genias.de (192.129.37.10)
  by gimli.genias.de with SMTP; 3 Mar 1999 07:32:58 -0000
Received: by GANDALF.genias.de (Smail3.1.28.1 #30)
	id m10I69J-000B10C; Wed, 3 Mar 99 08:32 MET
Message-Id: <m10I69J-000B10C@GANDALF.genias.de>
From: gentzsch@genias.de (Wolfgang Gentzsch)
Subject: sorry!
To: parkbench-comm@CS.UTK.EDU
Date: Wed, 3 Mar 99 8:32:57 MET
Cc: gent@genias.de (Wolfgang Gentzsch)
Reply-To: gentzsch@genias.de
X-Mailer: ELM [version 2.3 PL11]


Dear colleagues,

I just discovered that the parkbench-comm@CS.UTK.EDU address has been
collected into the mailing list for our electronic DPC Newsletter.
I very much apologize for this mistake. 

Thank you for your understanding!

Kind regards
Wolfgang

-- 


        -- subscribe now to  http://www.genias.de/dpcnews/ --
        - - - - - - - - - - - - - - - - - - - - - - - - - - - 
        Wolfgang Gentzsch, CEO           Tel: +49 9401 9200-0 
        GENIAS Software GmbH & Inc      Fax: +49 9401 9200-92
        Erzgebirgstr. 2              http://www.geniasoft.com
        D-93073 Neutraubling, Germany  gentzsch@geniasoft.com
        - - - - - - - - - - - - - - - - - - - - - - - - - - -
        GENIAS Software Inc.                Tel: 410 455 5580
        UMBC Technology Center              Fax: 410 455 5567
        1450 S. Rolling Road         http://www.geniasoft.com
        Baltimore, MD 21227, USA       gentzsch@geniasoft.com
        = = = = = = = = = = = = = = = = = = = = = = = = = = =


