The following README describes the original form of BeEF provided
by BeEF's author, zaliu@eng.sun.com.   BeEF has been modified somewhat 
during incorporation into UCBTEST.

#	@(#)README	1.1 (BeEF) 1/5/95

	This is the Berkeley Elementary Functions test suite (BeEF),
	Release 1.0preFCS, written by Zhishun Alex Liu under the
	direction of Professor W. Kahan.  Due to its preFCS nature,

	****************************************************************
	re-distribution of BeEF without explicit written permission from
	the author is strictly prohibited.
	****************************************************************

	Please note that our implementation does not attempt to handle
	underflows or exceptional cases.  It is assumed that BeEF
	will be exercised *only*

	... on machines with BINARY floating-point arithmetic,
	... after running (Professor W. Kahan's) PARANOIA and
	... after running *and* passing Cody and Waite's Argonne
	    ELEFUNT test suite with reasonable accuracy.
	
	Feedback and/or problem reports should be sent to the author
	electronically at "Alex.Liu@Eng.Sun.COM".

	Users of BeEF 1.0preFCS are requested to submit test results
	(the output equivalent of the UN*X C-shell script "doit"
	included with BeEF 1.0preFCS) of their math library along with
	the following information:

	1. a description and version number of the math library exercised,
	2. name and version number of the underlying operating system,
	3. brand and version of the C compiler,
	4. the floating-point hardware, and
	5. the file "local.h" generated by "phase1" and the output of
	   "findpi".

	Here are the steps needed to exercise BeEF:

Step 1:	Compile and link "phase1.c".  Execute the resulting binary.
	Answer all questions as accurately as possible.  A header file
	named "local.h" will be generated in the current working directory.

Step 2: To verify that you have specified the correct value of PI
	actually used by sin(), cos() and atan(), compile and link
	"findpi.c".  Execute the resulting binary and examine the output.

Step 3:	Type "make" if you have it; otherwise simply compile the
	sources and link the resulting objects into an executable module.

Step 4: The executable module needs two input values, namely the number
	of subregions per region and the number of random arguments per
	subregion.  The number of subregions per region, if not already
	a multiple of 16, will be rounded up to the next multiple of 16.

	You can specify these two values on the command line as the
	first two command-line arguments.

	Output will occupy roughly 232K to 250K bytes of disk space if
	16 subregions per region is chosen; make sure there is enough
	disk space for it.

Step 5:	To obtain a very brief (i.e. a few pages in length) summary of
	results, compile and link "beefscan.c".  Execute the resulting
	binary with the names of output file(s) as its command-line
	argument(s).

Remark: Most of the CPU cycles will be spent on the log(x) part of our test.
	Here is a brief explanation.  Let log(x) denote the theoretical
	value of log at x, and [log](x) the value of log at x computed by
	the target implementation of log under test.  For efficiency, once
	we have obtained an accurate value of log(x) (stored as two
	separate floating-point numbers, the high and low parts of the
	accurate value, at the testing precision) for a fixed randomly
	chosen x in the fundamental region [1/sqrt(2),sqrt(2)], we use it
	to test a whole series of computed values [log](X) for
	X := x*(2**n), where n ranges from -16 to 16.

...............................................................................

	We have thoroughly tested the double precision (both D_floating
	*and* G_floating) versions of our code on a VAX 8800 running
	Ultrix 2.0 with G&H floating-point hardware/microcode and compared
	our approximators with the H_floating versions of the corresponding
	elementary functions in the VAX/VMS Math library; the errors
	committed by our approximators were comfortably within our
	proved error bounds.

	VAX D_floating format has a 56-bit mantissa and an 8-bit exponent
	and is the default VAX double precision floating-point format
	most people use.

	VAX G_floating format has a 53-bit mantissa and an 11-bit exponent
	and is an alternate double precision floating-point format.  On
	earlier models of VAXen it has been available only in microcoded
	form; it requires special WCS hardware which costs extra money and
	does not come standard with each machine.  However, VAX G_floating
	format is identical IN FORMAT to the IEEE 754 "Double".  Although
	the arithmetic performed on a VAX differs noticeably from IEEE 754
	arithmetic, for our purposes, testing the VAX G_floating version
	of our code should give us a fairly good idea as to how accurately
	our code will perform on IEEE 754 conforming machines.

	Here is a brief summary of our Quality-Assurance test results with
	input data of 64 & 2500; all numbers are expressed in terms of ULPs
	of the corresponding VAX/VMS H_floating results rounded to double.
	The column marked 'bounds proved' summarizes the error bounds we're
	able to prove (reasonably) rigorously.

	Name	NME/PME:D_float	NME/PME:G_float	Bounds	Intervals
		observed	observed	proved	covered

	sin	-0.0482/+0.0482	-0.0480/+0.0486	0.0600	[0,7)
	cos	-0.0479/+0.0476	-0.0475/+0.0479	0.0611	[0,7)
	atan	-0.0462/+0.0460	-0.0461/+0.0463	0.0480	[-2^16,2^16]
	exp     -0.0112/+0.0109	-0.0111/+0.0110	0.0280	[-B,B]
	expm1	-0.0444/+0.0432	-0.0444/+0.0431	0.0520	[-1,1]
	log	-0.0392/+0.0378	-0.0406/+0.0375	0.0520	[2^-16.5,2^16.5]
	log1p	-0.0402/+0.0391	-0.0419/+0.0394	0.0520	[sqrt(1/2)-1,sqrt(2)-1]

	Note that B is (127-64)*log(2) for D_floating and (1023-64)*log(2)
	for G_floating.
