Collective Communication




PVM 3 has always had a very flexible and powerful model of groups of tasks, but until PVM 3.3 there were only two collective communication routines: broadcast to a group of tasks and barrier across a group of tasks. PVM 3.3 adds several new collective communication routines, including global sum, global max, and scatter/gather.

The semantics of the PVM collective communication routines were developed using the MPI draft as a guide, while sticking with the PVM philosophy of keeping the user interface simple and easy to understand. The purpose of adding more collective routines is to keep users from reinventing the wheel and to allow MPP implementations to exploit any built-in native collective routines.

The pvm_reduce() function performs a global arithmetic operation across the group, for example, global sum or global max. The routine is called by all members of the group, and the result of the reduction operation appears on the member specified as root, also called the root task. PVM supplies four predefined reduce functions: PvmMin, PvmMax, PvmSum, and PvmProduct.

These reduction operations are performed element-wise on the input data. For example, if the data array contains two floating-point numbers and the function is PvmMax, then the result contains two numbers: the global maximum of each group member's first number and the global maximum of each member's second number. The Fortran code fragment to do this is:
     A(1) = localmax1
     A(2) = localmax2
     root = 0
     call pvmfreduce( PvmMax, A, 2, REAL8, msgtag, mygroup, root, info )
     if ( me .eq. root ) then
        globalmax1 = A(1)
        globalmax2 = A(2)
     endif
If all the group members need to know the result, the root task can broadcast it to them.
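For comparison, the same pattern can be written through the C interface. The following sketch is illustrative only; the group name "worker", the message tags, and the local values are assumptions, not taken from the paper.

#include "pvm3.h"

void global_max_example(void)
{
    double a[2];                     /* this task's two local maxima */
    int    root   = 0;               /* instance number of the root task */
    int    msgtag = 10;              /* assumed tag values */
    char  *group  = "worker";        /* assumed group name */
    int    me     = pvm_getinst(group, pvm_mytid());

    a[0] = 1.0;  a[1] = 2.0;         /* fill in local values ... */

    /* element-wise global maximum; the result appears only on the root */
    pvm_reduce(PvmMax, a, 2, PVM_DOUBLE, msgtag, group, root);

    if (me == root) {
        /* a[] now holds the global maxima; broadcast them to the group
           (pvm_bcast does not send to the caller itself) */
        pvm_initsend(PvmDataDefault);
        pvm_pkdouble(a, 2, 1);
        pvm_bcast(group, msgtag + 1);
    } else {
        pvm_recv(-1, msgtag + 1);
        pvm_upkdouble(a, 2, 1);
    }
}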

Optionally, users can define their own global operation function to be used inside reduce. The PVM source distribution includes an example using a user-defined function. The first argument of pvm_reduce() is a pointer to a function, so a user can simply substitute a function of his or her own. Unlike MPI, no additional PVM calls are required to register the user-defined function.
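As a sketch of what such a function might look like, the fragment below defines a hypothetical MaxAbs reduction. It assumes the usual PVM prototype for reduce functions, in which the combined result is accumulated into the x array; MaxAbs itself is illustrative and not part of the PVM distribution.

#include <math.h>
#include "pvm3.h"

/* assumed convention: combine *num elements of x and y, result left in x */
void MaxAbs(int *datatype, void *x, void *y, int *num, int *info)
{
    double *dx = (double *) x;
    double *dy = (double *) y;
    int     i;

    if (*datatype != PVM_DOUBLE) {   /* this sketch handles doubles only */
        *info = PvmBadParam;
        return;
    }
    for (i = 0; i < *num; i++)       /* element-wise max of |x| and |y| */
        dx[i] = (fabs(dx[i]) > fabs(dy[i])) ? fabs(dx[i]) : fabs(dy[i]);
    *info = PvmOk;
}

/* called exactly like a predefined function:
     pvm_reduce(MaxAbs, data, n, PVM_DOUBLE, msgtag, group, root);       */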

The pvm_reduce() function is built on top of the point-to-point routines and handles all the basic data types that point-to-point PVM messages support.

As in pvm_reduce(), all members of the group must call pvm_gather() with consistent arguments. In particular, a root must be specified. At the end of the gather, the root task has the data from all group members, including itself, concatenated into a single vector. The data is concatenated in rank order, as is done in MPI.

The use and syntax of pvm_gather() are illustrated in the following example, where the PVM task IDs of the group members are collected in rank order into a vector.

      call pvmfmytid( data )
      call pvmfgather( result, data, 1, INTEGER4, msgtag, group, root, info )

After this call the root task has a result vector containing the task ID of group member 0, the task ID of group member 1, and so on. As in MPI, the result vector is significant only on the root task; all other tasks can pass a dummy argument for result.
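A C version of the same gather might look like the sketch below; the group name and message tag are assumptions made for illustration.

#include "pvm3.h"

/* result must have room for one int per group member;
   its contents are significant only on the root task */
void gather_tids(int *result)
{
    int   mytid  = pvm_mytid();
    int   root   = 0;                /* instance number of the root task */
    int   msgtag = 20;               /* assumed tag */
    char *group  = "worker";         /* assumed group name */

    /* each member contributes one int; the root receives them
       concatenated in group-instance (rank) order */
    pvm_gather(result, &mytid, 1, PVM_INT, msgtag, group, root);
}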

The pvm_scatter() operation is the inverse of the gather operation. The root starts out with a large vector containing equal-size pieces destined for the individual group members. At the end of the scatter, each group member has its piece of the vector. For example, to scatter the previous task ID result back out to the group members (assuming result is still a dummy argument for every task except the root):

      call pvmfscatter( data, result, 1, INTEGER4, msgtag, group, root, info )

After this call every task, including the root, has one integer in data. This integer is the same task ID that the task contributed to pvm_gather().

Typically, gather and scatter are used together: data is gathered from a group of tasks, modified at the root using global information or a computation that requires all the data, and then scattered back out to the tasks.
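A small C sketch of this pattern is shown below: each task contributes one value, the root normalizes the gathered vector by its global sum, and each task receives back its own share. The message tags and the fixed upper bound on the group size are assumptions made for the sake of the example.

#include "pvm3.h"

void normalize_step(double local, double *share, char *group,
                    int nproc, int root)
{
    int    me = pvm_getinst(group, pvm_mytid());
    double all[64];                  /* assumes nproc <= 64 */
    double total = 0.0;
    int    i;

    /* 1. gather one value from every member onto the root */
    pvm_gather(all, &local, 1, PVM_DOUBLE, 30, group, root);

    /* 2. the root modifies the data using global information (the sum) */
    if (me == root) {
        for (i = 0; i < nproc; i++)
            total += all[i];
        for (i = 0; i < nproc; i++)
            all[i] /= total;         /* each member's share of the total */
    }

    /* 3. scatter the modified vector; every task gets its own piece */
    pvm_scatter(share, all, 1, PVM_DOUBLE, 31, group, root);
}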





