Unresolved Issues



The following are some issues that are still being discussed and that haven't yet reached consensus.


    1. An issue related to error number 2: Should the arguments POSITION and OUTCOUNT in MPI_Pack be of type int (as they are now) or of type MPI_Aint? That is, do we want to allow for a buffer with more than 2 Gbytes?
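    For concreteness, here is a sketch of the C binding under discussion (argument names follow the draft; the bodies of the limits are the point, not the exact names). With plain ints, neither the buffer size nor the position can usefully exceed about 2 Gbytes:

    int MPI_Pack( void *inbuf, int incount, MPI_Datatype datatype,
                  void *outbuf, int outcount,  /* buffer size, in bytes */
                  int *position,               /* in/out offset into outbuf */
                  MPI_Comm comm );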
    2. In lines 26 and 27, page 88 of the May 5 MPI draft, shouldn't &lbuf just be lbuf?
    3. There are some inconsistencies in the description of MPI_NULL_FN as used with MPI_KEYVAL_CREATE. In particular, when describing the "copy_function", we say:

    This is inconsistent with a null function (by definition, a null function should do nothing), and it will fail when used as the "delete_function", where we say:

    There are two obvious fixes: 1) Replace MPI_NULL_FN with MPI_NULL_COPY_FN (for correctness) and MPI_NULL_DELETE_FN (for consistency) in the discussion of MPI_KEYVAL_CREATE. 2) Change the wording under "copy_function" to:

    I prefer (1) because it is clearer about what is going on and (naturally) makes the implementation simpler. Implementing (2) is straightforward but unpleasant.
    4. Related to the previous item, we have the following (from Hubertus Franke):

    I extracted this from the MPI-F include files.

     
     
    #define MPI_DUP_FN   mpi_dup_fn
    /* note the draft provides MPI_NULL_FN; however, this conflicts with
     * the different prototypes of the copy and delete functions.
     * In agreement with the public domain library developers we
     * distinguish these functions:
     *   #define MPI_NULL_FN
     */
    #define MPI_NULL_COPY_FN    ...
    #define MPI_NULL_DELETE_FN  ...
    
    I've discussed this with Nathan Doss and I believe this has been incorporated into the public domain version as well.
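    As a minimal sketch of why one function cannot serve both roles, here are the two distinct MPI-1 callback prototypes with null bodies (the bodies are illustrative, not the standard's text):

    /* Copy callback: must at least set *flag, so it cannot literally
     * "do nothing".  Setting flag = 0 means the attribute is not copied. */
    int null_copy_fn( MPI_Comm oldcomm, int keyval, void *extra_state,
                      void *attribute_val_in, void *attribute_val_out,
                      int *flag )
    {
        *flag = 0;
        return MPI_SUCCESS;
    }

    /* Delete callback: a different argument list, and here there really
     * is nothing to do. */
    int null_delete_fn( MPI_Comm comm, int keyval, void *attribute_val,
                        void *extra_state )
    {
        return MPI_SUCCESS;
    }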
    5. There is a long, still unresolved discussion centering around PACK/UNPACK. At the end of it (at least in the last message I see), Marc Snir says:

    Conclusion:

    I don't understand why you came to the conclusion that it is wrong to send the concatenation of several packing units with type MPI_PACKED.

    (End of Marc's message.) Where do we stand here?
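    For reference, here is a sketch of the usage in question: two packing units built independently, concatenated, and sent as one MPI_PACKED message (buffer sizes are arbitrary; the receive side is elided):

    #include <string.h>
    #include <mpi.h>

    void send_concatenated_units( void )
    {
        char unit1[256], unit2[256], msg[512];
        int  pos1 = 0, pos2 = 0;
        int    a = 1;
        double b = 2.0;

        /* each pack sequence starting at position 0 forms one packing unit */
        MPI_Pack( &a, 1, MPI_INT,    unit1, sizeof(unit1), &pos1, MPI_COMM_WORLD );
        MPI_Pack( &b, 1, MPI_DOUBLE, unit2, sizeof(unit2), &pos2, MPI_COMM_WORLD );

        /* concatenate the units and send them in a single message */
        memcpy( msg,        unit1, pos1 );
        memcpy( msg + pos1, unit2, pos2 );
        MPI_Send( msg, pos1 + pos2, MPI_PACKED, 1, 0, MPI_COMM_WORLD );
    }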
    6. MPI_BOTTOM in Fortran.

    There is a bug (or unintended effect) in the definition of MPI_BOTTOM for Fortran. Specifically, MPI_BOTTOM can be used as a buffer address; in C the appropriate value will often be (void *)0. But in Fortran, there are no pointer types; in most implementations, all values are passed by reference. Now consider the following two code fragments:

     
     
    C:

    void *p;
    p = (void *)MPI_BOTTOM;
    MPI_Send( p, ... );


     
     
    Fortran:

    integer p
    p = MPI_BOTTOM
    call MPI_SEND( p, ... )

    The C case is clearly allowed; the Fortran case must not be.

    The only way that I can see to provide MPI_BOTTOM to Fortran is to make it a special constant that acts like a reserved word - assigning it to a variable is erroneous (that is, there is a special location known as MPI_BOTTOM, and the routines can test for that special location). This may seem obvious, but it is easily overlooked; we certainly overlooked it while writing the model implementation. I believe this merits some comment in the standard, since to Fortran, MPI_BOTTOM is not a "constant" as defined by that standard.
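    A minimal sketch of this "special location" approach, assuming a hypothetical internal symbol MPIR_F_MPI_BOTTOM (not part of the standard) whose address is published to Fortran as MPI_BOTTOM:

    /* one reserved location; the Fortran MPI_BOTTOM "constant" is
     * arranged (e.g., via a common block) to have this address */
    char MPIR_F_MPI_BOTTOM;

    /* inside each Fortran wrapper, test for the special location */
    void *fixup_buffer( void *buf )
    {
        if (buf == (void *)&MPIR_F_MPI_BOTTOM)
            return (void *)0;    /* treat as the absolute-address origin */
        return buf;
    }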


    7. The requirement for an Environmental Enquiry that allows one to find out whether two nodes use the same data representation is reasonable, irrespective of the specific use that Rolf has in mind for Parmacs emulation. More generally, users may want enquiry functions to find the type of node on which a process runs. This would help in situations where the user is willing to run on any available machine on a network, but may want to use different code on different machine types.

    It is not reasonable to assume that the MPI forum will develop its own nomenclature for data representations (size of each basic language data type, big- vs. little-endian byte order, floating-point format, character codes, etc.): first, because other standards already try to handle this issue; and second, because a complete characterization of the data representation used by a process (which depends on the underlying machine architecture and on the compiler) is likely to be quite lengthy.

    We agreed that vendors can attach implementation-specific predefined attributes that are associated with MPI_COMM_WORLD. I would suggest the following:

    Associate a predefined attribute key MPI_PROC_TYPE with MPI_COMM_WORLD. The value of this attribute encodes the "type" of the executing process, i.e. the type of data representation used by this process. The meaning of the values of this attribute is implementation dependent, except that if two processes return the same value, then they use the same data representation (i.e., data can be moved untyped between these processes).

    Implementers of MPI for homogeneous SPMD systems need only return the same value (0?) for this attribute on every process. Implementers of MPI on heterogeneous systems may have some work to do.
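    As a sketch of how a user might apply the proposed attribute (MPI_PROC_TYPE is the proposal above, not yet standard; other_rank is assumed to hold the peer's rank), two processes could exchange their attribute values and compare:

    int *my_type, other_type, flag;
    MPI_Status status;

    MPI_Attr_get( MPI_COMM_WORLD, MPI_PROC_TYPE, &my_type, &flag );
    if (flag) {
        /* exchange attribute values with the peer; equal values mean
         * the two processes share a data representation */
        MPI_Sendrecv( my_type, 1, MPI_INT, other_rank, 0,
                      &other_type, 1, MPI_INT, other_rank, 0,
                      MPI_COMM_WORLD, &status );
        if (other_type == *my_type) {
            /* data can be moved untyped between the two processes */
        }
    }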

    If this suggestion sounds reasonable, then the various MPI implementers may accept it now as "common practice", and we can, at a later time, add it to the standard.


