Subject: [OMPI devel] [patch] Some collective communications terminate abnormally when MPI_IN_PLACE is specified.
From: Matsumoto, Yuki (yuki.matsumoto_at_[hidden])
Date: 2012-10-31 04:55:43


Dear all,

Some collective communications may terminate abnormally when MPI_IN_PLACE is specified.
(MPI_Allgather/MPI_Allgatherv/MPI_Gather/MPI_Scatter)
The affected implementations dereference sdtype (or rdtype, in the case of MPI_Scatter) unconditionally, overlooking the MPI standard's rule that this argument is ignored when MPI_IN_PLACE is used.
As a result, the call aborts when NULL is passed as the data type on the side for which MPI_IN_PLACE may be specified.
e.g.) MPI_Allgather(MPI_IN_PLACE, scount, NULL, rbuf, rcount, recvdtype, ...);
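
For concreteness, here is a minimal reproducer of that call pattern (our own example, not taken from the attached patch). One int is contributed per rank; whether the call actually aborts depends on which tuned algorithm the library selects at run time.

/* Minimal reproducer sketch: MPI_Allgather with MPI_IN_PLACE and a
 * null send datatype.  Per the MPI standard, the send count and type
 * must be ignored in this case. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *rbuf = malloc(size * sizeof(int));
    rbuf[rank] = rank;   /* in place: each rank's contribution sits in its own slot */

    /* sendcount/sendtype must be ignored when sendbuf == MPI_IN_PLACE,
     * so MPI_DATATYPE_NULL (or NULL, as in the report) is legal here,
     * but the affected tuned algorithms dereference it and abort. */
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  rbuf, 1, MPI_INT, MPI_COMM_WORLD);

    if (0 == rank) {
        printf("rbuf[%d] = %d\n", size - 1, rbuf[size - 1]);
    }

    free(rbuf);
    MPI_Finalize();
    return 0;
}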

- Functions affected by this MPI_IN_PLACE problem:
    Affected data type: the send-side data type.
    These terminate abnormally when sdtype = NULL is passed together with sbuf = MPI_IN_PLACE.
    (sdtype must be ignored when sbuf = MPI_IN_PLACE; see the sketch after this list.)
      MPI_Allgather / MPI_Allgatherv (decision functions)
          ompi_coll_tuned_allgather_intra_dec_fixed
          ompi_coll_tuned_allgatherv_intra_dec_fixed
      MPI_Gather
          ompi_coll_tuned_gather_intra_binomial
      MPI_Allgather
          ompi_coll_tuned_allgather_intra_bruck
          ompi_coll_tuned_allgather_intra_recursivedoubling
          ompi_coll_tuned_allgather_intra_ring
          ompi_coll_tuned_allgather_intra_neighborexchange
          ompi_coll_tuned_allgather_intra_two_procs
      MPI_Allgatherv
          ompi_coll_tuned_allgatherv_intra_bruck
          ompi_coll_tuned_allgatherv_intra_ring
          ompi_coll_tuned_allgatherv_intra_neighborexchange
          ompi_coll_tuned_allgatherv_intra_two_procs
    Affected data type: the receive-side data type.
    These terminate abnormally when rdtype = NULL is passed together with rbuf = MPI_IN_PLACE.
    (rdtype must be ignored when rbuf = MPI_IN_PLACE.)
      MPI_Scatter
          ompi_coll_tuned_scatter_intra_binomial
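
For illustration, the kind of guard each affected routine needs before it touches the send-side (or, for MPI_Scatter, the receive-side) arguments is sketched below. The helper name and layout are ours; the attached patch may structure the check differently.

#include <mpi.h>

/* Hypothetical helper (names are ours, not Open MPI's): select the
 * arguments an allgather-style algorithm is actually allowed to look
 * at.  When sbuf == MPI_IN_PLACE, the MPI standard says scount and
 * sdtype are ignored, so the effective send description is the
 * caller's own block inside rbuf, described by the receive count and
 * type. */
void effective_send_args(const void *sbuf, int scount, MPI_Datatype sdtype,
                         void *rbuf, int rcount, MPI_Datatype rdtype,
                         int rank,
                         const void **eff_buf, int *eff_count,
                         MPI_Datatype *eff_type)
{
    if (MPI_IN_PLACE == sbuf) {
        MPI_Aint lb, extent;
        MPI_Type_get_extent(rdtype, &lb, &extent);
        *eff_buf   = (const char *) rbuf + (MPI_Aint) rank * rcount * extent;
        *eff_count = rcount;
        *eff_type  = rdtype;   /* never dereference sdtype in this branch */
    } else {
        *eff_buf   = sbuf;
        *eff_count = scount;
        *eff_type  = sdtype;
    }
}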

We attach a patch.

Best Regards,
Yuki Matsumoto,
MPI development team,
Fujitsu