
From: Michael Kluskens (mklus_at_[hidden])
Date: 2006-10-24 14:06:47


This is a reminder about an issue I brought up back at the end of May
2006 <https://svn.open-mpi.org/trac/ompi/ticket/55>; the solution at
the time was to disable --with-mpi-f90-size=large until 1.2.

Testing 1.3a1r12274, I see that no progress has been made on this,
even though I submitted the precise changes needed to expand "large"
for MPI_Gather so that it handles reasonable coding practices. I'm
sure other MPI routines are affected by the same restriction, and the
fix is not difficult.

Now, I could manually repatch 1.3 every week, but it would be better
for everyone if I were not the only Fortran MPI programmer who could
build with --with-mpi-f90-size=large and pass MPI_Gather send and
receive buffers of different dimensions.

Michael

Details below (edited)
--------

Look at the limitations of the following:

    --with-mpi-f90-size=large
(medium + all MPI functions with 2 choice buffers, but only when both
buffers are the same type)

I am not sure what "same type" was intended to mean here, but
requiring both buffers to have the same dimension, which is what is
currently implemented, is not a good idea.

------------------------------------------------------------------------
subroutine MPI_Gather0DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include 'mpif-common.h'
    integer*4, intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    integer*4, intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
end subroutine MPI_Gather0DI4

Think about it: every process sends data back to the root, so if each
one sends a single integer, where do the second, third, fourth, etc.
integers go? The root's receive buffer has to hold one value per
process, so it cannot be a scalar.
------------------------------------------------------------------------
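
To make the coding practice concrete, here is a minimal sketch of my
own (not from the ticket; the program and variable names are
invented): every rank contributes a single integer and the root
collects one value per rank, so recvbuf has to be an array even though
sendbuf is a scalar. Under the current "large" interfaces this call
has no matching specific.
------------------------------------------------------------------------
program gather_example
    use mpi
    implicit none
    integer :: myval, myrank, nprocs, ierr
    integer, allocatable :: allvals(:)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    allocate(allvals(nprocs))      ! root needs one slot per process
    myval = myrank + 1

    ! scalar sendbuf, rank-1 recvbuf: exactly the combination discussed above
    call MPI_Gather(myval, 1, MPI_INTEGER, allvals, 1, MPI_INTEGER, &
                    0, MPI_COMM_WORLD, ierr)

    if (myrank == 0) print *, 'gathered:', allvals

    call MPI_Finalize(ierr)
end program gather_example
------------------------------------------------------------------------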

The generated interfaces for MPI_GATHER do not include the case where
sendbuf is a scalar integer and recvbuf is an integer array. For
example, the following interface does not exist, yet it is legal (or
should be) and should at the very least replace the interface above:
------------------------------------------------------------------------
subroutine MPI_Gather01DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include 'mpif-common.h'
    integer*4, intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    integer*4, dimension(:), intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
end subroutine MPI_Gather01DI4
------------------------------------------------------------------------

Also, consider that there may be no reason to restrict sendbuf and
recvbuf to the same number of dimensions; it is only reasonable to
expect sendbuf to have the same number of dimensions as recvbuf or
fewer (although both being scalars seems unreasonable). This does grow
the problem from order (N+1) to order (N+1)*(N+2)/2, where N = 4
unless otherwise restricted (for N = 4 that is 15 send/receive rank
combinations per type instead of 5), but it should be doable, and
certain functions should have the 0,0 case eliminated.

----------

Below is my solution for the F90 generating scripts for MPI_Gather.
It might be acceptable and reasonable to reduce the combinations to
just equal rank or one rank less (00, 01, 11, 12, 22).

Michael

---------- mpi-f90-interfaces.h.sh
#-----------------------------------------------------------------------

output_120() {
      if test "$output" = "0"; then
          return 0
      fi
      # positional args: $1 = procedure, $2 = send rank, $3 = recv rank,
      # $4 = type suffix for the generated name, $5 = send type, $6 = recv type
      procedure=$1
      rank=$2
      rank2=$3
      type=$5
      type2=$6
      proc="$1$2$3D$4"
      cat <<EOF

subroutine ${proc}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include 'mpif-common.h'
    ${type}, intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    ${type2}, intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
end subroutine ${proc}

EOF
}

start MPI_Gather large

for rank in $allranks
do
    case "$rank" in 0) dim='' ; esac
    case "$rank" in 1) dim=', dimension(:)' ; esac
    case "$rank" in 2) dim=', dimension(:,:)' ; esac
    case "$rank" in 3) dim=', dimension(:,:,:)' ; esac
    case "$rank" in 4) dim=', dimension(:,:,:,:)' ; esac
    case "$rank" in 5) dim=', dimension(:,:,:,:,:)' ; esac
    case "$rank" in 6) dim=', dimension(:,:,:,:,:,:)' ; esac
    case "$rank" in 7) dim=', dimension(:,:,:,:,:,:,:)' ; esac

    for rank2 in $allranks
    do
      case "$rank2" in 0) dim2='' ; esac
      case "$rank2" in 1) dim2=', dimension(:)' ; esac
      case "$rank2" in 2) dim2=', dimension(:,:)' ; esac
      case "$rank2" in 3) dim2=', dimension(:,:,:)' ; esac
      case "$rank2" in 4) dim2=', dimension(:,:,:,:)' ; esac
      case "$rank2" in 5) dim2=', dimension(:,:,:,:,:)' ; esac
      case "$rank2" in 6) dim2=', dimension(:,:,:,:,:,:)' ; esac
      case "$rank2" in 7) dim2=', dimension(:,:,:,:,:,:,:)' ; esac

      # recvbuf may never be a scalar, and its rank must be >= sendbuf's rank
      if [ ${rank2} != "0" ] && [ ${rank2} -ge ${rank} ]; then

      output_120 MPI_Gather ${rank} ${rank2} CH "character${dim}" "character${dim2}"
      output_120 MPI_Gather ${rank} ${rank2} L "logical${dim}" "logical${dim2}"
      for kind in $ikinds
      do
        output_120 MPI_Gather ${rank} ${rank2} I${kind} "integer*${kind}${dim}" "integer*${kind}${dim2}"
      done
      for kind in $rkinds
      do
        output_120 MPI_Gather ${rank} ${rank2} R${kind} "real*${kind}${dim}" "real*${kind}${dim2}"
      done
      for kind in $ckinds
      do
        output_120 MPI_Gather ${rank} ${rank2} C${kind} "complex*${kind}${dim}" "complex*${kind}${dim2}"
      done

      fi
    done
done
end MPI_Gather
----------
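
For reference, here is the interface the patched
mpi-f90-interfaces.h.sh above would emit for the 1-D send / 2-D
receive integer*4 combination. This is expanded by hand from the
output_120 template (assuming 4 appears in $ikinds), not copied from
actual generated output:

subroutine MPI_Gather12DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include 'mpif-common.h'
    integer*4, dimension(:), intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    integer*4, dimension(:,:), intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
end subroutine MPI_Gather12DI4

----------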
--- mpi_gather_f90.f90.sh
output() {
      procedure=$1
      rank=$2
      rank2=$3
      type=$5
      type2=$6
      proc="$1$2$3D$4"
      cat <<EOF

subroutine ${proc}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include "mpif-common.h"
    ${type}, intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    ${type2}, intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
    call ${procedure}(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
end subroutine ${proc}

EOF
}

for rank in $allranks
do
    case "$rank" in 0) dim='' ; esac
    case "$rank" in 1) dim=', dimension(:)' ; esac
    case "$rank" in 2) dim=', dimension(:,:)' ; esac
    case "$rank" in 3) dim=', dimension(:,:,:)' ; esac
    case "$rank" in 4) dim=', dimension(:,:,:,:)' ; esac
    case "$rank" in 5) dim=', dimension(:,:,:,:,:)' ; esac
    case "$rank" in 6) dim=', dimension(:,:,:,:,:,:)' ; esac
    case "$rank" in 7) dim=', dimension(:,:,:,:,:,:,:)' ; esac

    for rank2 in $allranks
    do
      case "$rank2" in 0) dim2='' ; esac
      case "$rank2" in 1) dim2=', dimension(:)' ; esac
      case "$rank2" in 2) dim2=', dimension(:,:)' ; esac
      case "$rank2" in 3) dim2=', dimension(:,:,:)' ; esac
      case "$rank2" in 4) dim2=', dimension(:,:,:,:)' ; esac
      case "$rank2" in 5) dim2=', dimension(:,:,:,:,:)' ; esac
      case "$rank2" in 6) dim2=', dimension(:,:,:,:,:,:)' ; esac
      case "$rank2" in 7) dim2=', dimension(:,:,:,:,:,:,:)' ; esac

      if [ ${rank2} != "0" ] && [ ${rank2} -ge ${rank} ]; then

        output MPI_Gather ${rank} ${rank2} CH "character${dim}" "character${dim2}"
        output MPI_Gather ${rank} ${rank2} L "logical${dim}" "logical${dim2}"
        for kind in $ikinds
        do
          output MPI_Gather ${rank} ${rank2} I${kind} "integer*${kind}${dim}" "integer*${kind}${dim2}"
        done
        for kind in $rkinds
        do
          output MPI_Gather ${rank} ${rank2} R${kind} "real*${kind}${dim}" "real*${kind}${dim2}"
        done
        for kind in $ckinds
        do
          output MPI_Gather ${rank} ${rank2} C${kind} "complex*${kind}${dim}" "complex*${kind}${dim2}"
        done

      fi
    done
done
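
----------
And, for completeness, the matching body that mpi_gather_f90.f90.sh
would generate for the same 1-D/2-D integer*4 combination. Again this
is expanded by hand from the template above, not actual generator
output; the body simply forwards the arguments via
call ${procedure}(...), i.e. call MPI_Gather(...):

subroutine MPI_Gather12DI4(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
    include "mpif-common.h"
    integer*4, dimension(:), intent(in) :: sendbuf
    integer, intent(in) :: sendcount
    integer, intent(in) :: sendtype
    integer*4, dimension(:,:), intent(out) :: recvbuf
    integer, intent(in) :: recvcount
    integer, intent(in) :: recvtype
    integer, intent(in) :: root
    integer, intent(in) :: comm
    integer, intent(out) :: ierr
    call MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, &
          recvtype, root, comm, ierr)
end subroutine MPI_Gather12DI4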
