Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE
From: George Bosilca (bosilca_at_[hidden])
Date: 2009-07-27 17:13:23


Ricardo,

I can't reproduce your problem with the latest version (trunk r21734).
If I run the provided program on two nodes, I get the following output:
[***]$ mpif77 inplace.f -o inplace -g
[***]$ mpirun -bynode -np 2 ./inplace
  Result:
    3.0000000 3.0000000 3.0000000 3.0000000

This seems correct and in sync with the C answer.
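
For reference, the C equivalent of this in-place reduction looks roughly like the sketch below (my own minimal reconstruction, not necessarily the exact program the C answer came from):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    float buffer[4];
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 4; i++)
        buffer[i] = rank + 1.0f;

    /* Only the root passes MPI_IN_PLACE as the send buffer; the other
       ranks pass their data normally, and the receive buffer argument
       is ignored on them. */
    if (rank == 0)
        MPI_Reduce(MPI_IN_PLACE, buffer, 4, MPI_FLOAT, MPI_SUM,
                   0, MPI_COMM_WORLD);
    else
        MPI_Reduce(buffer, NULL, 4, MPI_FLOAT, MPI_SUM,
                   0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Result:\n");
        for (i = 0; i < 4; i++)
            printf("   %f", buffer[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

Built with mpicc and run with the same "mpirun -bynode -np 2" command, both versions should print 3.0 four times on the root.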

   george.

On Jul 27, 2009, at 09:42, Ricardo Fonseca wrote:

> program inplace
>
> use mpi
>
> implicit none
>
> integer :: ierr, rank, rsize, bsize
> real, dimension( 2, 2 ) :: buffer, out
> integer :: rc
>
> call MPI_INIT(ierr)
> call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
> call MPI_COMM_SIZE(MPI_COMM_WORLD, rsize, ierr)
>
> buffer = rank + 1
> bsize = size(buffer,1) * size(buffer,2)
>
> if ( rank == 0 ) then
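> ! the root passes MPI_IN_PLACE as the send buffer, so buffer is both input and output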
> call mpi_reduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr )
> else
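> ! on non-root ranks the receive buffer argument is not significant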
> call mpi_reduce( buffer, out, bsize, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr )
> endif
>
> ! use allreduce instead
> ! call mpi_allreduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, MPI_SUM, MPI_COMM_WORLD, ierr )
>
> if ( rank == 0 ) then
> print *, 'Result:'
> print *, buffer
> endif
>
> rc = 0
> call mpi_finalize( rc )
>
> end program
>