Hi George

Thanks for the input. This might be an OS-specific problem: I'm running Mac OS X 10.5.7, and the problem appears in Open MPI versions 1.3.2, 1.3.3 and 1.4a1r21734, using the Intel ifort compiler 11.0 and 11.1 (and also g95 with 1.3.2). I haven't tried older versions. Also, I'm running on a single machine:

zamb$ mpif90 inplace_test.f90
zamb$ mpirun -np 2 ./a.out
 Result:
   2.000000       2.000000       2.000000       2.000000    

I've tried the same code under Linux (openmpi-1.3.3 + gfortran) and it works there, giving the expected 3.000000 (and it also works on other platforms / MPI implementations).
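As a next step I can also try a plain (non in-place) reduce into the out array, to check whether it is only the MPI_IN_PLACE path that misbehaves. Something along these lines (an untested sketch on my part, reusing the buffer / out arrays from the test program quoted below):

 program noinplace

  use mpi

  implicit none

  integer :: ierr, rank, rsize, bsize
  real, dimension( 2, 2 ) :: buffer, out

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, rsize, ierr)

  buffer = rank + 1
  out    = 0
  bsize  = size(buffer)

  ! plain reduce: every rank (including the root) sends buffer, the root receives into out
  call mpi_reduce( buffer, out, bsize, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr )

  if ( rank == 0 ) then
    print *, 'Result (no MPI_IN_PLACE):'
    print *, out
  endif

  call mpi_finalize( ierr )

 end program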

Can you think of some --mca options I should try? (or any other ideas...)
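For instance, would it make sense to rule out the tuned collective component and force the basic one? I was thinking of something like this (just a guess on my part):

zamb$ mpirun --mca coll basic,self -np 2 ./a.out
zamb$ mpirun --mca coll ^tuned -np 2 ./a.out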

Cheers,
Ricardo

---
Prof. Ricardo Fonseca

GoLP - Grupo de Lasers e Plasmas
Instituto de Plasmas e Fusão Nuclear
Instituto Superior Técnico
Av. Rovisco Pais
1049-001 Lisboa
Portugal

tel: +351 21 8419202
fax: +351 21 8464455
web: http://cfp.ist.utl.pt/golp/


On Jul 28, 2009, at 4:24, users-request@open-mpi.org wrote:

Date: Mon, 27 Jul 2009 17:13:23 -0400
From: George Bosilca <bosilca@eecs.utk.edu>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE
To: Open MPI Users <users@open-mpi.org>

Ricardo,

I can't reproduce your problem with the latest version (trunk r21734).  
If I run the provided program on two nodes I get the following answer:
[***]$ mpif77 inplace.f -o inplace -g
[***]$ mpirun -bynode -np 2 ./inplace
 Result:
   3.0000000       3.0000000       3.0000000       3.0000000

This seems correct and in sync with the C answer.

  george.


On Jul 27, 2009, at 09:42, Ricardo Fonseca wrote:

program inplace

 use mpi

 implicit none

 integer :: ierr, rank, rsize, bsize
 real, dimension( 2, 2 ) :: buffer, out
 integer :: rc

 call MPI_INIT(ierr)
 call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
 call MPI_COMM_SIZE(MPI_COMM_WORLD, rsize, ierr)

 buffer = rank + 1
 bsize = size(buffer,1) * size(buffer,2)

 if ( rank == 0 ) then
   ! on the root, the send buffer is MPI_IN_PLACE and the reduction is done in buffer
   call mpi_reduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, MPI_SUM, &
                    0, MPI_COMM_WORLD, ierr )
 else
   ! on the other ranks the receive buffer argument is ignored
   call mpi_reduce( buffer,       0,      bsize, MPI_REAL, MPI_SUM, &
                    0, MPI_COMM_WORLD, ierr )
 endif

 ! use allreduce instead
 ! call mpi_allreduce( MPI_IN_PLACE, buffer, bsize, MPI_REAL, MPI_SUM, &
 !                     MPI_COMM_WORLD, ierr )

 if ( rank == 0 ) then
   print *, 'Result:'
   print *, buffer
 endif

 rc = 0
 call mpi_finalize( rc )

end program