Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE
From: Ricardo Fonseca (ricardo.fonseca_at_[hidden])
Date: 2009-07-30 10:41:54


(I just realized I had the wrong subject line, so here it goes again.)

Hi Jeff

Yes, I am using the right one. I've installed the freshly compiled
Open MPI into /opt/openmpi/1.3.3-g95-32. If I edit the mpif.h file by
hand and put "error!" on the first line, I get:

zamblap:sandbox zamb$ edit /opt/openmpi/1.3.3-g95-32/include/mpif.h

zamblap:sandbox zamb$ mpif77 inplace_test.f90

In file mpif.h:1
    Included at inplace_test.f90:7

error!
1
Error: Unclassifiable statement at (1)

(BTW, if I use the F90 bindings instead, I get a similar problem,
except that the address of the MPI_IN_PLACE Fortran constant is
slightly different from the F77 binding: instead of 0x50920 I get
0x508e0.)
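
A quick way to see the library's side of this comparison is to print
the C-side sentinel addresses directly. The sketch below is mine, not
Open MPI code: it reuses the four internal symbol names that show up
in reduce_f.c (quoted further down), declares them as plain int purely
so their addresses can be taken, and has to be built with Open MPI's
mpicc (plus, if needed, Open MPI's Fortran support library, e.g.
libmpi_f77, depending on how the install is laid out):

/* check_sentinels.c -- debugging sketch: print the addresses of the
 * Fortran MPI_IN_PLACE sentinel under the four common name-mangling
 * schemes.  The symbol names are Open MPI internals; the int type is
 * a stand-in used only for address-taking. */
#include <stdio.h>

extern int MPI_FORTRAN_IN_PLACE;   /* all-caps                  */
extern int mpi_fortran_in_place;   /* lower-case, no underscore */
extern int mpi_fortran_in_place_;  /* one trailing underscore   */
extern int mpi_fortran_in_place__; /* two trailing underscores  */

int main(void)
{
    printf(" MPI_FORTRAN_IN_PLACE   = %p\n", (void *) &MPI_FORTRAN_IN_PLACE);
    printf(" mpi_fortran_in_place   = %p\n", (void *) &mpi_fortran_in_place);
    printf(" mpi_fortran_in_place_  = %p\n", (void *) &mpi_fortran_in_place_);
    printf(" mpi_fortran_in_place__ = %p\n", (void *) &mpi_fortran_in_place__);
    return 0;
}

If none of these matches the sendbuf address the Fortran binding
actually receives (0x50920 / 0x508e0 here), the sentinel can never be
recognized.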

Thanks for your help,

Ricardo

On Jul 29, 2009, at 17:00 , users-request_at_[hidden] wrote:

> Date: Wed, 29 Jul 2009 07:54:38 -0500
> From: Jeff Squyres <jsquyres_at_[hidden]>
> Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE /
> MPI_ALLREDUCE
> To: "Open MPI Users" <users_at_[hidden]>
>
> Can you confirm that you're using the right mpif.h?
>
> Keep in mind that each MPI implementation's mpif.h is different --
> it's a common mistake to assume that the mpif.h from one MPI
> implementation will work with another (e.g., someone copies mpif.h
> from one MPI into your software's source tree, so the compiler
> always finds that one instead of the one provided by the MPI
> implementation).
>
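
What makes a copied mpif.h so poisonous for MPI_IN_PLACE specifically
is that Fortran sentinel constants are matched by address, not by
value: mpif.h refers to a particular variable, and the C layer asks
"is the buffer argument the address of that exact variable?". A stale
header effectively introduces a different variable, so the test can
never succeed. A toy illustration of the idea (plain C, no MPI; all
names here are invented):

#include <stdio.h>

/* The sentinel is a specific object; "is this IN_PLACE?" means
 * "is this the address of that object?". */
static int library_sentinel;    /* what the real header refers to */
static int stale_copy_sentinel; /* what a copied header refers to */

static const char *classify(const void *buf)
{
    return buf == (const void *) &library_sentinel
               ? "recognized as IN_PLACE"
               : "treated as an ordinary buffer";
}

int main(void)
{
    printf("library's constant:    %s\n", classify(&library_sentinel));
    printf("stale copy's constant: %s\n", classify(&stale_copy_sentinel));
    return 0;
}
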
>
> On Jul 28, 2009, at 1:17 PM, Ricardo Fonseca wrote:
>
>> Hi George
>>
>> I did some extra digging and found that (for some reason) the
>> MPI_IN_PLACE parameter is not being recognized as such by
>> mpi_reduce_f (reduce_f.c:61). I added a couple of printfs:
>>
>> printf(" sendbuf = %p \n", sendbuf );
>>
>> printf(" MPI_FORTRAN_IN_PLACE = %p \n", &MPI_FORTRAN_IN_PLACE );
>> printf(" mpi_fortran_in_place = %p \n", &mpi_fortran_in_place );
>> printf(" mpi_fortran_in_place_ = %p \n", &mpi_fortran_in_place_ );
>> printf(" mpi_fortran_in_place__ = %p \n",
>> &mpi_fortran_in_place__ );
>>
>> And this is what I get on node 0:
>>
>> sendbuf = 0x50920
>> MPI_FORTRAN_IN_PLACE = 0x17cd30
>> mpi_fortran_in_place = 0x17cd34
>> mpi_fortran_in_place_ = 0x17cd38
>> mpi_fortran_in_place__ = 0x17cd3c
>>
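
Those four addresses are the same logical constant under the four
usual Fortran external-name conventions (all-caps; lower-case; one
trailing underscore; two trailing underscores), and the library
evidently exports all four spellings so the address check can match
whichever symbol the Fortran compiler emitted. Presumably
OMPI_F2C_IN_PLACE boils down to something like the sketch below (the
helper name is mine; the int declarations are address-taking stand-ins,
and on a correct build the Fortran-side constant would coincide with
one of these addresses):

#include <mpi.h>   /* for the C-side MPI_IN_PLACE */

extern int MPI_FORTRAN_IN_PLACE;
extern int mpi_fortran_in_place;
extern int mpi_fortran_in_place_;
extern int mpi_fortran_in_place__;

/* Map the Fortran MPI_IN_PLACE sentinel, whatever its mangling,
 * to the C-side MPI_IN_PLACE; pass ordinary buffers through. */
void *f2c_in_place(void *sendbuf)
{
    if (sendbuf == (void *) &MPI_FORTRAN_IN_PLACE ||
        sendbuf == (void *) &mpi_fortran_in_place ||
        sendbuf == (void *) &mpi_fortran_in_place_ ||
        sendbuf == (void *) &mpi_fortran_in_place__) {
        return MPI_IN_PLACE;
    }
    return sendbuf;
}

The failure mode is then visible in the numbers above: sendbuf arrives
as 0x50920 while all four known candidates sit at 0x17cd30-0x17cd3c,
so no comparison fires and the buffer is treated as a real send buffer.
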
>> This makes OMPI_F2C_IN_PLACE(sendbuf) fail. If I replace the line:
>>
>> sendbuf = OMPI_F2C_IN_PLACE(sendbuf);
>>
>> with:
>>
>> /* hard-coded address of the Fortran MPI_IN_PLACE observed above */
>> if ( sendbuf == (char *) 0x50920 ) {
>>     printf("sendbuf is MPI_IN_PLACE!\n");
>>     sendbuf = (char *) MPI_IN_PLACE;
>> }
>>
>> Then the code works and gives the correct result:
>>
>> sendbuf is MPI_IN_PLACE!
>> Result:
>> 3. 3. 3. 3.
>>
>> So my guess is that somehow the Fortran MPI_IN_PLACE constant is
>> getting the wrong address. Could this be related to the Fortran
>> compilers I'm using (ifort / g95)?
>>
>> Ricardo
>