Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] problem with fortran, MPI_REDUCE and MPI_IN_PLACE
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-11-29 13:57:29


Ask and you shall receive!

I got a tip from the MPICH2 guys about how they handle this; it seems that the magic flag is -Wl,-commons,use_dylibs, which has gfortran pass -commons use_dylibs through to the OS X linker. Thanks, Dave Goodell!

I will commit this to the OMPI SVN trunk tonight (because it's an autotools-level change, which we try not to do during the workday), and will file tickets to get this change over to v1.4 and v1.5.

While you're waiting for a release with this fix, you can either manually add -Wl,-commons,use_dylibs to your mpif77/mpif90 command lines, or edit your $prefix/share/ompi/mpif77-wrapper-data.txt file (and mpif90-wrapper-data.txt file) to set the "compiler_flags" line to include -Wl,-commons,use_dylibs. For example:

compiler_flags=-Wl,-commons,use_dylibs
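
If you go the manual route instead, just append the flag to your normal compile line; for example, with a hypothetical source file named reduce_test.f90:

  mpif90 -Wl,-commons,use_dylibs reduce_test.f90 -o reduce_test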

Woo hoo!
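
A bit of background on why that linker flag matters (the quoted thread below has the details): in the Fortran bindings, MPI_IN_PLACE is a variable sitting in a common block, and the C side recognizes it by comparing the address it receives against its own view of that symbol. Here is a minimal, stand-alone sketch of that address-comparison idea; the names are made up for illustration and this is not Open MPI's actual source:

/* Illustrative sketch only, NOT Open MPI's actual source: the Fortran side
   passes the address of a common-block variable for MPI_IN_PLACE, and the
   C side recognizes it by comparing that address against its own view of
   the same symbol. */
#include <stdio.h>

/* Stand-in for the common-block variable behind Fortran's MPI_IN_PLACE. */
static int fortran_in_place_sentinel;

/* Stand-in for the C routine behind the Fortran MPI_REDUCE binding. */
static void reduce_wrapper(void *sendbuf)
{
    if (sendbuf == (void *) &fortran_in_place_sentinel) {
        printf("MPI_IN_PLACE detected\n");
    } else {
        printf("ordinary send buffer\n");
    }
}

int main(void)
{
    int x[10];
    reduce_wrapper(x);                          /* ordinary buffer */
    reduce_wrapper(&fortran_in_place_sentinel); /* the sentinel    */
    return 0;
}

My understanding of why the flag helps: by default, the OS X linker gives the executable its own copy of the common-block symbol instead of resolving it against the copy in the MPI shared library, so the two addresses can never match; -commons,use_dylibs tells the linker to use the dylib's copy, and the address comparison works again.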

On Nov 28, 2011, at 8:11 PM, Jeff Squyres wrote:

> Unfortunately, this is a known issue. :-\
>
> I have not found a reliable way to deduce that MPI_IN_PLACE has been passed as a parameter to MPI_REDUCE (and friends) on OS X. There's something very strange going on with regard to the Fortran compiler and common block variables (which is where MPI_IN_PLACE and the other sentinel-value MPI constants are defined).
>
> We have a very old ticket open on this issue:
>
> https://svn.open-mpi.org/trac/ompi/ticket/1982
>
> Any suggestions would be welcome. :-\
>
>
> On Nov 23, 2011, at 1:20 PM, Arjen van Elteren wrote:
>
>> Dear All,
>>
>> I'm running a complex program with a number of MPI_REDUCE calls; every call uses MPI_IN_PLACE as the first parameter (the send buffer).
>>
>> I'm currently testing this program on Mac OS X 10.6 with MacPorts installed.
>>
>> Unfortunately, all MPI_REDUCE calls with MPI_IN_PLACE seem to fail.
>>
>> I've pinpointed the problem to the position of the MPI_IN_PLACE parameter: it seems to matter whether it is the first or the second parameter of the MPI_REDUCE call.
>>
>> This is specific to Fortran; in C the order does not matter!
>>
>> A simple program to test this:
>>
>> PROGRAM MAIN
>> implicit none
>> include 'mpif.h'
>> integer :: x(10)
>> integer :: ioerror
>> call MPI_INIT(ioerror)
>> x = 1
>>
>> print *, x
>> ! MPI_IN_PLACE as the receive buffer (not the form the MPI standard specifies)
>> call MPI_REDUCE(x, MPI_IN_PLACE, 10, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ioerror)
>> print *, x
>> ! MPI_IN_PLACE as the send buffer at the root (the standard in-place form)
>> call MPI_REDUCE(MPI_IN_PLACE, x, 10, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ioerror)
>> print *, x
>>
>> call MPI_FINALIZE(ioerror)
>> END PROGRAM
>>
>> I run this on one process (mpiexec ./a.out).
>>
>> I'm running Open MPI version 1.5.4 (from MacPorts).
>>
>> Open MPI was compiled with gfortran 4.4.6.
>>
>> Is this a bug in Open MPI, or is my understanding of MPI_REDUCE wrong?
>>
>> Best regards,
>>
>> Arjen

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/