
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?
From: rahmani (m_rahmani56_at_[hidden])
Date: 2008-09-13 14:50:37

----- Original Message -----
From: "Enrico Barausse" <enrico.barausse_at_[hidden]>
To: users_at_[hidden]
Sent: Saturday, September 13, 2008 8:50:50 AM (GMT-0500) America/New_York
Subject: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?


I apologize in advance if my question is naive, but I started using
Open MPI only one week ago.
I have a complicated Fortran 90 code which is giving me a segmentation
fault (address not mapped). I tracked the problem down to the
following lines:
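
The snippet itself does not appear in the archived message; from the
description below, on process 1 it was a blocking send of the array toroot
to process 0 followed by a blocking receive of the array tonode. A minimal
sketch of that pattern, where only the array names come from the message
and the count n, the tag and the double-precision datatype are assumptions:

   ! on process 1; status is declared as integer status(MPI_STATUS_SIZE)
   call MPI_Send(toroot, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, ierr)
   call MPI_Recv(tonode, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, status, ierr)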


The MPI_Send is executed by one process (say 1), which sends the array
toroot to another process (say 0). Process 0 successfully receives the
array toroot (I print out its components and they are correct), does
some calculations on it, and sends back an array tonode to process 1.
Nevertheless, the MPI_Send routine above never returns control to
process 1 (although the array toroot seems to have been transmitted
all right) and gives a segmentation fault (Signal code: Address not
mapped (1)).

Now, if I replace the two lines above with a single MPI_Sendrecv call
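(the exact call is again not preserved here; this sketch reuses the assumed
count, tag and datatype, pairing the send of toroot with the receive of
tonode in one MPI_Sendrecv):

   ! hypothetical combined call on process 1
   call MPI_Sendrecv(toroot, n, MPI_DOUBLE_PRECISION, 0, 0, &
                     tonode, n, MPI_DOUBLE_PRECISION, 0, 0, &
                     MPI_COMM_WORLD, status, ierr)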


I get no errors and the code works perfectly (I tested it against the
serial version from which I started). But, and here is my question:
shouldn't MPI_Sendrecv be equivalent to MPI_Send followed by MPI_Recv?

thank you in advance for helping with this



I think if you use MPI_Isend it will work correctly!
Test this and write me what happens!
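
A minimal sketch of that suggestion, reusing the hypothetical arguments
from the quoted snippets above: post the send as non-blocking so the
process can move on to its receive right away, then wait for the send to
complete.

   ! non-blocking send variant on process 1; request is an integer handle
   call MPI_Isend(toroot, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, request, ierr)
   call MPI_Recv(tonode, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, status, ierr)
   call MPI_Wait(request, status, ierr)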