Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_sendrecv = MPI_Send + MPI_RECV?
From: Eric Thibodeau (kyron_at_[hidden])
Date: 2008-09-13 11:38:12


Enrico Barausse wrote:
> Hello,
>
> I apologize in advance if my question is naive, but I started using
> Open MPI only a week ago.
> I have a complicated Fortran 90 code that is giving me a segmentation
> fault (address not mapped). I tracked the problem down to the
> following lines:
>
> call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD
> call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>
Well, for starters, your receive count doesn't match your send count (4
vs. 3). Is that a typo?
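
Also, the MPI_Send call as quoted is incomplete: it is missing its
closing parenthesis and the trailing ierr argument. That may just be a
copy-and-paste truncation, but if the actual code omits ierr, that
alone can produce exactly this kind of "address not mapped" crash with
the Fortran bindings, because the library writes the error code through
whatever garbage happens to be at that address. For reference, here is
a minimal sketch of how the pair should look (buffer names, sizes, and
the tag n are taken from your snippet; the declarations are my guess):

   double precision :: toroot(3), tonode(4)
   integer :: root, n, ierr
   integer :: status(MPI_STATUS_SIZE)

   ! ierr is the last argument to both calls; MPI_Recv additionally
   ! takes a status argument just before ierr
   call MPI_Send(toroot, 3, MPI_DOUBLE_PRECISION, root, n, &
                 MPI_COMM_WORLD, ierr)
   call MPI_Recv(tonode, 4, MPI_DOUBLE_PRECISION, root, n, &
                 MPI_COMM_WORLD, status, ierr)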
> The MPI_Send is executed by a process (say 1), which sends the array
> toroot to another process (say 0). Process 0 successfully receives the
> array toroot (I print out its components and they are correct), does
> some calculations on it, and sends back an array tonode to process 1.
> Nevertheless, the MPI_Send routine above never returns control to
> process 1 (although the array toroot seems to have been transmitted
> all right) and gives a segmentation fault (Signal code: Address not
> mapped (1)).
>
> Now, if I replace the two lines above with
>
> call MPI_Sendrecv(toroot,3,MPI_DOUBLE_PRECISION,root,n,tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>
> I get no errors and the code works perfectly (I tested it against the
> serial version from which I started). But, and here is my question,
> shouldn't MPI_Sendrecv be equivalent to an MPI_Send followed by an
> MPI_Recv?
>
> Thank you in advance for helping with this.
>
> Cheers,
>
> Enrico
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
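
As to the general question: no, the two are not strictly equivalent.
MPI_Send is allowed to block until a matching receive is posted on the
other side, so if two processes both send first and then receive, each
can wait forever on the other's receive. MPI_Sendrecv performs the send
and the receive together and can be relied on not to deadlock against
another MPI_Sendrecv or a matching Recv/Send pair. Here is a minimal,
self-contained sketch of the exchange you describe, with MPI_Sendrecv
on process 1 and a Recv/Send pair on process 0 (the names follow your
snippet; the actual computation is replaced by a stand-in):

   program exchange
      use mpi
      implicit none
      double precision :: toroot(3), tonode(4)
      integer :: rank, n, ierr
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      n = 1   ! message tag

      if (rank == 1) then
         toroot = 1.0d0
         ! send 3 doubles to root and receive 4 back in one call
         call MPI_Sendrecv(toroot, 3, MPI_DOUBLE_PRECISION, 0, n, &
                           tonode, 4, MPI_DOUBLE_PRECISION, 0, n, &
                           MPI_COMM_WORLD, status, ierr)
      else if (rank == 0) then
         call MPI_Recv(toroot, 3, MPI_DOUBLE_PRECISION, 1, n, &
                       MPI_COMM_WORLD, status, ierr)
         tonode = 2.0d0 * toroot(1)   ! stand-in for the real work
         call MPI_Send(tonode, 4, MPI_DOUBLE_PRECISION, 1, n, &
                       MPI_COMM_WORLD, ierr)
      end if

      call MPI_Finalize(ierr)
   end program exchange

One more suggestion: if you can compile with "use mpi" instead of
"include 'mpif.h'", the compiler gets explicit interfaces for the MPI
routines and can flag a missing ierr at compile time.

Eric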