Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] users Digest, Vol 1004, Issue 1
From: Enrico Barausse (enrico.barausse_at_[hidden])
Date: 2008-09-14 07:10:24


Hi

I think it's correct. What I want to do is send a 3-element array from
process 1 to process 0 (= root):

call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)

In some other part of the code, process 0 acts on the 3-element array,
turns it into a 4-element one, and sends it back to process 1, which
receives it with

call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

In practice, what I do is basically given by this simple code (which,
unfortunately, doesn't reproduce the segmentation fault):

        program pingpong
        use mpi
        implicit none
        integer :: a(5), b(4)
        integer :: id, numprocs, k, ierr
        integer :: status(MPI_STATUS_SIZE)

        a = (/1,2,3,4,5/)

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

        if (numprocs /= 2) stop

        if (id == 0) then
                do k = 1, 5
                        a = a + 1
                        call MPI_SEND(a, 5, MPI_INTEGER, 1, k, MPI_COMM_WORLD, ierr)
                        call MPI_RECV(b, 4, MPI_INTEGER, 1, k, MPI_COMM_WORLD, status, ierr)
                end do
        else
                do k = 1, 5
                        call MPI_RECV(a, 5, MPI_INTEGER, 0, k, MPI_COMM_WORLD, status, ierr)
                        b = a(1:4)
                        call MPI_SEND(b, 4, MPI_INTEGER, 0, k, MPI_COMM_WORLD, ierr)
                end do
        end if

        call MPI_FINALIZE(ierr)
        end program pingpong
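[Editor's note: the blocking pair above can also be collapsed into a single MPI_Sendrecv, which is what resolved the original segfault. Below is a minimal, untested sketch of the rank-0 side of the exchange, reusing the variable names from the snippet above; it is an illustration, not the poster's actual code.]

```fortran
! Rank 0: send 5 integers to rank 1 and receive 4 back in one call.
! MPI_Sendrecv posts the send and the receive together, so it cannot
! deadlock the way a blocking MPI_SEND followed by MPI_RECV can when
! both ranks send first and neither has posted a matching receive.
call MPI_SENDRECV(a, 5, MPI_INTEGER, 1, k,     &   ! outgoing buffer, dest, tag
                  b, 4, MPI_INTEGER, 1, k,     &   ! incoming buffer, source, tag
                  MPI_COMM_WORLD, status, ierr)
```

In this particular test program the blocking pair happens to be ordered safely (rank 0 sends first, rank 1 receives first), which is presumably why it does not reproduce the hang; MPI_Sendrecv removes the ordering concern entirely.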

On Sat, Sep 13, 2008 at 6:00 PM, <users-request_at_[hidden]> wrote:
> Send users mailing list submissions to
> users_at_[hidden]
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
> users-request_at_[hidden]
>
> You can reach the person managing the list at
> users-owner_at_[hidden]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
> 1. MPI_sendrecv = MPI_Send+ MPI_RECV ? (Enrico Barausse)
> 2. Re: MPI_sendrecv = MPI_Send+ MPI_RECV ? (Eric Thibodeau)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 13 Sep 2008 14:50:50 +0200
> From: "Enrico Barausse" <enrico.barausse_at_[hidden]>
> Subject: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?
> To: users_at_[hidden]
> Message-ID:
> <845f51b10809130550l194e798bx4a3031f6f978f794_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hello,
>
> I apologize in advance if my question is naive, but I started to use
> Open MPI only one week ago.
> I have a complicated Fortran 90 code which is giving me a segmentation
> fault (address not mapped). I tracked down the problem to the
> following lines:
>
> call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)
> call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>
> the MPI_send is executed by a process (say 1) which sends the array
> toroot to another process (say 0). Process 0 successfully receives the
> array toroot (I print out its components and they are correct), does
> some calculations on it and sends back an array tonode to process 1.
> Nevertheless, the MPI_Send routine above never returns control to
> process 1 (although the array toroot seems to have been transmitted
> alright) and gives a segmentation fault (Signal code: Address not
> mapped (1))
>
> Now, if I replace the two lines above with
>
> call
> MPI_sendrecv(toroot,3,MPI_DOUBLE_PRECISION,root,n,tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>
> I get no errors and the code works perfectly (I tested it vs the
> serial version from which I started). But, and here is my question,
> shouldn't MPI_sendrecv be equivalent to MPI_Send followed by MPI_RECV?
>
> thank you in advance for helping with this
>
> cheers
>
> enrico
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 13 Sep 2008 11:38:12 -0400
> From: Eric Thibodeau <kyron_at_[hidden]>
> Subject: Re: [OMPI users] MPI_sendrecv = MPI_Send+ MPI_RECV ?
> To: Open MPI Users <users_at_[hidden]>
> Message-ID: <48CBDE64.1080900_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Enrico Barausse wrote:
>> Hello,
>>
>> I apologize in advance if my question is naive, but I started to use
>> Open MPI only one week ago.
>> I have a complicated Fortran 90 code which is giving me a segmentation
>> fault (address not mapped). I tracked down the problem to the
>> following lines:
>>
>> call MPI_Send(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)
>> call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>>
> Well, for starters, your receive count doesn't match the send count (4
> vs. 3). Is this a typo?
>> the MPI_send is executed by a process (say 1) which sends the array
>> toroot to another process (say 0). Process 0 successfully receives the
>> array toroot (I print out its components and they are correct), does
>> some calculations on it and sends back an array tonode to process 1.
>> Nevertheless, the MPI_Send routine above never returns control to
>> process 1 (although the array toroot seems to have been transmitted
>> alright) and gives a segmentation fault (Signal code: Address not
>> mapped (1))
>>
>> Now, if I replace the two lines above with
>>
>> call
>> MPI_sendrecv(toroot,3,MPI_DOUBLE_PRECISION,root,n,tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
>>
>> I get no errors and the code works perfectly (I tested it vs the
>> serial version from which I started). But, and here is my question,
>> shouldn't MPI_sendrecv be equivalent to MPI_Send followed by MPI_RECV?
>>
>> thank you in advance for helping with this
>>
>> cheers
>>
>> enrico
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
>
>
> ------------------------------
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> End of users Digest, Vol 1004, Issue 1
> **************************************
>