Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI send recv confusion
From: Gus Correa (gus_at_[hidden])
Date: 2013-02-18 14:02:06


Hi Pradeep

For what it is worth, in the MPI Fortran bindings/calls the
datatype to use is "MPI_INTEGER", not "mpi_int", which you used;
MPI_INT belongs to the MPI C bindings:

http://linux.die.net/man/3/mpi_integer

Also, to prevent variables from inadvertently taking on
the wrong type, you could add:

implicit none

to the top of your code.
You already have an undeclared "ierr" in "call mpi_send".
(You declared "ierror" as an integer, but not "ierr".)
This particular one may not cause any harm, though:
under old Fortran's implicit typing rules, names starting
with "i" (through "n") are integers by default.
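
For example, with those changes the declarations and the receive
would look more or less like this (just a sketch; note that with
"implicit none" the compiler will also make you declare "status",
which mpi_recv needs as an integer array of size MPI_STATUS_SIZE):

      program mpi_test
      implicit none
      include 'mpif.h'

      integer, dimension(3) :: recv, send
      integer :: sender, np, rank, ierror
      integer :: status(MPI_STATUS_SIZE)

      call mpi_init( ierror )
      call mpi_comm_rank( mpi_comm_world, rank, ierror )
      call mpi_comm_size( mpi_comm_world, np, ierror )

      if (rank .eq. 0) then
         do sender = 1, np-1
            call mpi_recv(recv, 3, MPI_INTEGER, sender, 1,
     &                    mpi_comm_world, status, ierror)
         end do
      end if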

I hope this helps,
Gus Correa

On 02/18/2013 01:26 PM, jody wrote:
> Hi Pradeep
>
> I am not sure if this is the reason, but it is usually a bad idea to
> force an order of receives, as you do in your receive loop
> (first from sender 1, then from sender 2, then from sender 3).
> Unless you enforce it yourself, there is no guarantee that the sends
> are performed in that order.
>
> It is better to accept messages from any sender (MPI_ANY_SOURCE)
> instead of from particular ranks, and then to check where each
> message came from by examining the status fields
> (http://www.mpi-forum.org/docs/mpi22-report/node47.htm).
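>
> For example (a rough sketch; it assumes "status" is declared as an
> integer array of size MPI_STATUS_SIZE, and "i" as an integer):
>
>       do i = 1, np-1
>          call mpi_recv(recv, 3, MPI_INTEGER, MPI_ANY_SOURCE, 1,
>      &                 mpi_comm_world, status, ierror)
>          print *, "Data received from ", status(MPI_SOURCE)
>       end do
>
> status(MPI_SOURCE) then tells you which rank each message
> actually came from.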
>
> Hope this helps
> Jody
>
>
> On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
> <pradeep_at_[hidden]> wrote:
>> I have attached a sample of the MPI program I am trying to write. When I run
>> this program using "mpirun -np 4 a.out", my output is:
>>
>> Sender: 1
>> Data received from 1
>> Sender: 2
>> Data received from 1
>> Sender: 2
>>
>> And the run hangs there. I don't understand why the "sender" variable
>> changes its value after mpi_recv. Any ideas?
>>
>> Thank you,
>>
>> Pradeep
>>
>>
>> program mpi_test
>>
>> include 'mpif.h'
>>
>> !----------------( Initialize variables )--------------------
>> integer, dimension(3) :: recv, send
>>
>> integer :: sender, np, rank, ierror
>>
>> call mpi_init( ierror )
>> call mpi_comm_rank( mpi_comm_world, rank, ierror )
>> call mpi_comm_size( mpi_comm_world, np, ierror )
>>
>> !----------------( Main program )--------------------
>>
>> ! receive the data from the other processors
>> if (rank.eq.0) then
>>    do sender = 1, np-1
>>       print *, "Sender: ", sender
>>       call mpi_recv(recv, 3, mpi_int, sender, 1,
>>  &                  mpi_comm_world, status, ierror)
>>       print *, "Data received from ", sender
>>    end do
>> end if
>>
>> ! send the data to the main processor
>> if (rank.ne.0) then
>>    send(1) = 3
>>    send(2) = 4
>>    send(3) = 4
>>    call mpi_send(send, 3, mpi_int, 0, 1, mpi_comm_world, ierr)
>> end if
>>
>> !----------------( clean up )--------------------
>> call mpi_finalize(ierror)
>>
>> return
>> end program mpi_test
>>
>>