
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
From: Pradeep Jha (pradeep_at_[hidden])
Date: 2013-02-28 11:24:44


Sorry for those mistakes. I have addressed all three problems, plus your "implicit none" suggestion:
- I put "implicit none" at the top of the main program.
- I initialized tag.
- I changed MPI_INT to MPI_INTEGER.
- "send_length" should have been just "send"; it was a typo.

But the code is still hanging in sendrecv. The present form is below:

 main.f

  program main

  implicit none

  include 'mpif.h'

  integer me, np, ierror

  call MPI_init( ierror )
  call MPI_comm_rank( mpi_comm_world, me, ierror )
  call MPI_comm_size( mpi_comm_world, np, ierror )

  call sendrecv(me, np)

  call mpi_finalize( ierror )

  stop
  end

sendrecv.f

  subroutine sendrecv(me, np)

  include 'mpif.h'

  integer np, me, sender, tag
  integer, dimension(mpi_status_size) :: status

  integer, dimension(1) :: recv, send

  if (me.eq.0) then

     do sender = 1, np-1
        call mpi_recv(recv, 1, mpi_integer, sender, tag,
 & mpi_comm_world, status, ierror)

     end do
  end if

  if ((me.ge.1).and.(me.lt.np)) then
     send(1) = me*12

     call mpi_send(send, 1, mpi_integer, 0, tag,
 & mpi_comm_world, ierror)
  end if

  return
  end
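
To make sure I am not misreading the advice, this is the shape I believe the fully cleaned-up subroutine should take, with "implicit none" inside sendrecv as well, ierror declared, and tag set explicitly before the calls (tag = 0 is just an assumed value; the sender and receiver only need to agree on it):

      subroutine sendrecv(me, np)

      implicit none

      include 'mpif.h'

      integer np, me, sender, tag, ierror
      integer, dimension(mpi_status_size) :: status
      integer, dimension(1) :: recv, send

!     both sides must use the same tag for the messages to match
      tag = 0

      if (me.eq.0) then
!        rank 0 collects one integer from every other rank
         do sender = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &                    mpi_comm_world, status, ierror)
         end do
      end if

      if ((me.ge.1).and.(me.lt.np)) then
!        every non-zero rank sends one integer to rank 0
         send(1) = me*12
         call mpi_send(send, 1, mpi_integer, 0, tag,
     &                 mpi_comm_world, ierror)
      end if

      return
      end

(I have kept the continuation marker in column 6 and statements starting in column 7, which is what fixed-form .f files expect.)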

2013/3/1 Jeff Squyres (jsquyres) <jsquyres_at_[hidden]>

> On Feb 28, 2013, at 9:59 AM, Pradeep Jha <pradeep_at_[hidden]>
> wrote:
>
> > Is it possible to call the MPI_send and MPI_recv commands inside a
> subroutine and not the main program?
>
> Yes.
>
> > I have written a minimal program for what I am trying to do. It is
> compiling fine but it is not working. The program just hangs in the
> "sendrecv" subroutine. Any ideas how can I do it?
>
> You seem to have several errors in the sendrecv subroutine. I would
> strongly encourage you to use "implicit none" to avoid many of these
> errors. Here are a few errors I see offhand:
>
> - tag is not initialized
> - what's send_length(1)?
> - use MPI_INTEGER, not MPI_INT (MPI_INT = C int, MPI_INTEGER = Fortran
> INTEGER)
>
>
> > main.f
> >
> >
> > program main
> >
> > include 'mpif.h'
> >
> > integer me, np, ierror
> >
> > call MPI_init( ierror )
> > call MPI_comm_rank( mpi_comm_world, me, ierror )
> > call MPI_comm_size( mpi_comm_world, np, ierror )
> >
> > call sendrecv(me, np)
> >
> > call mpi_finalize( ierror )
> >
> > stop
> > end
> >
> > sendrecv.f
> >
> >
> > subroutine sendrecv(me, np)
> >
> > include 'mpif.h'
> >
> > integer np, me, sender
> > integer, dimension(mpi_status_size) :: status
> >
> > integer, dimension(1) :: recv, send
> >
> > if (me.eq.0) then
> >
> > do sender = 1, np-1
> > call mpi_recv(recv, 1, mpi_int, sender, tag,
> > & mpi_comm_world, status, ierror)
> >
> > end do
> > end if
> >
> > if ((me.ge.1).and.(me.lt.np)) then
> > send_length(1) = me*12
> >
> > call mpi_send(send, 1, mpi_int, 0, tag,
> > & mpi_comm_world, ierror)
> > end if
> >
> > return
> > end
> >
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/