
Open MPI User's Mailing List Archives


From: Edgar Gabriel (gabriel_at_[hidden])
Date: 2007-02-23 11:10:23


Your code is actually not correct. If you look at the MPI specification,
you will see that s should also be an array of length nProcs (in your
test), since you send a different element to each process. If you want to
send the same s to each process, you have to use MPI_Bcast.
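As a minimal, untested sketch of that fix (assuming the intent of your test
is that every rank contributes the value 1.0d0 to every other rank):

------------------------------------------------
program main
 implicit none
 include 'mpif.h'

 integer :: ierr, rank, nProcs
 double precision, allocatable :: s(:), su(:)

 call MPI_Init(ierr)
 call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
 call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)

 ! the send buffer must hold one element per destination rank
 allocate(s(nProcs), su(nProcs))
 s  = 1.0d0
 su = 0.0d0

 ! rank i sends s(j+1) to rank j and receives rank k's element into su(k+1)
 call MPI_Alltoall(s, 1, MPI_DOUBLE_PRECISION, su, 1, &
      & MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)

 ! every rank should now print nProcs values of 1.0
 print *, rank, su

 deallocate(s, su)
 call MPI_Finalize(ierr)
end program main
------------------------------------------------

If what you actually want is for each rank to contribute its single scalar s
to every rank's su, MPI_Allgather with a scalar send buffer would also do it.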

Thanks
Edgar

Chandan Basu wrote:
> I am trying to use MPI_Alltoall in the following program. After
> execution, all the nodes should show the same value for the array su.
> However, only the root node is showing the correct value; the other
> nodes are giving garbage values. Please help.
>
> I have used Open MPI version 1.1.4. The mpif90 wrapper uses Intel Fortran.
>
> cbasu
>
> ------------------------------------------------
> program main
> implicit none
> include 'mpif.h'
>
> integer :: status(MPI_Status_size)
> integer :: ierr, rank, nProcs
> double precision :: s
> double precision, pointer :: su(:)
>
> call MPI_Init (ierr)
> call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
> call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)
>
> allocate(su(nProcs))
> su = 0.0d0
> s = 1.0d0
> call MPI_Alltoall(s, 1, MPI_DOUBLE_PRECISION, su, 1, &
> & MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr);
>
> ! all nodes should have su(1:nProcs) = 1.0 at this point
> print *, rank, su
>
> deallocate(su)
>
> call MPI_Finalize(ierr)
> end program main
> ----------------------------------------------