
Subject: Re: [OMPI users] Calling a variable from another processor
From: Pradeep Jha (pradeep_at_[hidden])
Date: 2014-01-17 01:28:23


Thanks a ton, Christoph. That helps a lot.

2014/1/17 Christoph Niethammer <niethammer_at_[hidden]>

> Hello,
>
> Find attached a minimal example - hopefully doing what you intended.
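>
> The attachment is not preserved in this archive; the following is a minimal
> sketch of what such an example might look like, assuming fence-synchronized
> MPI_Get (the way Y and Z are chosen below is a placeholder for the real
> computation):
>
> -----
> program rma_get
>   use mpi
>   implicit none
>   integer, parameter :: S = 10          ! array size on every rank
>   integer :: A(S), val, me, nprocs, ierr, intsize
>   integer :: win, Y, Z
>   integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
>
>   call MPI_Init(ierr)
>   call MPI_Comm_rank(MPI_COMM_WORLD, me, ierr)
>   call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
>
>   A = me                                ! fill A so the result is checkable
>   call MPI_Type_size(MPI_INTEGER, intsize, ierr)
>   winsize = S * intsize
>
>   ! Expose this rank's A to all other ranks through an RMA window
>   call MPI_Win_create(A, winsize, intsize, MPI_INFO_NULL, &
>                       MPI_COMM_WORLD, win, ierr)
>
>   Y = mod(me + 1, nprocs)               ! placeholder target rank
>   Z = mod(me, S) + 1                    ! placeholder index, 1 <= Z <= S
>
>   call MPI_Win_fence(0, win, ierr)
>   disp = Z - 1                          ! RMA displacements are zero-based
>   call MPI_Get(val, 1, MPI_INTEGER, Y, disp, 1, MPI_INTEGER, win, ierr)
>   call MPI_Win_fence(0, win, ierr)      ! val is valid after this fence
>
>   print *, 'rank', me, ': A(', Z, ') on rank', Y, 'is', val
>
>   call MPI_Win_free(win, ierr)
>   call MPI_Finalize(ierr)
> end program rma_get
> -----
>
> Every rank fetches A(Z) from rank Y without rank Y having to post a
> matching receive, which is exactly what the question asks for.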
>
> Regards
> Christoph
>
> --
>
> Christoph Niethammer
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: niethammer_at_[hidden]
> http://www.hlrs.de/people/niethammer
>
>
>
> ----- Original Message -----
> From: "Pradeep Jha" <pradeep_at_[hidden]>
> To: "Open MPI Users" <users_at_[hidden]>
> Sent: Friday, January 10, 2014 10:23:40
> Subject: Re: [OMPI users] Calling a variable from another processor
>
>
>
> Thanks for your responses. I am still not able to figure it out. I will
> simplify my problem statement further. Can someone please help me with
> Fortran 90 code for it?
>
>
> 1) I have N processors, each with an array A of size S.
> 2) On any random processor (say rank X), I calculate two integer
> values, Y and Z (0<=Y<N and 0<Z<=S).
> 3) On processor X, I want to get the value of A(Z) on processor Y.
>
>
> This operation will happen in parallel on every processor. Can anyone
> please help me with this?
>
> 2014/1/9 Jeff Hammond <jeff.science_at_[hidden]>
>
>
> One-sided is quite simple to understand. It is like file I/O: you
> read/write (get/put) to a memory object. If you want to make it hard to
> screw up, use passive target and wrap your calls in lock/unlock so every
> operation is globally visible where it's called.
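>
> As a sketch, that passive-target idiom might look like this in Fortran,
> assuming a window win has been created over the array A (as in the example
> near the top of this thread) and disp is the zero-based displacement of
> the wanted element:
>
> -----
> ! Lock rank Y's window, fetch one element, unlock.
> ! val is guaranteed valid once MPI_Win_unlock returns.
> call MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win, ierr)
> call MPI_Get(val, 1, MPI_INTEGER, Y, disp, 1, MPI_INTEGER, win, ierr)
> call MPI_Win_unlock(Y, win, ierr)
> -----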
>
> I've never deadlocked RMA, while p2p is easy to hang for nontrivial
> patterns unless you only do nonblocking plus waitall.
>
> If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM
> implementations over MPI-3 already (I wrote both...).
>
> The bigger issue is that Open MPI doesn't support MPI-3 RMA, just the
> MPI-2 RMA stuff, and even then, datatypes are broken with RMA. Both
> ARMCI-MPI3 and OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH
> derivative to work, but these are readily available on every platform
> normal people use (BGQ is the only system missing, and that will be
> resolved soon). I've run MPI-3 on my Mac (MPICH), clusters (MVAPICH),
> Cray (Cray MPI), and SGI (MPICH).
>
> Best,
>
> Jeff
>
> Sent from my iPhone
>
>
>
> > On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)"
> > <jsquyres_at_[hidden]> wrote:
> >
> > MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it
> > for a beginner (I don't even recommend it for many MPI experts ;-) ).
> >
> > Why not look at the MPI_SOURCE in the status that you got back from
> > MPI_RECV? In Fortran, it would look something like this (typed off the
> > top of my head; forgive typos):
> >
> > -----
> > integer, dimension(MPI_STATUS_SIZE) :: status
> > ...
> > ! Receive from any sender; the actual source lands in status(MPI_SOURCE)
> > call MPI_Recv(buffer, count, MPI_INTEGER, MPI_ANY_SOURCE, tag, &
> >               MPI_COMM_WORLD, status, ierr)
> > -----
> >
> > The rank of the sender will be in status(MPI_SOURCE).
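> >
> > Applied to the original problem, a hypothetical sketch of the
> > request/reply exchange (declarations omitted; tag is any agreed-upon
> > message tag) could be:
> >
> > -----
> > ! On rank X: ask rank Y for element Z, then receive the answer
> > call MPI_Send(Z, 1, MPI_INTEGER, Y, tag, MPI_COMM_WORLD, ierr)
> > call MPI_Recv(val, 1, MPI_INTEGER, Y, tag, MPI_COMM_WORLD, status, ierr)
> >
> > ! On the serving rank: MPI_ANY_SOURCE accepts a request from anyone,
> > ! and status(MPI_SOURCE) says where to send the reply
> > call MPI_Recv(Z, 1, MPI_INTEGER, MPI_ANY_SOURCE, tag, &
> >               MPI_COMM_WORLD, status, ierr)
> > call MPI_Send(A(Z), 1, MPI_INTEGER, status(MPI_SOURCE), tag, &
> >               MPI_COMM_WORLD, ierr)
> > -----
> >
> > Note that when every rank is simultaneously a requester and a server,
> > blocking calls like these deadlock easily; that difficulty is exactly
> > why the one-sided approach is suggested elsewhere in this thread.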
> >
> >
> >> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer <niethammer_at_[hidden]> wrote:
> >>
> >> Hello,
> >>
> >> I suggest you have a look at the MPI one-sided functionality (Section 11
> >> of the MPI 3.0 spec).
> >> Create a window to allow the other processes to access the arrays A
> >> directly via MPI_Get/MPI_Put.
> >> Be aware of synchronization, which you have to implement via
> >> MPI_Win_fence or manual locking.
> >>
> >> Regards
> >> Christoph
> >>
> >> --
> >>
> >> Christoph Niethammer
> >> High Performance Computing Center Stuttgart (HLRS)
> >> Nobelstrasse 19
> >> 70569 Stuttgart
> >>
> >> Tel: ++49(0)711-685-87203
> >> email: niethammer_at_[hidden]
> >> http://www.hlrs.de/people/niethammer
> >>
> >>
> >>
> >> ----- Original Message -----
> >> From: "Pradeep Jha" <pradeep_at_[hidden]>
> >> To: "Open MPI Users" <users_at_[hidden]>
> >> Sent: Thursday, January 9, 2014 12:10:51
> >> Subject: [OMPI users] Calling a variable from another processor
> >>
> >>
> >> I am writing a parallel program in Fortran 77. I have the following
> >> problem:
> >> 1) I have N processors.
> >> 2) Each processor contains an array A of size S.
> >> 3) Using some function, on every processor (say rank X), I calculate
> >> the values of two integers Y and Z, where Z<S. (The values of Y and Z
> >> are different on every processor.)
> >> 4) I want to get the value of A(Z) from processor Y to processor X.
> >>
> >> I thought of first sending the value X from processor X to processor Y,
> >> and then sending A(Z) from processor Y back to processor X. But this is
> >> not possible, as processor Y does not know the value X and so does not
> >> know which processor to receive it from.
> >>
> >> I tried, but I haven't been able to come up with any code that
> >> implements this, so I am not posting any code.
> >>
> >> Any suggestions?
> >>
> >
> >
> > --
> > Jeff Squyres
> > jsquyres_at_[hidden]
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
> >