Open MPI User's Mailing List Archives

From: Brian Budge (brian.budge_at_[hidden])
Date: 2006-11-02 14:22:18


Thanks for the pointer, it was a very interesting read.

It seems that by default Open MPI uses the nifty pipelining trick, pinning pages while the transfer is happening. The pinning can also be made (somewhat) permanent, and that state is cached so that the next use of the same buffer requires no re-registration. I guess it is possible to use pre-pinned memory, but do I need to do anything special to do so? I will already have some buffers pinned to allow DMAs to devices across PCI-Express, so it makes sense to use one pinned buffer so that I can avoid memcpys.

Are there any HOWTO tutorials or anything? I've searched around, but it's
possible I just used the wrong search terms.

Thanks,
  Brian

On 11/2/06, Jeff Squyres <jsquyres_at_[hidden]> wrote:
>
> This paper explains it pretty well:
>
> http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/
>
>
>
> On Nov 2, 2006, at 1:37 PM, Brian Budge wrote:
>
> > Hi all -
> >
> > I'm wondering how DMA is handled in OpenMPI when using the
> > infiniband protocol. In particular, will I get a speed gain if my
> > read/write buffers are already pinned via mlock?
> >
> > Thanks,
> > Brian
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>