
Subject: [OMPI users] anybody tried OMPI with gpudirect?
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2011-02-28 11:16:04


Hello,

I am trying to play with NVIDIA's GPUDirect. The test program shipped with
the GPUDirect tarball just does a basic MPI ping-pong between two
processes that allocate their buffers with cudaMallocHost instead of
malloc. It seems to work with Intel MPI, but Open MPI 1.5 hangs in the
first MPI_Send. Replacing the CUDA buffer with a normally malloc'ed
buffer makes the program work again. I assume something goes wrong
when OMPI tries to register/pin the CUDA buffer in the IB stack (that's
what GPUDirect seems to be about), but I don't see why Intel MPI would
succeed there.
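
For reference, here is a minimal sketch of the kind of ping-pong the test
does (this is not the exact program from the GPUDirect tarball; the buffer
size, tag, and error handling are placeholders I picked for illustration):

/* Minimal CUDA-pinned-buffer ping-pong sketch.
 * Build with something like: mpicc pingpong.c -o pingpong -lcudart
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB, arbitrary */

int main(int argc, char **argv)
{
    int rank;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Allocate page-locked host memory through the CUDA runtime instead
     * of malloc(); this is the allocation that seems to trigger the hang. */
    if (cudaMallocHost((void **)&buf, BUF_SIZE) != cudaSuccess) {
        fprintf(stderr, "cudaMallocHost failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0) {
        /* With OMPI 1.5 the program hangs in this first MPI_Send. */
        MPI_Send(buf, BUF_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, BUF_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, BUF_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, BUF_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    cudaFreeHost(buf);
    MPI_Finalize();
    return 0;
}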

Has anybody ever looked at this?

FWIW, we're using OMPI 1.5, OFED 1.5.2, Intel MPI 4.0.0.28, and SLES11 with
and without the GPUDirect patch.

Thanks
Brice Goglin