I don't know whether it's relevant to this problem or not, but a couple
of weeks ago we also found that we had to apply the following patch to
compile ROMIO with OpenMPI over PVFS2. There is an additional header,
pvfs2-compat.h, that is included in the ROMIO version shipped with
MPICH but is somehow missing from the OpenMPI version:
--- a/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h Thu Sep 03 11:55:51 2009 -0500
+++ b/ompi/mca/io/romio/romio/adio/ad_pvfs2/ad_pvfs2.h Mon Sep 21 10:16:27 2009 -0500
@@ -11,6 +11,10 @@
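(The body of this hunk was cut off in my mail; from the description
above it presumably just pulls in the missing header, along these
lines -- the guard and exact placement are my guess:

 #include "pvfs2.h"
+
+#ifdef PVFS2_VERSION_MAJOR
+#include "pvfs2-compat.h"
+#endif

pvfs2-compat.h is, as far as I understand it, a shim that papers over
API differences between PVFS2 releases, which would be why the MPICH
ROMIO includes it.)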
Rob Latham wrote:
> On Tue, Jan 12, 2010 at 02:15:54PM -0800, Evan Smyth wrote:
>> OpenMPI 1.4 (had same issue with 1.3.3) is configured with
>> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
>> --enable-mpi-threads --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs"
>> PVFS 2.8.1 is configured to install in the default location (/usr/local) with
>> ./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs
> In addition to Jeff's request for the build logs, do you have
> 'pvfs2-config' in your path?
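(For what it's worth, a quick way to check -- the pvfs2-config flags
here are from memory, so treat them as approximate:

  which pvfs2-config
  pvfs2-config --cflags --libs

I believe ROMIO's configure uses pvfs2-config to locate the PVFS2
headers and libraries, so if it is not in $PATH the PVFS2 driver can
quietly drop out of the build.)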
>> I build and install these (in this order) and set up my PVFS2 space
>> using the instructions at pvfs.org. I am able to use this space with
>> the /usr/local/bin/pvfs2-ls types of commands. I am simply running a
>> 2-server config (2 data servers, with the same 2 hosts also acting as
>> metadata servers). As I say, manually this all seems fine (even when
>> I'm not root). It may be relevant that I am *not* using the kernel
>> interface for PVFS2, as I am just trying to get a better
>> understanding of how this works.
> That's a good piece of information. I run in that configuration
> often, so we should be able to make this work.
>> It is perhaps relevant that I have not had to explicitly tell
>> OpenMPI where I installed PVFS. I have told PVFS where I installed
>> OpenMPI, though. This does seem slightly odd, but there does not
>> appear to be a way of telling OpenMPI this information. Perhaps it
>> is not needed.
> PVFS needs an MPI library only to build MPI-based testcases. The
> servers, client libraries, and utilities do not use MPI.
>> In any event, I then build my test program against this OpenMPI, and
>> in that program I have the following call sequence (where i is 0 and
>> mntPoint is the path to my pvfs2 mount point -- I also tried
>> prefixing this with "pvfs2:", as I read somewhere that that was
>> optional).
> In this case, since you do not have the PVFS file system mounted, the
> 'pvfs2:' prefix is mandatory. Otherwise, the MPI-IO library will try
> to look for a directory that does not exist.
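(For reference, a minimal open against an unmounted PVFS2 volume looks
something like the sketch below; the path is made up, so substitute
your own volume:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char msg[MPI_MAX_ERROR_STRING];
    int rc, len, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The "pvfs2:" prefix tells ROMIO to use its PVFS2 driver directly
     * instead of deducing the file system from a mounted path.  The
     * path itself is hypothetical -- point it at your own volume. */
    rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/pvfs2-space/testfile",
                       MPI_MODE_CREATE | MPI_MODE_RDWR,
                       MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        /* MPI-IO file routines default to MPI_ERRORS_RETURN, so the
         * failure comes back as a return code rather than aborting. */
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "rank %d: MPI_File_open failed: %s\n", rank, msg);
    } else {
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}

Printing the error string this way also gives you more to go on than
the bare MPI_ERR_ACCESS class.)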
>> This call sequence will only execute on one of my ranks (the way I'm
>> running it).
>> No matter what I try, the MPI_File_open call fails with an
>> MPI_ERR_ACCESS error code. This suggests a permission problem, but I
>> am able to manually cp and rm files in the pvfs2 space without any
>> problem, so I am not at all clear on what the permission problem is.
>> My access flags look fine to me (the MPI_MODE_UNIQUE_OPEN flag makes
>> no difference in this case, as I'm only opening a single file anyway).
>> If I write this file to shared NFS storage, all is "fine"
>> (obviously, I do not consider that a permanent solution, though).
>> Does anyone have any idea why this is not working? Alternatively (or
>> in addition), does anyone have step-by-step instructions for building
>> and setting up PVFS2 with OpenMPI, as well as an example program?
>> This is the first time I've attempted this, so I may well be doing
>> something wrong.
> It sounds like you're on the right track. I should update the PVFS
> quickstart for the OpenMPI specifics. In addition to pvfs2-ping and
> pvfs2-ls, make sure you can pvfs2-cp files to and from your volume.
> If those 3 utilities work, then your OpenMPI installation should work
> as well.
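(For reference, the sanity check would look roughly like this -- the
paths are made up, and the -m flag is from memory, so adjust to your
own mount point and volume:

  pvfs2-ping -m /pvfs2-space
  pvfs2-ls /pvfs2-space
  pvfs2-cp /etc/hosts /pvfs2-space/hosts.copy
  pvfs2-cp /pvfs2-space/hosts.copy /tmp/hosts.back

These exercise the same client libraries that ROMIO uses, without
going through the kernel interface.)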
Parallel Software Technologies Lab http://pstl.cs.uh.edu
Department of Computer Science University of Houston
Philip G. Hoffman Hall, Room 524 Houston, TX-77204, USA
Tel: +1 (713) 743-3857 Fax: +1 (713) 743-3335