
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Problems Using PVFS2 with OpenMPI
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-01-13 08:24:14


Evan --

As a first step, can you send your build logs so that we can verify that OMPI+ROMIO built with proper PVFS2 support? See:

    http://www.open-mpi.org/community/help/
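
If you still have the build tree handy, grepping ROMIO's configure output is a
quick way to check yourself (the path below assumes the standard Open MPI 1.4
source layout; adjust if your tree differs):

    grep -i pvfs2 ompi/mca/io/romio/romio/config.log

If PVFS2 support actually made it in, the configure tests should show the
PVFS2 headers and libraries being found rather than the checks failing.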

On Jan 12, 2010, at 5:15 PM, Evan Smyth wrote:

> I am unable to use PVFS2 with OpenMPI in a simple test program. My
> configuration is given below. I'm running on RHEL5 with GigE (probably not
> important).
>
> OpenMPI 1.4 (had same issue with 1.3.3) is configured with
> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
> --enable-mpi-threads --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs"
>
> PVFS 2.8.1 is configured to install in the default location (/usr/local) with
> ./configure --with-mpi=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs
>
> I build and install these (in this order) and set up my PVFS2 space using the
> instructions at pvfs.org. I am able to use this space with the
> /usr/local/bin/pvfs2-ls types of commands. I am simply running a 2-server
> config (2 data servers, with the same 2 hosts also acting as metadata
> servers). As I say, manually this all seems fine (even when I'm not root). It
> may be relevant that I am *not* using the kernel interface for PVFS2, as I am
> just trying to get a better understanding of how this works.
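>
> (The manual checks mentioned above are along these lines, with a hypothetical
> mount point of /mnt/pvfs2 listed in my pvfs2tab file:
>
> /usr/local/bin/pvfs2-ping -m /mnt/pvfs2
> /usr/local/bin/pvfs2-ls /mnt/pvfs2
> /usr/local/bin/pvfs2-cp /tmp/test.dat /mnt/pvfs2/test.dat
>
> All of these behave as expected.)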
>
> It is perhaps relevant that I have not had to explicitly tell OpenMPI where I
> installed PVFS. I have told PVFS where I installed OpenMPI, though. This does
> seem slightly odd but there does not appear to be a way of telling OpenMPI this
> information. Perhaps it is not needed.
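>
> (If ROMIO's configure does accept an install prefix -- I believe it has a
> --with-pvfs2 option, though I have not verified that on this version -- I
> would have guessed something like:
>
> ./configure --prefix=/work/rd/evan/archives/openmpi/openmpi/1.4/enable_pvfs \
>   --enable-mpi-threads \
>   --with-io-romio-flags="--with-filesystems=pvfs2+ufs+nfs --with-pvfs2=/usr/local"
>
> but I have not tried that.)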
>
> In any event, I then build my test program against this OpenMPI, and in that
> program I have the following call sequence (i is 0, and mntPoint is the path
> to my pvfs2 mount point -- I also tried prefixing "pvfs2:" to this path, as I
> read somewhere that the prefix is optional).
>
> sprintf(aname, "%s/%d.fdm", mntPoint, i);
> for (int j = 0; j < numFloats; j++)
>     buf[j] = (float)i;
>
> int retval = MPI_SUCCESS;
> if (MPI_SUCCESS == (retval = MPI_File_open(MPI_COMM_SELF, aname,
>                        MPI_MODE_RDWR | MPI_MODE_CREATE | MPI_MODE_UNIQUE_OPEN,
>                        MPI_INFO_NULL, &fh)))
> {
>     MPI_File_write(fh, (void *)buf, numFloats, MPI_FLOAT, MPI_STATUS_IGNORE);
>     MPI_File_close(&fh);
> } else {
>     int errBufferLen;
>     char errBuffer[MPI_MAX_ERROR_STRING];
>     MPI_Error_string(retval, errBuffer, &errBufferLen);
>     fprintf(stdout, "%d: open error on %s with code %s\n", rank, aname, errBuffer);
> }
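>
> For reference, the declarations around that fragment look roughly like this
> (a sketch, not the exact code; the size of numFloats is arbitrary here):
>
> MPI_File fh;
> int rank;
> char aname[256];
> const int numFloats = 1024;
> float *buf = (float *)malloc(numFloats * sizeof(float));
> MPI_Comm_rank(MPI_COMM_WORLD, &rank);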
>
> This code executes on only one of my ranks (the way I'm running it). No matter
> what I try, the MPI_File_open call fails with an MPI_ERR_ACCESS error code.
> This suggests a permission problem, but I am able to manually cp and rm in the
> pvfs2 space without any problem, so I am not at all clear on what the
> permission problem is. My access flags look fine to me (the
> MPI_MODE_UNIQUE_OPEN flag makes no difference in this case, as I'm only
> opening a single file anyway). If I write this file to shared NFS storage, all
> is "fine" (obviously, I do not consider that a permanent solution, though).
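>
> One thing I have not yet tried is mapping the return code to its error class,
> to double-check that the message is not misleading. Something like:
>
> int eclass;
> MPI_Error_class(retval, &eclass); /* expect MPI_ERR_ACCESS if the message is accurate */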
>
> Does anyone have any idea why this is not working? Alternatively (or in
> addition), does anyone have step-by-step instructions for building and setting
> up PVFS2 with OpenMPI, along with an example program? This is the first time
> I've attempted this, so I may well be doing something wrong.
>
> Thanks in advance,
> Evan
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>

-- 
Jeff Squyres
jsquyres_at_[hidden]