
Subject: Re: [OMPI devel] New Romio for OpenMPI available in bitbucket
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-09-22 07:51:25


On Sep 17, 2010, at 6:36 AM, Pascal Deveze wrote:

> As the one in charge of ticket 1888 (see https://svn.open-mpi.org/trac/ompi/ticket/1888),
> I have put the resulting code on Bitbucket at:
> http://bitbucket.org/devezep/new-romio-for-openmpi/

Sweet!

> The work in this repo consisted of refreshing ROMIO to a newer
> version: the one from the latest MPICH2 release (mpich2-1.3b1).

Great! I saw there was another MPICH2 release, and I saw a ROMIO patch or three go by on the MPICH list recently. Do you expect there to be major differences between what you have and those changes?

I don't have any parallel filesystems to test with, but if someone else in the community could confirm/verify at least one or two of the parallel filesystems supported in ROMIO, I think we should bring this stuff into the trunk soon.

> Testing:
> 1. Runs fine on various filesystems, except for one minor error (see the explanation below).
> 2. Runs fine with Lustre, but:
>    . had to add a small patch in romio/adio/ad_lustre_open.c

Did this patch get pushed upstream?

> ======== The minor error ===================
> The test error.c fails because Open MPI does not correctly handle
> ROMIO's "two-level" error functions:
>
>     error_code = MPIO_Err_create_code(MPI_SUCCESS, MPIR_ERR_RECOVERABLE,
>                                       myname, __LINE__, MPI_ERR_ARG,
>                                       "**iobaddisp", 0);
>
> Open MPI limits its view to MPI_ERR_ARG, but the real error is "**iobaddisp".

Do you mean that we should be returning the error string for "**iobaddisp" instead of the generic string for MPI_ERR_ARG?
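
If I'm reading the two-level scheme right, here's a minimal sketch (plain MPI calls; report() is just a hypothetical helper for illustration) of what each level should look like from the application side:

    #include <stdio.h>
    #include <mpi.h>

    /* Hypothetical helper: print both levels of a ROMIO error code. */
    static void report(int error_code)
    {
        int error_class, length;
        char message[MPI_MAX_ERROR_STRING];

        /* Level 1: the generic MPI error class (e.g., MPI_ERR_ARG). */
        MPI_Error_class(error_code, &error_class);

        /* Level 2: the instance-specific text that MPIO_Err_create_code
         * attached (e.g., the "**iobaddisp" message).  If we only track
         * the class, this degenerates to the generic class description,
         * which sounds like the behavior you're describing. */
        MPI_Error_string(error_code, message, &length);

        printf("class = %d, message = %s\n", error_class, message);
    }

i.e., the class should still come back as MPI_ERR_ARG, but MPI_Error_string() on the same code should produce the "**iobaddisp" text rather than the generic MPI_ERR_ARG description.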

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/