Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] IO issue with OpenMPI 1.4.1 and earlier versions
From: Steve Jones (stevejones_at_[hidden])
Date: 2011-09-14 12:53:11


----- Original Message -----
> On Sep 12, 2011, at 10:44 PM, Steve Jones wrote:
>
> > We've run into an IO issue with 1.4.1 and earlier versions. We're
> > able to reproduce the issue in around 120 lines of code. To help, I'd
> > like to find out whether there's something we're simply doing
> > incorrectly with the build or whether it's in fact a known bug. I've
> > included the
> > following in order:
> >
> > 1. Configure options used on all versions tested
> > 2. Successful run on 1.4.3
> > 3. Failed run on 1.3.1
> > 4. Failed run on 1.4.1
>
> It looks like https://svn.open-mpi.org/trac/ompi/changeset/22888 fixed
> a problem in OMPI's ROMIO; that fix was included in 1.4.2. This could
> well be the issue.

Hi Jeff,

It looks like this was the issue. Thanks for pointing me towards it, and for the information on ABI compatibility. I must not have been following closely, as I was under the impression we needed to rebuild for each new version of Open MPI that was introduced.

Talk soon.

Steve

> Note, however, that MPI-IO-written files are not guaranteed to be
> readable outside of MPI-IO. What happens if you read the file back via
> MPI-IO?
>
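For reference, a minimal sketch of that cross-check -- writing a file with
MPI-IO and then reading it back through MPI-IO rather than POSIX I/O. The
file name, per-rank element count, and use of MPI_INT below are purely
illustrative, not taken from the 120-line reproducer:

/* Minimal sketch: each rank writes one block of ints at its own offset
 * with MPI-IO, then reads the same block back via MPI-IO and compares.
 * "testfile.dat" and N are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* ints per rank */

int main(int argc, char **argv)
{
    int rank, i, errs = 0;
    int *wbuf, *rbuf;
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    wbuf = malloc(N * sizeof(int));
    rbuf = malloc(N * sizeof(int));
    for (i = 0; i < N; i++)
        wbuf[i] = rank * N + i;

    offset = (MPI_Offset)rank * N * sizeof(int);

    /* Collective write: each rank writes its block at its own offset. */
    MPI_File_open(MPI_COMM_WORLD, "testfile.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, wbuf, N, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* Read the same block back through MPI-IO and compare. */
    MPI_File_open(MPI_COMM_WORLD, "testfile.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_read_at_all(fh, offset, rbuf, N, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    for (i = 0; i < N; i++)
        if (rbuf[i] != wbuf[i])
            errs++;
    printf("rank %d: %d mismatches\n", rank, errs);

    free(wbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}

If the data survives an MPI-IO round trip but looks wrong when read back
with plain read()/fread(), that points at the external-readability issue
Jeff raises rather than at a problem in the write path itself.
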
> > An additional thing to note is we can load the 1.4.2 or 1.4.3
> > environment and successfully run the 1.4.1 or 1.3.1 executable.
>
> Open MPI's ABI guarantees started at 1.3.2, meaning that any MPI
> application executable compiled with 1.3.2 or later should be able to
> run with an OMPI environment 1.3.2 all the way through the end of the
> 1.4.x series.
>
> Hence, it is consistent that your 1.4.1 executable works properly when
> run in a 1.4.3 environment if the ROMIO fix was deployed in 1.4.2.
>
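A quick way to see this in practice: the small program below prints the
Open MPI version an executable was compiled against (assuming the
OMPI_*_VERSION macros that Open MPI's <mpi.h> provides), while the
library it actually loads at run time is whatever LD_LIBRARY_PATH points
to -- ldd on the executable will confirm which libmpi that is.

/* Minimal sketch: report the Open MPI version this binary was compiled
 * against and the MPI standard version of the runtime.  The OMPI_*
 * macros are guarded with #ifdef in case a different MPI's mpi.h is
 * used. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int maj, min;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&maj, &min);   /* MPI standard version, e.g. 2.1 */

#ifdef OMPI_MAJOR_VERSION
    printf("compiled against Open MPI %d.%d.%d, MPI standard %d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION,
           maj, min);
#else
    printf("MPI standard %d.%d (no Open MPI version macros found)\n",
           maj, min);
#endif

    MPI_Finalize();
    return 0;
}
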
> NOTE: Your 1.3.1 executable *may* work with later OMPI environments,
> but it is not guaranteed (and I absolutely would not rely on it).
> Here's the text in the README about our ABI policy:
>
> -----
> Application Binary Interface (ABI) Compatibility
> ------------------------------------------------
>
> Open MPI provided forward application binary interface (ABI)
> compatibility for MPI applications starting with v1.3.2. Prior to
> that version, no ABI guarantees were provided.
>
> NOTE: Prior to v1.3.2, subtle and strange failures are almost
> guaranteed to occur if applications were compiled and linked
> against shared libraries from one version of Open MPI and then
> run with another. The Open MPI team strongly discourages making
> any ABI assumptions before v1.3.2.
>
> Starting with v1.3.2, Open MPI provides forward ABI compatibility --
> with respect to the MPI API only -- in all versions of a given feature
> release series and its corresponding super stable series. For
> example, on a single platform, an MPI application linked against Open
> MPI v1.3.2 shared libraries can be updated to point to the shared
> libraries in any successive v1.3.x or v1.4 release and still work
> properly (e.g., via the LD_LIBRARY_PATH environment variable or other
> operating system mechanism).
>
> Note that in v1.4.4, a fix was applied to the "large" size of the "use
> mpi" F90 MPI bindings module: two of MPI_SCATTERV's parameters had the
> wrong type and were corrected. Note that this fix *only* applies if
> Open MPI was configured with a Fortran 90 compiler and the
> --with-mpi-f90-size=large configure option.
>
> However, in order to preserve ABI with all releases since v1.3.2, the
> old/incorrect MPI_SCATTERV interface was preserved and a new/corrected
> interface was added (note that Fortran 90 has function overloading,
> similar to C++; hence, both the old and new interface can be accessed
> via "call MPI_Scatterv(...)").
>
> Applications that use the old/incorrect MPI_SCATTERV binding will
> continue to compile/link just like they did with releases since
> v1.3.2. However, application developers are ***STRONGLY*** encouraged
> to fix their applications to use the correct bindings for the
> following reasons:
>
> - The parameter type mismatch may cause application crashes or
> silent data corruption.
> - An annoying message (which cannot be disabled) is sent to stdout
> warning the user that they are using an incorrect interface.
> - The old/incorrect interface will be removed in Open MPI v1.7
> (i.e., applications that use the old/incorrect binding will not
> compile with Open MPI v1.7).
>
> Open MPI reserves the right to break ABI compatibility at new feature
> release series. For example, the same MPI application from above
> (linked against Open MPI v1.3.2 shared libraries) will *not* work with
> Open MPI v1.5 shared libraries.
> -----
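
Regarding the MPI_SCATTERV note in the README text above: for reference,
the C binding makes the parameter types unambiguous -- the per-rank send
counts and displacements are integer arrays, and the receive count is a
scalar. A minimal sketch, with the counts and displacements chosen purely
for illustration:

/* Minimal sketch of MPI_Scatterv in C: rank 0 scatters (rank+1) ints to
 * each rank.  Note the parameter types: sendcounts and displs are int
 * arrays, recvcount is a scalar int.  Sizes are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    int *sendbuf = NULL, *sendcounts = NULL, *displs = NULL;
    int recvcount, *recvbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    recvcount = rank + 1;               /* rank r receives r+1 ints */
    recvbuf = malloc(recvcount * sizeof(int));

    if (rank == 0) {
        int total = size * (size + 1) / 2;
        sendbuf    = malloc(total * sizeof(int));
        sendcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (i = 0; i < size; i++) {
            sendcounts[i] = i + 1;
            displs[i] = (i == 0) ? 0 : displs[i - 1] + sendcounts[i - 1];
        }
        for (i = 0; i < total; i++)
            sendbuf[i] = i;
    }

    MPI_Scatterv(sendbuf, sendcounts, displs, MPI_INT,
                 recvbuf, recvcount, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d received %d ints, first = %d\n",
           rank, recvcount, recvbuf[0]);

    free(recvbuf);
    if (rank == 0) { free(sendbuf); free(sendcounts); free(displs); }
    MPI_Finalize();
    return 0;
}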
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users