
Open MPI User's Mailing List Archives


From: Greg Lindahl (lindahl_at_[hidden])
Date: 2005-03-10 20:06:27

On Thu, Mar 10, 2005 at 11:49:52AM -0500, Larry Stewart wrote:

> The presentation ignores the issue of instruction set. Even within
> the x86 family we have IA-32, EMT 64, and AMD-64.


Thanks for sending some interesting comments.

The presentation wasn't intended to be all things to all people. One
approach would be to start with only x86 and AMD64/EM64T; that would
cover most of the market. I don't think an ABI has to include all
processor families to succeed.

> Beyond that, we have the situation where toolchains have
> incompatible formats and calling standards, even for the same
> architecture. Shall we standardize on GCC? On IFC? (I note EKOPATH
> is GNU compatible.)

On Linux for x86 and AMD64/EM64T, the C and C++ ABIs of gcc, icc, and
pathcc are directly compatible.

The Fortrans are compatible enough that a single MPI library can deal
with all of them. The calling conventions happen to work out because
MPI doesn't have any calls that hit the "f2c ABI" issue. The
underscore thing can be handled with multiple symbols for each entry
point. The command-line startup thing can be worked around by a clever
hack (tm) that I will be happy to share.

> Beyond that, an ABI stifles performance. The compiler (in principle
> at least) could do interprocedural optimizations between the
> application and MPI libraries. Or inlining.

I'm not proposing that an ABI be used 100% of the time. And the only
commercial compiler vendor publicly talking about doing such a thing in
production is PathScale, so I'd be the first to complain if it
actually hurt us -- it doesn't. We expect that ISVs will choose the
official ABI by default, or the better-performing but non-official ABI
if they prefer. No one is stifled.

Another important group that isn't stifled is vendors of new
interconnects. Today a vendor of a new interconnect can sell easily to
anyone who recompiles everything, but selling to anyone who doesn't
recompile is hard. Intel, Scali, Scyld, Verari, and HP are all out
trying to convince the "we distribute binaries" community that their
MPI is the right one to standardize on. A new interconnect vendor will
lose in a world where a closed-source MPI is the standard for that
community.

> Even just shipping code as binary forces the vendor into poorly
> optimized code, in order to assure functionality on different models
> of machines.

How much have you talked to ISVs? Most *like* being able to ship a
single binary for their application, because they'd rather lose
performance on a particular processor sub-model than do more testing.
We are encouraged by AMD to improve our EM64T performance so that ISVs
can use a single compiler and generate a single binary that's good on
both AMD64 and EM64T. AMD has even helped with performance engineering!

But there's no need to argue this point; it's not really relevant to
the MPI ABI issue, and the people who prefer distributing binaries
will continue to do so whether there's an MPI ABI or not.

> Use the source.

If you'd like to use the source, then please continue to do so. The
people who find the idea of an ABI compelling are the people listed in
the presentation: ISVs, and open source projects which want to
distribute flexible binary RPMs because their typical user doesn't
want to recompile.

On the flip side, groups such as the OpenMPI project would gain by
supporting an ABI because they'd be able to run applications "built
for MPICH" without having to recompile. The OpenMPI folks may not
find this compelling; your typical programmer at the national labs
doesn't mind recompiling. But if you wanted to study whether or not
OpenMPI improved the performance of some ISV code, I assure you that
an ABI would make that a lot easier.

-- greg