On Sep 23, 2011, at 6:52 AM, Uday Kumar Reddy B wrote:
> But that's not really the point - to re-install MPI from sources! One
> would like to choose between compilers depending on what's on the
> system, and also switch between them to experiment. And if I'm
> packaging software that makes use of mpicc for building, I wouldn't
> want to check what kind of mpi a user has and customize cmdline flags;
> so environment variables don't really help - they just add to the
> complexity. The only portable solution is for all MPIs to support the
> same set of options (in particular, the frequently-used ones). Is
> there any fundamental difficulty in adding -cc to openmpi's mpicc to
> start with? mpich, mvapich already support it; in addition, it is
> standard to have a (-h/-help flag) to list usage options; again, mpich
> and mvapich list these with -help/-h.
MVAPICH is a fork of MPICH, so their wrapper compilers behave the same way.
Unless there is an effort undertaken to standardize wrapper compiler flags, this is not going to happen. Indeed, as I mentioned in a prior email, some MPI implementations do not have wrapper compilers at all. This makes standardization difficult, if not impossible.
Open MPI's attitude towards wrapper compilers has always been to assume as little as possible -- we add a single command line flag (--showme, although it has a few different forms). We pass *everything* else through to the underlying compiler, because there is a *huge* array of compilers out there that take a multitude of different flags. We wouldn't want to unknowingly intercept one of them (or, even worse, use a flag that *today* conflicts with no compiler, but that some future compiler release adopts). Hence, we went with the minimalist approach.
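To make that concrete (these commands assume an Open MPI install is on your PATH): the --showme forms let you inspect what the wrapper would do, and the OMPI_CC environment variable lets you swap the underlying C compiler without rebuilding Open MPI:

```shell
# Show the full command line the wrapper would execute
mpicc --showme hello.c -o hello

# Show just the compile-time or link-time flags
mpicc --showme:compile
mpicc --showme:link

# Swap the underlying C compiler for one invocation.
# Note: this only changes which compiler is invoked; see the
# caveats below about whether the results are actually compatible.
OMPI_CC=clang mpicc hello.c -o hello
```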
The MPICH folks went in a different way that worked for them. Which is perfectly fine -- we have different attitudes on a bunch of different things in our implementations.
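For comparison, the MPICH-family wrappers take the approach the original poster asked about (a sketch; assumes an MPICH or MVAPICH mpicc on the PATH, and exact flag spellings can vary by version):

```shell
# MPICH / MVAPICH: select the underlying compiler per invocation
mpicc -cc=gcc hello.c -o hello

# The same effect via MPICH's environment variable
MPICH_CC=gcc mpicc hello.c -o hello

# List the wrapper's supported options
mpicc -help
```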
A key point that you're missing here is that compiling an MPI implementation with one compiler suite is not necessarily the same as compiling it with another. Changing compilers is not always as simple as swapping the compiler name in the wrapper's argv -- sometimes replacing icc with gcc (or vice versa) can actually lead to compile-, link-, or run-time problems.
Such cross-compiler compatibility is usually *supposed* to work for C codes, but definitely not for Fortran and C++ codes (although additional command line flags can sometimes make the two compilers' symbol mangling match). And even when it's supposed to work (for C codes), compilers have bugs just like any other software -- some prior frustrating incompatibilities have been well-publicized. Note that run-time incompatibilities like this are *exceedingly* difficult to debug, because the problems are in the compiler-generated code, not the application code.
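The Fortran symbol-mangling issue is easy to see with nm (a sketch; the exact decoration is compiler- and platform-specific -- gfortran is shown here):

```shell
# A trivial Fortran subroutine
cat > sub.f90 <<'EOF'
subroutine my_sub()
end subroutine my_sub
EOF

# gfortran typically appends a trailing underscore to the symbol name
gfortran -c sub.f90
nm sub.o | grep -i my_sub     # e.g. "T my_sub_"

# A different Fortran compiler may decorate or case-fold the name
# differently, so an MPI library built with one compiler can fail
# to link against application code built with another.
```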
Such problems are beyond the scope of an MPI implementation to fix -- we have no control over compiler incompatibilities.
If you want to support multiple different compilers, the Open MPI team's attitude is that it is *much* better to have multiple different Open MPI installations. Yes, this takes a little bit of disk space and is a bit annoying, but it avoids all kinds of wonky, real-world compiler/linker/run-time incompatibility problems that can (and do) occur. Software like "modules" is a good solution to this issue; "modules" is used at many HPC sites for exactly this reason.
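With environment modules, switching between per-compiler Open MPI builds is a one-line operation (the module names below are hypothetical; each site chooses its own naming scheme):

```shell
# Hypothetical site layout: one Open MPI install per compiler suite
module avail openmpi
#   openmpi/1.4.3-gcc  openmpi/1.4.3-intel

module load openmpi/1.4.3-gcc      # puts the gcc-built mpicc on PATH
mpicc --showme                     # verify which install you got

module switch openmpi/1.4.3-gcc openmpi/1.4.3-intel   # swap suites
```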
Plus, you need multiple installations for different Fortran and C++ compilers anyway. That's an unfortunate fact of life.