How does it fail?
On Sep 3, 2013, at 1:19 PM, "Teranishi, Keita" <knteran_at_[hidden]> wrote:
> Thanks for the help. I can run a job using openmpi, assigning a single
> process per node. However, I have been failing to run a job using
> multiple MPI ranks in a single node. In other words, "mpiexec
> --bind-to-core --npernode 16 --n 16 ./test" never works (aprun -n 16 works
> fine). Do you have any thoughts about it?
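> For clarity, the two invocations side by side (assuming 16 cores per node,
> with ./test being a simple MPI test program):
>   aprun -n 16 ./test                                    # works
>   mpiexec --bind-to-core --npernode 16 --n 16 ./test    # never works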
> Keita Teranishi
> R&D Principal Staff Member
> Scalable Modeling and Analysis Systems
> Sandia National Laboratories
> Livermore, CA 94551
> On 8/30/13 8:49 AM, "Hjelm, Nathan T" <hjelmn_at_[hidden]> wrote:
>> Replace install_path with wherever you want Open MPI installed.
>> ./configure --prefix=install_path
>> make install
>> To use Open MPI just set the PATH and LD_LIBRARY_PATH:
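>> For example, in a bash shell that would be something along the lines of
>> (using the same install_path as above):
>>   export PATH=install_path/bin:$PATH
>>   export LD_LIBRARY_PATH=install_path/lib:$LD_LIBRARY_PATH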
>> You can then use mpicc, mpicxx, mpif90, etc. to compile, and either mpirun
>> or aprun to run. If you are running at scale I would recommend against
>> using aprun for now. I also recommend you change your programming
>> environment to either PrgEnv-gnu or PrgEnv-intel. The PGI compiler can be
>> a pain. It is possible to build with the Cray compiler, but it takes
>> patching config.guess and changing some autoconf stuff.
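>> As a rough illustration (hello.c standing in for whatever MPI program you
>> are building, and 16 ranks chosen arbitrarily):
>>   mpicc -o hello hello.c
>>   mpirun -n 16 ./hello      # or: aprun -n 16 ./hello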
>> Please excuse the horrible Outlook-style quoting.
>> From: users [users-bounces_at_[hidden]] on behalf of Teranishi, Keita
>> Sent: Thursday, August 29, 2013 8:01 PM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] [EXTERNAL] Re: What version of PMI (Cray XE6)
>> is working for OpenMPI-1.6.5?
>> Thanks for the info. Is it still possible to build it myself? What is
>> the procedure, other than the configure script?
>> On 8/23/13 2:37 PM, "Nathan Hjelm" <hjelmn_at_[hidden]> wrote:
>>> On Fri, Aug 23, 2013 at 09:14:25PM +0000, Teranishi, Keita wrote:
>>>> I am trying to install OpenMPI 1.6.5 on Cray XE6 and am very curious
>>>> about the current support of PMI. In the previous discussions, there was a
>>>> note on the version of PMI (it works with 2.1.4, but fails with 3.0).
>>> Open MPI 1.6.5 does not have support for the XE-6. Use 1.7.2 instead.
>>>> Our machine has PMI 2.1.4 and PMI 4.0 (default). Which version do you
>>>> recommend?
>>> There was a regression in PMI 3.x.x, which still exists in 4.0.x, that
>>> causes a warning to be printed on every rank when using mpirun. We are
>>> working with Cray to resolve the issue. For now, use 2.1.4. See the
>>> platform files in contrib/platform/lanl/cray_xe6. The platform files you
>>> would want to use are debug-lustre or optimized-lustre.
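>>> As a sketch, a configure line using one of those platform files would look
>>> something like this (the install prefix is up to you):
>>>   ./configure --with-platform=contrib/platform/lanl/cray_xe6/debug-lustre \
>>>       --prefix=install_path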
>>> BTW, 1.7.2 is installed on Cielo and Cielito. Just run:
>>> module swap PrgEnv-pgi PrgEnv-gnu (PrgEnv-intel also works)
>>> module unload cray-mpich2 xt-libsci
>>> module load openmpi/1.7.2
>>> -Nathan Hjelm
>>> Open MPI Team, HPC-3, LANL