Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] [EXTERNAL] Re: What version of PMI (Cray XE6) is working for OpenMPI-1.6.5?
From: Teranishi, Keita (knteran_at_[hidden])
Date: 2013-09-03 16:19:07


Nathan,

Thanks for the help. I can run a job using Open MPI, assigning a single
process per node. However, I have not been able to run a job with
multiple MPI ranks on a single node. In other words, "mpiexec
--bind-to-core --npernode 16 --n 16 ./test" never works (aprun -n 16 works
fine). Do you have any thoughts about it?

Thanks,
---------------------------------------------
Keita Teranishi
R&D Principal Staff Member
Scalable Modeling and Analysis Systems
Sandia National Laboratories
Livermore, CA 94551

On 8/30/13 8:49 AM, "Hjelm, Nathan T" <hjelmn_at_[hidden]> wrote:

>Replace install_path with the path where you want Open MPI installed:
>
>./configure --prefix=install_path \
>    --with-platform=contrib/platform/lanl/cray_xe6/optimized-lustre
>make
>make install
>
>To use Open MPI, just set PATH and LD_LIBRARY_PATH:
>
>export PATH=install_path/bin:$PATH
>export LD_LIBRARY_PATH=install_path/lib:$LD_LIBRARY_PATH
>
>You can then use mpicc, mpicxx, mpif90, etc. to compile, and either mpirun
>or aprun to run. If you are running at scale, I would recommend against
>using aprun for now. I also recommend changing your programming
>environment to either PrgEnv-gnu or PrgEnv-intel; the PGI compiler can be
>a pain. It is possible to build with the Cray compiler, but it requires
>patching config.guess and changing some autoconf settings.
>
>-Nathan
>
>Please excuse the horrible Outlook-style quoting.
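[Editor's note: the environment setup described above can be sketched as a small self-contained snippet. The install path here is hypothetical; `export` is used so that child processes -- the compiler wrappers, mpirun, and the application itself -- inherit the variables rather than seeing them as shell-local.]

```shell
# Sketch of the setup Nathan describes; install path is hypothetical.
install_path="$HOME/opt/openmpi-1.7.2"

# Prepend the Open MPI bin/ and lib/ directories so its tools and
# shared libraries are found before any system-provided MPI.
export PATH="$install_path/bin:$PATH"
export LD_LIBRARY_PATH="$install_path/lib:${LD_LIBRARY_PATH:-}"
```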
>________________________________________
>From: users [users-bounces_at_[hidden]] on behalf of Teranishi, Keita
>[knteran_at_[hidden]]
>Sent: Thursday, August 29, 2013 8:01 PM
>To: Open MPI Users
>Subject: Re: [OMPI users] [EXTERNAL] Re: What version of PMI (Cray XE6)
>is working for OpenMPI-1.6.5?
>
>Thanks for the info. Is it still possible to build it myself? What is
>the procedure, other than the configure script?
>
>On 8/23/13 2:37 PM, "Nathan Hjelm" <hjelmn_at_[hidden]> wrote:
>
>>On Fri, Aug 23, 2013 at 09:14:25PM +0000, Teranishi, Keita wrote:
>>> Hi,
>>> I am trying to install OpenMPI 1.6.5 on Cray XE6 and am curious
>>> about the current support of PMI. In previous discussions, there was
>>> a comment on the version of PMI (it works with 2.1.4, but fails with
>>> 3.0). Our
>>
>>Open MPI 1.6.5 does not have support for the XE-6. Use 1.7.2 instead.
>>
>>> machine has PMI2.1.4 and PMI4.0 (default). Which version do you
>>
>>There was a regression in PMI 3.x.x that still exists in 4.0.x that
>>causes a warning to be printed on every rank when using mpirun. We are
>>working with Cray to resolve the issue. For now use 2.1.4. See the
>>platform files in contrib/platform/lanl/cray_xe6. The platform files you
>>would want to use are debug-lustre or optimized-lustre.
>>
>>BTW, 1.7.2 is installed on Cielo and Cielito. Just run:
>>
>>module swap PrgEnv-pgi PrgEnv-gnu    # PrgEnv-intel also works
>>module unload cray-mpich2 xt-libsci
>>module load openmpi/1.7.2
>>
>>
>>-Nathan Hjelm
>>Open MPI Team, HPC-3, LANL
>>_______________________________________________
>>users mailing list
>>users_at_[hidden]
>>http://www.open-mpi.org/mailman/listinfo.cgi/users
>