Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Compiling OpenMPI 1.7.x with core affinity
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2013-10-21 14:49:47

On Oct 21, 2013, at 12:25 PM, Patrick Begou <Patrick.Begou_at_[hidden]> wrote:

> kareline (front-end) is an R720XD and the nodes are C6100 sleds from Dell. Everything runs Rocks Cluster (based on RHEL6).

Are these AMD- or Intel-based systems? (I don't follow the model/series of non-Cisco servers, sorry...)

> I think the install of hwloc and numactl was required for OpenMPI 1.7.x. It was installed on the front-end (without the devel packages that OpenMPI seems to require at compile time) but not on the nodes.

FWIW: Open MPI 1.7.x includes its own embedded copy of hwloc; it shouldn't need another standalone hwloc installation.
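
For reference, a minimal configure sketch that uses the embedded copy (the install prefix and make parallelism are illustrative assumptions, not details from this thread):

    ./configure --prefix=/opt/openmpi-1.7.3 --with-hwloc=internal
    make -j 8 all install

Passing --with-hwloc=internal selects the hwloc bundled in the Open MPI tarball, so no standalone hwloc or hwloc-devel package should be needed on the build host.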

> Until now I was using cpusets and fake NUMA in the kernel to control CPU and memory use by the users (if someone requests 2 cores but uses the whole node's memory, it can break other people's jobs).
> Now OpenMPI 1.7.3 compiles, and --bind-to-core or --bind-to-socket seem to work fine (I still have to check in depth tomorrow).
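
A quick way to check the bindings in depth is mpirun's --report-bindings option; a sketch follows (the process count and executable name are placeholders, and in the 1.7 series the hyphenated --bind-to-core form is a deprecated synonym for the newer "--bind-to core" syntax):

    # each rank prints its binding mask to stderr at startup
    mpirun -np 4 --report-bindings --bind-to core ./my_mpi_app
    # or bind each rank to a whole socket instead
    mpirun -np 4 --report-bindings --bind-to socket ./my_mpi_app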


> I needed to compile OpenMPI
> - to use my Intel InfiniBand architecture
> - because I have started to modify OpenMPI to interface it with my job scheduler. (My small modifications work, but I think they do not fit the development model of OpenMPI, since I put all the code (20 lines) in orte/tools/orterun/orterun.c. I have to understand many concepts of OpenMPI development to contribute safely to this software (with a --use-oar option, maybe), and it should be discussed later on the developers' forum.)
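
On the Intel InfiniBand point above, a hedged configure sketch, assuming Intel (formerly QLogic) TrueScale adapters reached through the PSM library; the paths are placeholders, and verbs-based HCAs would use --with-verbs instead:

    ./configure --prefix=/opt/openmpi-1.7.3 \
        --with-hwloc=internal \
        --with-psm=/usr    # PSM support for Intel TrueScale HCAs; use --with-verbs for OFED verbs hardware
    make -j 8 all install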

Ok. If you want to discuss that in detail, please ask over on the devel list.

Jeff Squyres