
Open MPI User's Mailing List Archives


Subject: [OMPI users] Some Questions on Building OMPI on Linux Em64t
From: Michael E. Thomadakis (miket7777_at_[hidden])
Date: 2010-05-19 14:19:33


Hello,

I would like to build OMPI V1.4.2 and make it available to our users at the
Supercomputing Center at TAMU. Our system consists of 324 nodes, each with two
sockets of 4-core Nehalem CPUs @ 2.8 GHz and 24 GiB DRAM per node, connected
by a 4x QDR Voltaire InfiniBand fabric and running CentOS/RHEL 5.4.

I have been trying to find the following information:

1) High-resolution timers: how do I select the high-resolution Linux timers via the
        --with-timer=TYPE
   option of ./configure?
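
For what it's worth, I guessed at the component name and tried something along
the lines of

        ./configure --with-timer=linux ...

but I am not sure "linux" is a valid TYPE value here, hence the question.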

2) I have installed BLCR V0.8.2, but when I build OMPI and point configure at the
full BLCR installation, it complains that it cannot find it. Note that I built BLCR
with GCC but am building OMPI with the Intel compilers (V11.1).
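
For reference, my configure line was roughly the following (paths abbreviated,
and I am assuming --with-blcr is the right flag to point at the BLCR install):

        ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
            --prefix=/opt/openmpi-1.4.2 \
            --with-ft=cr --enable-ft-thread \
            --with-blcr=/opt/blcr-0.8.2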

3) Does OMPI by default use shared memory (SHM) for intra-node message passing
and fall back to IB for inter-node traffic?
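
In other words, is the default behavior equivalent to explicitly requesting

        mpirun --mca btl sm,openib,self -np 16 ./a.out

or does the sm BTL have to be requested by hand?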

4) How can I select the high-speed transport, say DAPL or OFED IB verbs? Is there
any preference for a particular high-speed transport over QDR IB?
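
My (possibly wrong) understanding is that this is also done through the btl MCA
parameter, e.g.

        mpirun --mca btl openib,sm,self ...   (OFED IB verbs)
        mpirun --mca btl udapl,sm,self ...    (DAPL)

assuming the udapl BTL is built on our system.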

5) When we launch MPI jobs via PBS/TORQUE, do we have control over task and
thread placement on nodes/cores?
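
For instance, if I read the mpirun man page correctly, something like

        mpirun --bycore --bind-to-core -np 8 ./a.out

or a rankfile via "mpirun -rf myrankfile" should control placement -- are these
honored when the job is launched under TORQUE?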

6) Can we cleanly suspend/restart OMPI jobs with the above scheduler? Are there
any caveats on suspension/resumption of OMPI jobs?
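
For example, is it enough for the scheduler to send SIGTSTP/SIGCONT to mpirun,
or should we rely on BLCR-based checkpointing, roughly along the lines of

        ompi-checkpoint <PID of mpirun>
        ompi-restart <global snapshot handle>

(that is my reading of the docs; I may have the exact usage wrong).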

7) Do you have any performance data comparing OMPI vs., say, MVAPICH2 and
Intel MPI? This is not a political issue, since I am going to be providing all of
these MPI stacks to our users.

Thank you so much for the great s/w ...

best
Michael

% --------------------------------------------------------------------
% Michael E. Thomadakis, Ph.D.       Senior Lead Supercomputer Engineer/Res
% E-mail: miket AT tamu DOT edu      Texas A&M University
% web: http://alphamike.tamu.edu     Supercomputing Center
% Voice: 979-862-3931                Teague Research Center, 104B
% FAX: 979-847-8643                  College Station, TX 77843, USA
% --------------------------------------------------------------------