Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] What flags for configure for a single machine installation?
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-06-04 21:23:28


On Jun 4, 2009, at 12:01 PM, DEVEL Michel wrote:

> 1°) In fact, I just want to install Open MPI on my machine (a single i7
> 920)
> to be able to develop parallel codes (using eclipse/photran/PTP)
> that I
> will execute on a cluster later (using SGE batch queue system).
> I therefore wonder what configure flags I should use for a basic
> single-machine installation?
>

Nope, you shouldn't need anything special.
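
For reference, a minimal single-machine build can be as simple as the
following (the prefix and job count are only examples; keeping --with-sge
is harmless and will matter once you move to the cluster):

  ./configure --prefix=$HOME/openmpi --with-sge   # any writable prefix works
  make -j4 all
  make install
  # make the wrappers and shared libraries visible:
  export PATH=$HOME/openmpi/bin:$PATH
  export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH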

> 2°) For GCC, "./configure --prefix=/usr/local --with-sge
> --enable-static" worked, but when I try to statically link a test
> code with
> gfortran -m64 -O3 -fPIC -fopenmp -fbounds-check -pthread --static
> testmpirun.f -o bin/testmpirun_gfortran_static -I/usr/local/include
> -L/usr/local/lib -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl
> -lnsl -lutil -lm -ldl
> the link step fails because it cannot find the InfiniBand routines
> (ibv_*).
>

Per the other thread, static linking with OpenFabrics is not for the
meek. See the OMPI FAQ in the OpenFabrics section for a question on
exactly this issue.
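
If you do stay with dynamic linking, note that the wrapper compilers add
all of the MPI libraries and flags for you, so the long -l list above
should not be necessary; something like this (file names taken from your
command line) should be enough:

  mpif77 -m64 -O3 -fbounds-check testmpirun.f -o bin/testmpirun_gfortran
  mpif77 --showme    # prints the underlying command the wrapper would run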

> If I use dynamic linking, it works but asks me for a password when I
> try to do
> "/usr/bin/local/mpirun -np 4 bin/testmpirun_gfortran_static" though I
> have an a priori valid .rhosts file...
>

Also per the other thread, this is not a static linking/dynamic
linking issue.
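
As an aside: Open MPI's default launcher goes through ssh rather than
rsh, so a .rhosts file is generally not consulted. If you are prompted
for a password even on a single machine, passwordless ssh to localhost
is the usual remedy; a quick sketch, assuming OpenSSH:

  ssh-keygen -t rsa                                # empty passphrase
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys
  ssh localhost hostname                           # should not prompt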

> 3°) For the Intel compiler suite,
> "./configure --prefix=/opt/intel/Compiler/11.0/074 --with-sge
> --enable-static CC='icc' CFLAGS=' -xHOST -ip -O3 -C'
> LDFLAGS='-xHOST -ip -O3 -C -static-intel' AR='ar' F77='ifort'
> FC='ifort' FFLAGS=' -xHOST -ip -O3 -C' FCFLAGS=' -xHOST -ip -O3 -C'
> CXX='icpc' CXXFLAGS=' -xHOST -ip -O3 -C'"
> worked, but I have the same problem with missing ibv_* routines if I
> try a static link with
> "ifort -Bdynamic -fast -C -openmp -check noarg_temp_created
> testmpirun.f -o bin/testmpirun_ifort_dynamic
> -I/opt/intel/Compiler/11.0/074/include
> -L/opt/intel/Compiler/11.0/074/lib -lmpi_f90 -lmpi_f77 -lmpi
> -lopen-rte -lopen-pal -ldl -lnsl -lutil -lm -ldl"
>
> (Remark: if I add "-static" to LDFLAGS in configure, the build stops
> while making opal_wrapper.)
>

Is there a reason you need static linking? It should be tremendously
simpler to get dynamic linking working.
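
With the Intel build, the same wrapper approach applies; something along
these lines (path taken from your --prefix) should give a dynamically
linked executable without listing the MPI libraries by hand:

  /opt/intel/Compiler/11.0/074/bin/mpif90 -fast testmpirun.f \
      -o bin/testmpirun_ifort_dynamic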

> If I use dynamic linking, I get the executable, but then
> /opt/intel/Compiler/11.0/074/bin/mpirun -np 4
> ../../bin/testmpirun_ifort_dynamic
> gives
> --------------------------------------------------------------------------
> mpirun noticed that process rank 0 with PID 16664 on node mn2s-devel
> exited on signal 11 (Segmentation fault).
> --------------------------------------------------------------------------
> 2 total processes killed (some possibly by mpirun during cleanup)
>

What is your MPI application? Are you able to run simple MPI
applications, such as "hello world" and "ring"? (These are in the
examples/ directory of the OMPI tarball.)
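
For instance, from the top of the Open MPI source tree, something like
this should be enough to exercise the installation:

  cd examples
  make                      # builds the hello/ring examples
  mpirun -np 4 ./hello_c
  mpirun -np 4 ./ring_c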

-- 
Jeff Squyres
Cisco Systems