Open MPI User's Mailing List Archives

From: Jeff Pummill (jpummil_at_[hidden])
Date: 2007-07-23 16:51:42


Hmmm... compilation SEEMED to go OK with the following ./configure line...

./configure --prefix=/nfsutil/openmpi-1.2.3
--with-mvapi=/usr/local/topspin/ CC=icc CXX=icpc F77=ifort FC=ifort
CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64
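
Followed by the usual build and install into the NFS prefix (nothing
exotic, just the standard automake sequence; noting it here for
completeness):

make all
make install    # installs into /nfsutil/openmpi-1.2.3 per --prefix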

And the following looks promising...

./ompi_info | grep mvapi
MCA btl: mvapi (MCA v1.0, API v1.0.1, Component v1.2.3)

I have a post-doc who will test some application code in the next day
or so. Maybe the old stuff works just fine!
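
Once he does, I figure a quick run that forces the mvapi BTL should tell
us definitively whether the old Topspin stack is actually being used
(the hostfile and binary names below are just placeholders for whatever
he ends up running):

# Restrict Open MPI to the mvapi BTL (plus "self" for loopback) so a
# silent fallback to TCP shows up as an error rather than a slow run
mpirun --mca btl mvapi,self --hostfile ./hosts -np 4 ./app_test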

Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701

Jeff Pummill wrote:
> Good morning all,
>
> I have been very impressed so far with Open MPI on one of our smaller
> clusters running GNU compilers and Gig-E interconnects, so I am
> considering a build on our large cluster. The potential problem is that
> the compilers are Intel 8.1 versions and the InfiniBand is supported by
> three-year-old Topspin (now Cisco) drivers and libraries. Basically,
> this is a cluster that runs a very heavy workload using MVAPICH, so we
> have adopted the "if it ain't broke, don't fix it" methodology; as a
> result, all of the drivers, libraries, and compilers are about three
> years old.
>
> Would it be reasonable to expect Open MPI 1.2.3 to build and run in
> such an environment?
>
> Thanks!
>
> Jeff Pummill
> University of Arkansas