Open MPI User's Mailing List Archives

From: Peter Kjellström (cap_at_[hidden])
Date: 2005-08-23 05:48:24


First I'd like to say that I'm really happy and excited that public access to
svn is now open :-)

Here is what went fine: check-out, autogen, configure, make, ompi_info, and a
simple MPI app (both build and run!!!)
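For reference, the whole sequence I followed looks roughly like this (the svn
URL, prefix, and test program names here are just placeholders from my notes,
not anything official):

```shell
# Check out the Open MPI trunk (URL a placeholder; use the announced one)
svn co http://svn.open-mpi.org/svn/ompi/trunk ompi-trunk
cd ompi-trunk

# Generate the configure script from the fresh svn checkout
./autogen.sh

# Configure, build, and install (prefix is just an example)
./configure --prefix=/usr/local/openmpi-svn
make all install

# Sanity checks: list components, then build and run a trivial MPI program
ompi_info
mpicc hello.c -o hello
mpirun -np 2 ./hello
```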

Now I'd like to control which channels/transports/networks the data flows
over... I configured and built ompi against mvapi (mellanox ibgd-1.8.0) and
as far as I can tell that went well. Judging by the behaviour of the tests I
have done, it defaults to tcp (over ethernet in my case). How do I select the
mvapi transport instead?
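From what I've gathered about the MCA framework so far, I'd have expected
something along these lines to restrict the btl components (the exact
parameter names and syntax are an assumption on my part, which is partly why
I'm asking):

```shell
# Restrict the point-to-point layer to the mvapi (InfiniBand) and
# self (loopback) btl components, excluding tcp entirely
mpirun --mca btl mvapi,self -np 2 ./my_mpi_app

# Or set it in the environment so it applies to all subsequent runs
export OMPI_MCA_btl=mvapi,self
mpirun -np 2 ./my_mpi_app
```

Is that the intended mechanism, and is "self" required alongside mvapi?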

Here's some detailed information:
ompi-version: 1.0a1r6976
configure : --prefix=/usr/local/openmpi-svn6976/intel-8.1e-027 \
compilers : icc, ifort 8.1.027 (64-bit for em64t)
os : centos-4.1 64-bit (el4u1 rebuild)
kernel : 2.6.9-11smp
mvapi : mellanox ibgd-1.8.0
ompi_info | grep -i mvapi:
             MCA mpool : mvapi (MCA v1.0, API v1.0, Component v1.0)
             MCA btl : mvapi (MCA v1.0, API v1.0, Component v1.0)
hardware : dual Xeon Nocona, 2 GiB mem, Mellanox PCI-Express HCAs
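In case it's useful for diagnosing this, I can also dump the parameters the
btl components expose (again, the exact ompi_info options here are my best
guess from the help output):

```shell
# List all MCA parameters of the mvapi btl component
ompi_info --param btl mvapi

# List the parameters of every btl component
ompi_info --param btl all
```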


  Peter Kjellström               |
  National Supercomputer Centre  |
  Sweden                         |

  • application/pgp-signature attachment: stored