On Apr 4, 2011, at 10:30 AM, Borenstein, Bernard S wrote:
> We have added clusters with different interconnects and decided to build one Open MPI 1.4.3 version to handle all the possible interconnects
> and run everywhere. I have two questions about this:
> 1. Is there a way for Open MPI to print out the interconnect it selected at run time? I am asking for an option similar to the prot argument in HP-MPI/Platform MPI, which prints the interconnect selected. If this is not implemented, I would like to suggest it as an enhancement.
Unfortunately, it is not implemented. There's a long-standing ticket requesting this feature -- it's unfortunately not a simple task, since Open MPI opens connections lazily.
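As a partial workaround, the BTL framework's verbose output will log which transport components are opened and used as connections are made. The exact messages vary by version, so treat this as a sketch: `btl_base_verbose` is a standard MCA verbosity parameter, but the level value and application name below are assumptions.

```shell
# Ask the BTL framework to log component selection and connection activity.
# Higher levels print more detail; 30 is an assumed reasonable starting point.
mpirun --mca btl_base_verbose 30 -np 2 ./my_mpi_app
```

Because connections are opened lazily, the relevant messages may appear only after ranks actually communicate, not at startup.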
> 2. I have built Open MPI to allow tcp, mx, gm, and ib. When running on a TCP-only cluster and specifying --mca btl tcp,sm,self, I get errors like this:
> [erb426:08967] Error in mx_init (error No MX device entry in /dev.)
Try also specifying:
mpirun --mca pml ob1 ...
See the README for details of the "ob1" vs. "cm" PMLs.
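For example, on the TCP-only cluster the full command line might look like the following (the application name and process count are placeholders):

```shell
# Force the ob1 PML so the MX-capable "cm" PML (and its MTLs) are never
# initialized, and restrict the BTLs to TCP plus shared memory and self.
mpirun --mca pml ob1 --mca btl tcp,sm,self -np 4 ./my_mpi_app
```

The mx_init error occurs because, without pinning the PML, Open MPI probes the "cm" PML's MX support during startup even when the btl parameter excludes MX.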