Dear Open MPI developer,
in this post:
I already reported a case in which Open MPI silently (without any word of
caution!) changed the transport from InfiniBand to IPoIB, thus losing the
performance advantage of the native InfiniBand transport.
Another case of 'secret' disabling of InfiniBand is the use of the 'multiple'
threading level (assuming threading support is enabled by
--enable-mpi-thread-multiple). Please have a look at
ompi/mca/btl/openib/btl_openib_component.c (v.1.6.1, ll.2504-2508). In these
lines a message about disabling the InfiniBand transport is composed, but it
normally never reaches the user, because it seems to be intended as debug output only.
The problem is not the fallback itself but the silent way it is done. The user
has hardly any chance to find out that the application is creeping along over
TCP, unless the performance loss is noticed and analysed.
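One way a user can guard against this silent fallback today (a sketch, assuming Open MPI 1.6.x with the openib BTL built in; `./a.out` stands in for the actual application) is to pin down the BTL list so the job aborts instead of falling back, or to raise the BTL verbosity so the disable message actually appears:

```shell
# Restrict the transports to InfiniBand (openib) plus shared memory and
# loopback; if openib cannot be used, the job then fails with an error
# instead of silently creeping over TCP.
mpirun --mca btl openib,sm,self ./a.out

# Alternatively, raise the BTL verbosity so the otherwise debug-only
# message about disabling openib is printed to the user.
mpirun --mca btl_base_verbose 100 ./a.out
```

The first variant is effectively the "forbid any fallback" policy mentioned below, applied per job by the user rather than site-wide.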
Well, we believe that disabling a high-performance network _without any word of
caution_ is a bad thing, because it may lead to a huge waste of resources (the
actual problem may go unnoticed for years - the program seems to work!). We
will probably forbid any fallback to work around such scenarios in the future.
Maybe a bit more verbosity at this place is a good idea?
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915