Teige, Scott W wrote:
> I have observed strange behavior with an application running with
> OpenMPI 1.2.8, OFED 1.2. The application runs in two "modes", fast
> and slow. The execution time is either within one second of 108 sec.
> or within one second of 67 sec. My cluster has 1 Gig Ethernet and
> DDR Infiniband, so the byte transfer layer (BTL) is a prime suspect.
> So, is there a way to determine (from my application code) which
> BTL is really being used?
You can force a particular BTL with the btl MCA parameter. Specify:
--mca btl openib,sm,self
and OpenMPI will use InfiniBand and shared memory for communication, or:
--mca btl tcp,sm,self
and OpenMPI will use TCP and shared memory for communication.
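For example, on the full mpirun command line (the executable name ./app and the rank count here are hypothetical):

    mpirun -np 16 --mca btl openib,sm,self ./app
    mpirun -np 16 --mca btl tcp,sm,self ./app

Running the application once with each setting and comparing against your 67 sec. and 108 sec. timings should tell you which BTL each "mode" corresponds to. As far as I know there is no MPI call to query the BTL from application code, since BTL selection is internal to OpenMPI, but you can make the BTL framework report what it selects at startup with its verbosity parameter, e.g.:

    mpirun -np 16 --mca btl_base_verbose 30 ./app

(btl_base_verbose is an OpenMPI MCA verbosity parameter; the exact output format varies between versions.)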