Open MPI User's Mailing List Archives

From: Galen Shipman (gshipman_at_[hidden])
Date: 2006-02-08 17:16:45


Sorry, more questions to answer:

> On the other hand I am not sure it could even work at all, as whenever I
> tried at run-time to limit the list to just one transport (be it tcp or
> openib, btw), mpi apps would not start.
>
You need to specify both the transport and self, such as:
mpirun -mca btl self,tcp

The self BTL is the simple loopback path (a process sending to itself); leaving it out may be why the apps would not start.
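For example (the process count and executable name below are just placeholders), restricting a run to a single transport plus self would look like:

   # TCP only, plus the self loopback BTL
   mpirun -np 2 -mca btl self,tcp ./a.out

   # InfiniBand (openib BTL) only, plus self
   mpirun -np 2 -mca btl self,openib ./a.out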

> Either way, I'm curious if it's even worth trying and if there's other
> cuts that can be made to shave off one us or two (ok, I'll settle for
> 1.5 :-) )
>

For heroic latencies on IB we would need to use small-message RDMA and
poll each peer's dedicated memory region for completion.
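To make that concrete, here is a rough, self-contained sketch of the pattern
(plain C, with a second thread standing in for the remote RDMA write; none of
the names below are Open MPI code). Each peer gets a dedicated receive region;
the sender deposits the payload and then a completion flag, and the receiver
simply spins on that flag instead of polling a completion queue:

/*
 * Conceptual sketch (not Open MPI code) of the small-message RDMA
 * "poll a per-peer region" pattern.  In a real IB implementation the
 * sender would use an RDMA write to deposit the payload into a memory
 * region dedicated to it on the receiver, with a completion flag
 * written last; the receiver busy-waits on that flag.  Here a second
 * thread stands in for the remote write so the example compiles and
 * runs on its own (gcc -pthread).
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE 64

/* Per-peer receive region: payload first, completion flag last, so the
 * flag only becomes visible after the payload bytes have landed. */
struct peer_region {
    char payload[MSG_SIZE];
    volatile int complete;
};

static struct peer_region region;   /* one such region per peer in practice */

static void *fake_remote_write(void *arg)
{
    (void)arg;
    /* Stand-in for the peer's RDMA write: payload, then flag. */
    strcpy(region.payload, "hello from peer 1");
    __sync_synchronize();           /* make the payload visible first */
    region.complete = 1;
    return NULL;
}

int main(void)
{
    pthread_t peer;
    pthread_create(&peer, NULL, fake_remote_write, NULL);

    /* Receive side: spin on the peer's dedicated region.  This is the
     * low-latency path -- no interrupt, no completion-queue poll. */
    while (!region.complete)
        ;                           /* busy-wait */
    __sync_synchronize();

    printf("got: %s\n", region.payload);
    pthread_join(peer, NULL);
    return 0;
}

The flag-written-last ordering is what makes the busy-wait safe in this
sketch; a real IB implementation would need an equivalent ordering guarantee
from the RDMA write itself.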

Thanks,

Galen