Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2006-07-28 17:18:09


On 7/26/06 5:55 PM, "Michael Kluskens" <mklus_at_[hidden]> wrote:

>> How is the message passing of Open MPI implemented when I have,
>> say, 4 nodes with 4 processors (SMP) each, and the nodes connected by
>> Gigabit Ethernet? In other words, how does it manage SMP nodes when I
>> want to use all CPUs, each with its own process? Does it take
>> any advantage of the SMP at each node?
>
> Someone can give you a more complete/correct answer but I'll give you
> my understanding.
>
> All communication in Open MPI is handled via the MCA module (term?)

We call them "components" or "plugins"; a "module" is typically an instance
of one of those plugins (e.g., if you have two Ethernet NICs with TCP
interfaces, you'll get two instances -- modules -- of the TCP BTL component).
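
By the way, a quick way to see which BTL components are built into your
installation is ompi_info; the grep below is just one convenient way to
filter its output:

  shell$ ompi_info | grep btl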
 
> self - process communicating with itself
> sm - ... via shared memory to other processes
> tcp - ... via TCP
> openib - ... via the InfiniBand OpenIB stack
> gm & mx - ... via Myrinet GM/MX
> mvapi - ... via InfiniBand Mellanox verbs

All correct.
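
If you ever want to restrict which BTLs get used, you can list them
explicitly with an MCA parameter on the mpirun command line, something
like the following (the executable name is just a placeholder; note that
"self" should always be included in the list):

  shell$ mpirun --mca btl self,sm,tcp -np 16 ./my_mpi_app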

> If you launch your process so that four processes are on a node then
> those would use shared memory to communicate.

Also correct.
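
For example, with a hostfile along these lines (the hostnames and
executable name are placeholders), launching 16 processes puts 4 on each
node; processes on the same node will then talk through the sm BTL, and
processes on different nodes through tcp:

  node1 slots=4
  node2 slots=4
  node3 slots=4
  node4 slots=4

  shell$ mpirun --hostfile myhosts -np 16 ./my_mpi_app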

Just chiming in with verifications! :-)

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems