
Subject: Re: [OMPI users] openmpi shared memory feature
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-10-27 13:00:57

On Oct 27, 2012, at 12:47 PM, Mahmood Naderan wrote:

> >Because communicating through shared memory when sending messages between processes on the same server is far faster than going through a network stack.
> I see... But that is not good for diskless clusters, am I right? Assume the processes are on a node which has no disk. In this case, their communication goes through the network (from computing node to server), then I/O, and then the network again (from server to computing node).

I don't quite understand what you're saying -- what exactly is your distinction between "server" and "computing node"?

For the purposes of my reply, I use the word "server" to mean "one computational server, possibly containing multiple processors, a bunch of RAM, and possibly one or more disks." For example, a 1U "pizza box" style rack enclosure containing the guts of a typical x86-based system.

You seem to be conflating two orthogonal things: whether a server has a disk, and how MPI messages flow from one process to another.

When using shared memory, the message starts in one process, gets copied into shared memory, and then gets copied out into the other process. If you use the knem Linux kernel module, we can avoid the intermediate shared-memory copy in some cases and copy the message directly from one process's memory to the other's.
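To make that concrete, here's a minimal sketch of the kind of exchange we're talking about (the buffer contents are just for illustration). If both ranks are launched on the same server, Open MPI carries the MPI_Send/MPI_Recv below over shared memory; if they land on different servers, the same unmodified code goes over the network instead. No disk is touched either way:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank;
        char buf[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Rank 0 sends; on the same server this is a copy into
               shared memory, not a trip through the network stack. */
            strncpy(buf, "hello from rank 0", sizeof(buf));
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Rank 1 copies the message out of shared memory
               (or off the NIC, if the ranks are on different servers). */
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 got: %s\n", buf);
        }

        MPI_Finalize();
        return 0;
    }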

Whether or not the server has a disk is irrelevant to any of this.
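If you want to see which path is being used, you can steer Open MPI's transport selection from the command line. The lines below are a sketch ("./ring" is a placeholder for your MPI binary, and exact component and parameter names vary by Open MPI version, so verify them with ompi_info):

    # Restrict on-node traffic to the shared-memory BTL
    # (plus "self" for a process sending to itself):
    mpirun -np 2 --mca btl self,sm ./ring

    # Sketch: if your build includes knem support, the sm BTL can use it
    # for single-copy transfers; check the parameter name with
    #   ompi_info --param btl sm
    mpirun -np 2 --mca btl self,sm --mca btl_sm_use_knem 1 ./ring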

Jeff Squyres