
Subject: Re: [OMPI users] strange IMB runs
From: Michael Di Domenico (mdidomenico4_at_[hidden])
Date: 2009-07-30 11:37:04

On Thu, Jul 30, 2009 at 10:08 AM, George Bosilca <bosilca_at_[hidden]> wrote:
> The leave-pinned setting will not help in this context. It can only help for
> devices that are capable of real RMA operations and require pinned memory,
> which unfortunately is not the case for TCP. What is [really] strange about
> your results is that you get 4 times better bandwidth over TCP than over
> shared memory. Over TCP there are 2 extra memory copies (compared with sm)
> plus a bunch of syscalls, so there is absolutely no reason to get better
> performance.
> Is the Open MPI version something you compiled, or did it come installed
> with the OS? If you compiled it, can you please provide us with the
> configure line?
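(A note for anyone trying to reproduce the sm-vs-TCP comparison above: the
transport can be forced explicitly and the leave-pinned behaviour toggled at
run time with standard Open MPI MCA parameters. The process count, binary
path, and benchmark name below are only illustrative placeholders, not the
exact commands used in this thread:

  mpirun -np 2 --mca btl self,sm  ./IMB-MPI1 PingPong
  mpirun -np 2 --mca btl self,tcp ./IMB-MPI1 PingPong
  mpirun -np 2 --mca btl self,tcp --mca mpi_leave_pinned 1 ./IMB-MPI1 PingPong

Forcing "self,tcp" makes even on-node traffic go through the TCP stack, which
is exactly the case being questioned.)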

Open MPI v1.3 was compiled from source with only --prefix on the configure
line, no other options.
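
For completeness, that means the build was just the stock source build; apart
from the install path (shown here only as a placeholder), it was along the
lines of:

  ./configure --prefix=/opt/openmpi-1.3
  make all install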