Subject: Re: [OMPI users] Low Open MPI performance on InfiniBand and shared memory?
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-07-09 05:22:20

Did you remember to set --bind-to-core or --bind-to-socket on the cmd line? Otherwise, the processes are running unbound, which makes a significant difference to performance.
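For example, a minimal sketch (assuming an Open MPI 1.4-era mpirun and a hypothetical benchmark binary ./bench):

    mpirun -np 2 --bind-to-core --report-bindings ./bench

--report-bindings makes mpirun print where each process ended up bound, so you can verify that the binding actually took effect.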

On Jul 9, 2010, at 3:15 AM, Andreas Schäfer wrote:

> Maybe I should add that for the tests I ran the benchmarks with two MPI
> processes: for InfiniBand, one process per node; for shared memory,
> both processes were located on one node.
> --
> ==========================================================
> Andreas Schäfer
> HPC and Grid Computing
> Chair of Computer Science 3
> Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
> +49 9131 85-27910
> PGP/GPG key via keyserver
> I'm a bright...
> ==========================================================
> (\___/)
> (+'.'+)
> (")_(")
> This is Bunny. Copy and paste Bunny into your
> signature to help him gain world domination!
> _______________________________________________
> users mailing list
> users_at_[hidden]
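
For reference, the two layouts described in the quoted message could be launched roughly as follows (a sketch, assuming hypothetical hostnames node01/node02 and a hypothetical benchmark binary ./bench):

    # InfiniBand: one process on each of two nodes
    mpirun -np 2 -npernode 1 --host node01,node02 --bind-to-core ./bench

    # Shared memory: both processes on the same node
    mpirun -np 2 --host node01,node01 --bind-to-core ./bench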