Open MPI User's Mailing List Archives

From: aaron.mcdonough_at_[hidden]
Subject: Re: [OMPI users] openmpi on altix
Date: 2007-01-17 00:49:32

I found that MPT uses a *lot* of virtual memory (vmem) for buffering
and memory mapping. We schedule jobs based on requested vmem, so this
can be a problem. Do you know how vmem usage for buffering compares
with Open MPI?
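One rough way I check a rank's actual vmem on the Altix (it runs
Linux) is to read /proc; the PID below is just a placeholder:

   # VmSize reports the process's total virtual memory
   grep VmSize /proc/12345/status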


-----Original Message-----
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
Behalf Of Brian W. Barrett
Sent: Wednesday, 17 January 2007 1:49 PM
To: Open MPI Users
Subject: Re: [OMPI users] openmpi on altix

On Jan 16, 2007, at 4:29 PM, Brock Palen wrote:

> What is the state of Open MPI on an SGI Altix? How does it compare
> to MPT? I assume that for all operations OMPI will use the sm BTL,
> so all others (other than self) could be disabled. Are there any
> other tweaks users use? Or is OMPI even recommended on an Altix?

We've run Open MPI on the Altix here at Los Alamos. For point-to-
point communication, we're slightly slower than MPT. But for
collectives, we're much slower. We just haven't done any work on
shared memory collectives, especially on platforms where the memory
hierarchies are as deep as they are on the Altix. That being said,
it should work and is a viable option if there's a feature of Open
MPI that a user needs that is not available in MPT.
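
To answer the BTL question above: yes, you can restrict Open MPI to
just the shared memory and self transports with an MCA parameter. A
minimal sketch (the process count and executable name are
placeholders):

   # enable only the shared memory (sm) and self BTLs
   mpirun --mca btl sm,self -np 4 ./my_app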


   Brian Barrett
   Open MPI Team, CCS-1
   Los Alamos National Laboratory