
Open MPI User's Mailing List Archives


From: Brian W. Barrett (bbarrett_at_[hidden])
Date: 2007-01-17 11:25:07

I would guess that for Open MPI v1.1, we will use more vmem than
MPT. Our strategy early on was to get a huge buffer and never run out
of resources. Obviously, that's not a good long-term plan ;). We've
scaled this down considerably in v1.2 (now in beta), where by default
we use about 16MB/process for shared memory, up to a 512MB cap. This
can be tuned down quite a bit -- we need about 128KB/process of
shared memory to run successfully, and probably 4-8MB/process to run
with any efficiency.
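
If it helps, here is a rough sketch of how to turn that knob. The exact
MCA parameter names for the shared memory pool differ between releases,
so treat the parameter below as an example from a 1.2-series build and
confirm it against what ompi_info reports for your install:

   # list the shared memory pool parameters your build actually supports
   ompi_info --param mpool sm

   # then cap the shared memory allocation, e.g. at 64MB
   mpirun --mca mpool_sm_max_size 67108864 -np 16 ./a.out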


On Jan 16, 2007, at 10:49 PM, <aaron.mcdonough_at_[hidden]> wrote:

> I found that MPT uses a *lot* of vmem for buffering/mem mapping. We
> schedule based on requested vmem, so this can be a problem. Do you
> know how vmem usage for buffering compares with Open MPI?
> Cheers,
> Aaron
> -----Original Message-----
> From: users-bounces_at_[hidden] On Behalf Of Brian W. Barrett
> Sent: Wednesday, 17 January 2007 1:49 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] openmpi on altix
> On Jan 16, 2007, at 4:29 PM, Brock Palen wrote:
>> What is the state of Open MPI on an SGI Altix? How does it compare to
>> MPT? I assume for all operations OMPI will use the sm BTL, so all
>> others (other than self) could be disabled. Are there any other
>> tweaks users use? Or is OMPI even recommended on an Altix?
> We've run Open MPI on the Altix here at Los Alamos. For point-to-
> point communication, we're slightly slower than MPT. But for
> collectives, we're much slower. We just haven't done any work on
> shared memory collectives, especially on platforms where the memory
> hierarchies are as deep as they are on the Altix. That being said,
> it should work and is a viable option if there's a feature of Open
> MPI that a user needs that is not available in MPT.
> Brian
> --
> Brian Barrett
> Open MPI Team, CCS-1
> Los Alamos National Laboratory
> _______________________________________________
> users mailing list
> users_at_[hidden]
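
On the question quoted above about restricting Open MPI to the shared
memory BTL: yes, that works from the command line. A minimal sketch,
assuming the standard component names (ompi_info will show which BTLs
were actually built):

   # use only the shared memory and self BTLs
   mpirun --mca btl self,sm -np 8 ./a.out

   # check which BTL components are available in this install
   ompi_info | grep btl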

   Brian Barrett
   Open MPI Team, CCS-1
   Los Alamos National Laboratory