
Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] [Fwd: mpi alltoall memory requirement]
From: Pavel Shamis (Pasha) (pashash_at_[hidden])
Date: 2009-04-26 04:11:24


You may try XRC; it should decrease the openib BTL memory footprint,
especially on a multi-core system like yours. The following option will
switch the default OMPI config to XRC:
" --mca btl_openib_receive_queues
X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32"
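
For example, with the 192-rank IMB run from this thread (the -np value and
the binary path are just placeholders for your setup), the full launch line
would look something like:

  mpirun -np 192 \
      --mca btl_openib_receive_queues \
      "X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32" \
      /opt/IMB-MPI1 alltoall

You can combine it with "--mca btl openib,self" if you also want to disable
the shared-memory BTL, as Jeff suggests below.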

Regards,
Pasha

Jeff Squyres wrote:
> I think Ashley still has the right general idea.
>
> You need to see how much memory the OS is taking off the top. Then
> see how much memory the application images consume (before using any
> memory). Open MPI itself then takes up a bunch of memory for its own
> internal buffering. Remember, too, that Open MPI will default to both
> shared memory and OpenFabrics -- both of which have their own,
> separate buffering.
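>
> (A quick way to get those first two numbers is to look at free memory on
> each node before starting the job, e.g. something like running "free -m"
> on every node via ssh or pdsh and comparing it against the installed RAM.)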
>
> You can disable shared memory, if you want, by specifying
>
> mpirun --mca btl openib,self ...
>
> ("sm" is the shared memory btl, so by not specifying it, Open MPI
> won't use it)
>
> If you have recent Mellanox HCAs, you should probably be using OMPI's
> XRC support, which will decrease OMPI's memory usage even further
> (I'll let Mellanox comment on this further if they want).
>
> Finally, there's a bunch of information on this FAQ page describing
> how to tune Open MPI's OpenFabrics usage:
>
> http://www.open-mpi.org/faq/?category=openfabrics
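>
> (Before diving into that page, something like "ompi_info --param btl openib"
> will list the btl_openib_* parameters your build actually supports, along
> with their current values, so you can see which knobs are available.)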
>
>
>
> On Apr 23, 2009, at 1:35 PM, Viral Mehta wrote:
>
>> Yes, of course I am sure the provider libs/drivers are fine, and I am
>> using OFED-1.4.1-rc3.
>> I am running 24 processes per node on an 8-node cluster, so as I showed
>> in my calculation, I would need 36G of memory.
>> I just need to know whether my calculation has some obvious flaw, and/or
>> whether I am missing anything about setting up the system environment,
>> or anything like that.
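>>
>> (For reference, the rough arithmetic behind the 36G figure, assuming
>> IMB-MPI1's default maximum message size of 4 MiB: each rank needs a
>> 192 x 4 MiB = 768 MiB send buffer plus the same again for the receive
>> buffer, i.e. 1.5 GiB per rank, and 24 ranks per node gives 36 GiB of
>> application buffers per node. With -msglen capping the message size at
>> 64 KiB, the same arithmetic gives only 192 x 64 KiB x 2 x 24 = 576 MiB
>> per node.)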
>>
>> On Thu, Apr 23, 2009 at 10:36 PM, gossips J <polk678_at_[hidden]> wrote:
>> What NIC are you using?
>> What OFED build?
>> Are you sure the provider libs/drivers are healthy?
>>
>> It is strange that you get out-of-memory failures in alltoall tests; that
>> should not happen on a 32G system!
>>
>> -polk.
>>
>> On 4/23/09, viral.vkm_at_[hidden] <viral.vkm_at_[hidden]> wrote:
>> Or any link which helps to understand the system requirements for a
>> certain test scenario?
>>
>>
>> On Apr 23, 2009 12:42pm, viral.vkm_at_[hidden] wrote:
>> > Hi
>> > Thanks for your response.
>> > However, I am running
>> > mpiexec .... -ppn 24 -n 192 /opt/IMB-MPI1 alltoall -msglen /root/temp
>> >
>> > And the file /root/temp contains entries only up to size 65535. That
>> > means the alltoall test will run only up to 65K message sizes.
>> >
>> > So in that case I should need far less memory, yet even then the test
>> > runs out of memory. Could someone please help me understand the
>> > scenario? Or do I need to switch to some other algorithm, set some
>> > other environment variables, or anything like that?
>> >
>> > On Apr 22, 2009 6:43pm, Ashley Pittman <ashley_at_[hidden]> wrote:
>> > > On Wed, 2009-04-22 at 12:40 +0530, vkm wrote:
>> > > > The same amount of memory is required for recvbuf. So at the least
>> > > > each node should have 36GB of memory.
>> > > >
>> > > > Am I calculating right? Please correct.
>> > >
>> > > Your calculation looks correct; the conclusion is slightly wrong,
>> > > however. The application buffers will consume 36GB of memory; the
>> > > rest of the application, any comms buffers and the usual OS overhead
>> > > will be on top of this, so putting only 36GB of RAM in your nodes
>> > > will still leave you short.
>> > >
>> > > Ashley,
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>>
>>
>> --
>> Thanks,
>> Viral Mehta
>
>