
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] huge VmRSS on rank 0 after MPI_Init when using "btl_openib_receive_queues" option
From: Eloi Gaudry (eg_at_[hidden])
Date: 2011-04-22 02:52:41


it varies with the receive_queues specification *and* with the number of
mpi processes: memory_consumed = nb_mpi_processes * nb_buffers *
(buffer_size + low_buffer_count_watermark + credit_window_size)
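for reference, that estimate can be sketched like this (the function name and the parsing are mine for illustration, not Open MPI code; it just applies the formula above to a per-peer "P,..." receive_queues string):

```python
# Rough sketch of the memory estimate from this thread; parameter names
# are illustrative, not Open MPI internals.

def estimate_prepost_memory(nb_mpi_processes, receive_queues):
    """Apply the thread's formula:
    nb_mpi_processes * nb_buffers * (buffer_size + low_watermark + credit_window)
    """
    # A per-peer spec looks like "P,65536,256,192,128":
    # type, buffer_size, nb_buffers, low_buffer_count_watermark, credit_window_size
    _, buffer_size, nb_buffers, low_watermark, credit_window = receive_queues.split(",")
    return nb_mpi_processes * int(nb_buffers) * (
        int(buffer_size) + int(low_watermark) + int(credit_window)
    )

# 128 ranks with the spec from the original post:
mem = estimate_prepost_memory(128, "P,65536,256,192,128")
print(f"{mem / 2**30:.2f} GiB")  # roughly 2 GiB
```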

éloi

On 04/22/2011 12:26 AM, Jeff Squyres wrote:
> Does it vary exactly according to your receive_queues specification?
>
> On Apr 19, 2011, at 9:03 AM, Eloi Gaudry wrote:
>
>> hello,
>>
>> i would like to get your input on this:
>> when launching a parallel computation on 128 nodes using openib and the "-mca btl_openib_receive_queues P,65536,256,192,128" option, i observe a rather large resident memory consumption (2GB: 65536*256*128) on the process with rank 0 (and only this process) just after a call to MPI_Init.
>>
>> i'd like to know why the other processes don't behave the same way:
>> - other processes located on the same node don't use that amount of memory
>> - neither do any of the processes located on other nodes
>>
>> i'm using OpenMPI-1.4.2, built with gcc-4.3.4 and '--enable-cxx-exceptions --with-pic --with-threads=posix' options.
>>
>> thanks for your help,
>> éloi
>>
>

-- 
Eloi Gaudry
Senior Product Development Engineer
Free Field Technologies
Company Website: http://www.fft.be
Direct Phone Number: +32 10 495 147