Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] [OMPI users] huge VmRSS on rank 0 after MPI_Init when using "btl_openib_receive_queues" option
From: Eloi Gaudry (eg_at_[hidden])
Date: 2011-07-08 05:17:58


What I cannot understand is why this extra memory would be initialized
on rank 0 only. As far as I know, that doesn't make sense.
éloi

> On 22/04/2011 08:52, Eloi Gaudry wrote:
>> it varies with the receive_queues specification *and* with the number
>> of MPI processes: memory_consumed = nb_mpi_process * nb_buffers *
>> (buffer_size + low_buffer_count_watermark + credit_window_size)
>>
>> éloi
>>
>>
>> On 04/22/2011 12:26 AM, Jeff Squyres wrote:
>>> Does it vary exactly according to your receive_queues specification?
>>>
>>> On Apr 19, 2011, at 9:03 AM, Eloi Gaudry wrote:
>>>
>>>> Hello,
>>>>
>>>> I would like to get your input on this:
>>>> when launching a parallel computation on 128 nodes using openib and
>>>> the "-mca btl_openib_receive_queues P,65536,256,192,128" option, I
>>>> observe a rather large resident memory consumption (2 GB:
>>>> 65536*256*128 bytes) on the process with rank 0 (and only on this
>>>> process) just after a call to MPI_Init.
>>>>
>>>> I'd like to know why the other processes don't behave the same way:
>>>> - other processes located on the same node don't use that amount of
>>>> memory
>>>> - nor do any of the processes located on other nodes
>>>>
>>>> I'm using Open MPI 1.4.2, built with gcc 4.3.4 and the
>>>> '--enable-cxx-exceptions --with-pic --with-threads=posix' options.
>>>>
>>>> Thanks for your help,
>>>> éloi
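
For concreteness, here is a minimal sketch that evaluates the memory
formula quoted above with the values from this thread. The mapping of
the receive_queues fields to names (per-peer queue: buffer size, buffer
count, low watermark, credit window) is an assumption read off the spec
string, and the formula is the one posted in the thread rather than a
verified Open MPI internal; it reproduces the roughly 2 GB figure
reported for rank 0.

    /* Sketch: evaluate the memory model quoted in the thread for the
     * receive_queues spec "P,65536,256,192,128" on a 128-process job.
     * Field names are assumptions inferred from the spec string. */
    #include <stdio.h>

    int main(void)
    {
        const long long buffer_size   = 65536; /* bytes per receive buffer */
        const long long nb_buffers    = 256;   /* buffers posted per peer  */
        const long long low_watermark = 192;   /* refill threshold (count) */
        const long long credit_window = 128;   /* flow-control window      */
        const long long nb_procs      = 128;   /* MPI processes in the job */

        /* memory_consumed = nb_mpi_process * nb_buffers *
         *   (buffer_size + low_buffer_count_watermark + credit_window_size) */
        long long total = nb_procs * nb_buffers *
                          (buffer_size + low_watermark + credit_window);

        printf("predicted: %lld bytes (%.2f GiB)\n",
               total, total / (1024.0 * 1024.0 * 1024.0));
        /* dominant term alone: 65536 * 256 * 128 = 2^31 = 2 GiB exactly */
        return 0;
    }

The watermark and credit-window terms add well under 1% to the dominant
buffer_size term, so the prediction matches the observed 2 GB either
way; what the formula does not explain, and what this thread is asking,
is why the allocation shows up on rank 0 alone.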