does anyone have a clue here?
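For what it's worth, a quick back-of-the-envelope check (my own sketch, not from the thread) of the formula Eloi quotes below, plugged with the btl_openib_receive_queues parameters from the original post (P,65536,256,192,128, i.e. a per-peer queue with buffer_size=65536, nb_buffers=256, low_buffer_count_watermark=192, credit_window_size=128), does land at roughly the 2 GB observed:

```python
# Parameters taken from "-mca btl_openib_receive_queues P,65536,256,192,128"
# and 128 MPI processes, as described in the original post.
nb_mpi_process = 128
buffer_size = 65536
nb_buffers = 256
low_buffer_count_watermark = 192
credit_window_size = 128

# Formula as quoted in the thread:
# memory_consumed = nb_mpi_process * nb_buffers *
#                   (buffer_size + low_buffer_count_watermark + credit_window_size)
memory_consumed = nb_mpi_process * nb_buffers * (
    buffer_size + low_buffer_count_watermark + credit_window_size
)
print(memory_consumed / 2**30)  # ~2.01 GiB, matching the observed RSS on rank 0
```

That only explains the magnitude, though, not why rank 0 alone pre-allocates it.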
On 22/04/2011 08:52, Eloi Gaudry wrote:
> it varies with the receive_queues specification *and* with the number
> of mpi processes: memory_consumed = nb_mpi_process * nb_buffers *
> (buffer_size + low_buffer_count_watermark + credit_window_size)
> On 04/22/2011 12:26 AM, Jeff Squyres wrote:
>> Does it vary exactly according to your receive_queues specification?
>> On Apr 19, 2011, at 9:03 AM, Eloi Gaudry wrote:
>>> i would like to get your input on this:
>>> when launching a parallel computation on 128 nodes using openib and
>>> the "-mca btl_openib_receive_queues P,65536,256,192,128" option, i
>>> observe a rather large resident memory consumption (2GB:
>>> 65536*256*128) on the process with rank 0 (and only this process)
>>> just after a call to MPI_Init.
>>> i'd like to know why the other processes don't behave the same:
>>> - other processes located on the same node don't use that amount of memory
>>> - neither do any of the processes located on other nodes
>>> i'm using OpenMPI-1.4.2, built with gcc-4.3.4 and
>>> '--enable-cxx-exceptions --with-pic --with-threads=posix' options.
>>> thanks for your help,
>>> Eloi Gaudry
Senior Product Development Engineer
Free Field Technologies
Company Website: http://www.fft.be
Direct Phone Number: +32 10 495 147