
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] huge VmRSS on rank 0 after MPI_Init when using "btl_openib_receive_queues" option
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-04-21 18:26:27


Does it vary exactly according to your receive_queues specification?
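For reference, the fields in a "P,65536,256,192,128" receive_queues entry are buffer size, buffer count, low watermark, and credit window. A minimal sketch of the memory estimate implied by the numbers in the post below, assuming each peer connection preallocates its own set of receive buffers (the function name is illustrative, not an Open MPI API):

```python
# Hedged sketch: estimate the memory preallocated by a per-peer (P)
# receive queue, assuming every peer connection gets num_buffers
# buffers of buffer_size bytes each.
def p_queue_memory(buffer_size, num_buffers, num_peers):
    """Return the total bytes preallocated across all peer connections."""
    return buffer_size * num_buffers * num_peers

# The spec discussed below: P,65536,256,... on a 128-node run.
total = p_queue_memory(65536, 256, 128)
print(total, "bytes =", total / 2**30, "GiB")  # 2147483648 bytes = 2.0 GiB
```

This matches the roughly 2 GB of resident memory reported after MPI_Init, which is why the question below asks whether the growth tracks the receive_queues specification exactly.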

On Apr 19, 2011, at 9:03 AM, Eloi Gaudry wrote:

> hello,
>
> i would like to get your input on this:
> when launching a parallel computation on 128 nodes using openib and the "-mca btl_openib_receive_queues P,65536,256,192,128" option, i observe a rather large resident memory consumption (2GB: 65536*256*128) on the process with rank 0 (and only that process) just after the call to MPI_Init.
>
> i'd like to know why the other processes don't behave the same way:
> - other processes located on the same node don't use that amount of memory
> - neither do any of the processes located on the other nodes
>
> i'm using OpenMPI-1.4.2, built with gcc-4.3.4 and '--enable-cxx-exceptions --with-pic --with-threads=posix' options.
>
> thanks for your help,
> éloi
>
> --
> Eloi Gaudry
> Senior Product Development Engineer
>
> Free Field Technologies
> Company Website: http://www.fft.be
> Direct Phone Number: +32 10 495 147
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/