Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] Infiniband memory usage with XRC
From: Sylvain Jeaugey (sylvain.jeaugey_at_[hidden])
Date: 2010-05-19 08:01:22

On Mon, 17 May 2010, Pavel Shamis (Pasha) wrote:

> Sylvain Jeaugey wrote:
>>>> The XRC protocol seems to create shared receive queues, which is a good
>>>> thing. However, comparing memory used by an "X" queue versus an "S"
>>>> queue, we can see a large difference. Digging a bit into the code, we
>>>> found some
>>> So, do you see that X consumes more than S? This is really odd.
>> Yes, but that's what we see. At least after MPI_Init.
> What is the difference (in Kb)?
At 32 nodes x 32 cores (1024 MPI processes), I get a difference of ~2300
KB in favor of "S,65536,16,4,1" versus "X,65536,16,4,1".
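For context, these queue specifications are the values of the openib BTL's btl_openib_receive_queues MCA parameter, where the leading letter selects the queue type ("S" for shared receive queue, "X" for XRC) followed by buffer size and count settings. A sketch of how the two configurations might be compared (the process count matches the test above; the application name and BTL selection are illustrative):

```shell
# Run with shared receive queues (S): type,buffer_size,num_buffers,...
mpirun -np 1024 --mca btl openib,self \
       --mca btl_openib_receive_queues S,65536,16,4,1 ./app

# Same job with XRC receive queues (X) for comparison
mpirun -np 1024 --mca btl openib,self \
       --mca btl_openib_receive_queues X,65536,16,4,1 ./app
```

Measuring resident memory per process after MPI_Init under each setting would show the ~2300 KB difference reported above.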

The proposed patch doesn't seem to solve the problem, however: something
is still taking more memory than expected.