Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] About MPI_TAG_UB
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-09-28 09:24:15


Beware that Open MPI uses negative tag values internally; your scheme might occasionally conflict with them.

On Sep 28, 2012, at 9:08 AM, Sébastien Boisvert wrote:

> Hello,
>
> My application has 191 MPI tags allocated with allocateMessageTagHandle, so
> 7 bits is not enough.
>
> Indeed, with this MPI_TAG_UB value in Open-MPI, tags can range from 0 to
> 2147483647 inclusive. I had misused the returned pointer.
>
> In Open-MPI, MPI_ANY_TAG is -1. I removed the boundary check in MPI_Isend and
> MPI_Recv to allow values from -2147483648 to 2147483647 inclusive.
>
> https://raw.github.com/sebhtml/patches/master/ompi-1.6.2-ray-4096-routing.patch
>
> As long as my tag is not MPI_ANY_TAG, I guess it should work fine although
> it is not MPI-compliant. I will test that.
>
> On 28/09/12 03:50 AM, Iliev, Hristo wrote:
>> Hello,
>>
>> MPI_TAG_UB in Open MPI is INT_MAX == 2^31-1 == 2147483647. The value of
>> 17438272 (0x10A1640) is a bit strange for MPI_TAG_UB; it looks more like a
>> pointer into the heap. In other words, you have missed the fact that the
>> attribute value returned by MPI_Comm_get_attr / MPI_Attr_get is a pointer
>> to the actual value (for MPI_TAG_UB, a pointer to int).
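
For reference, a minimal sketch of reading MPI_TAG_UB the intended way; the attribute value arrives as a pointer to int and must be dereferenced:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int *tag_ub_ptr = NULL;   /* get_attr hands back a pointer, not the value itself */
      int flag = 0;

      MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub_ptr, &flag);
      if (flag) {
          printf("MPI_TAG_UB = %d\n", *tag_ub_ptr);  /* dereference to get the upper bound */
      }

      MPI_Finalize();
      return 0;
  }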
>>
>> MPI_TAG_UB is a predefined attribute and, according to §8.1.2 of the MPI
>> standard, its value cannot be changed by the user.
>>
>> You have to find another solution, e.g. reduce the tag space to 7 bits or
>> put the routing info inside the message payload.
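
As a rough sketch of the second suggestion (the struct and function names here are hypothetical, not from Ray), the routing info could travel in front of the payload under a single fixed MPI tag; the receiver peels the header off before handing the payload to the application:

  #include <mpi.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical header carrying what was previously packed into the tag. */
  struct RouteHeader {
      int real_tag;     /* application tag */
      int real_source;  /* true source rank */
      int real_dest;    /* true destination rank */
  };

  /* Prepend the header to the (possibly empty) payload and send it to the next hop. */
  static void send_routed(const void *payload, int payload_bytes,
                          int real_tag, int real_source, int real_dest,
                          int next_hop, MPI_Comm comm)
  {
      struct RouteHeader hdr = { real_tag, real_source, real_dest };
      int total = (int)sizeof hdr + payload_bytes;
      char *buf = malloc(total);

      memcpy(buf, &hdr, sizeof hdr);
      if (payload_bytes > 0)
          memcpy(buf + sizeof hdr, payload, payload_bytes);

      /* One well-known tag is enough; the real tag is inside the header. */
      MPI_Send(buf, total, MPI_BYTE, next_hop, 0, comm);
      free(buf);
  }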
>>
>> Best regards,
>> Hristo Iliev
>> --
>> Hristo Iliev, Ph.D. -- High Performance Computing
>> RWTH Aachen University, Center for Computing and Communication
>> Rechen- und Kommunikationszentrum der RWTH Aachen
>> Seffenter Weg 23, D 52074 Aachen (Germany)
>>
>>> -----Original Message-----
>>> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]]
>>> On Behalf Of Sébastien Boisvert
>>> Sent: Friday, September 28, 2012 1:22 AM
>>> To: users_at_[hidden]
>>> Subject: [OMPI users] About MPI_TAG_UB
>>>
>>> Hello,
>>>
>>> I am running Ray (a distributed genomics application) with Open-MPI on
>>> 2048 processes and everything runs fine. Ray has an any-to-any
>>> communication pattern.
>>> To avoid using too much memory, I implemented a virtual message router.
>>>
>>> Without the virtual message router, I get messages like these:
>>>
>>> [cp2558][[30209,1],0][connect/btl_openib_connect_oob.c:490:qp_create_one] error creating qp errno says Cannot allocate memory
>>>
>>> We did some tests on the Cray XE6 on 4096 processing elements (4096 MPI
>>> ranks) without the virtual message router and everything runs fine as is. So
>>> using the virtual message router is not required.
>>>
>>> The real message tag, the real source and the real destination are stored in
>>> the MPI tag. I know, this is ugly, but it works. I can not store this
>>> information in the message buffer because the buffer can be NULL.
>>>
>>> bits 0 to 7:   tag (8 bits, values from 0 to 255, 256 possible values)
>>> bits 8 to 19:  true source (12 bits, values from 0 to 4095, 4096 possible values)
>>> bits 20 to 31: true destination (12 bits, values from 0 to 4095, 4096 possible values)
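
Spelled out as a sketch (function names are illustrative), that packing looks like the following; note that a true destination of 2048 or more sets bit 31, so the packed value goes negative, which is presumably what triggers MPI_ERR_TAG:

  /* Pack tag (8 bits), true source (12 bits) and true destination (12 bits)
   * into one 32-bit value used as the MPI tag. */
  static inline int pack_routing_tag(int tag, int source, int destination)
  {
      unsigned int packed = ((unsigned int)tag          & 0xFFu)
                          | (((unsigned int)source      & 0xFFFu) << 8)
                          | (((unsigned int)destination & 0xFFFu) << 20);
      return (int)packed;  /* destinations >= 2048 set bit 31: the result is negative */
  }

  static inline void unpack_routing_tag(int packed, int *tag, int *source, int *destination)
  {
      unsigned int u = (unsigned int)packed;
      *tag         = (int)(u         & 0xFFu);
      *source      = (int)((u >> 8)  & 0xFFFu);
      *destination = (int)((u >> 20) & 0xFFFu);
  }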
>>>
>>> Without the virtual router, my code is compliant, since the value behind
>>> MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, ...) is guaranteed to be at
>>> least 32767 and my tags are <= 255.
>>>
>>>
>>> When I try jobs with 4096 processes with the virtual message router, I get
>>> the error:
>>>
>>> MPI_ERR_TAG: invalid tag.
>>>
>>> Without the virtual message router I get:
>>>
>>> [cp2558][[30209,1],0][connect/btl_openib_connect_oob.c:490:qp_create_one] error creating qp errno says Cannot allocate memory
>>>
>>> With Open-MPI 1.5.4, the upper bound is 17438272 (at least in our build).
>>> That explains MPI_ERR_TAG.
>>>
>>>
>>> My 2 questions:
>>>
>>> 1. Is there a better way to store routing information?
>>>
>>> 2. Can I create my own communicator and set its MPI_TAG_UB to whatever I
>>> want?
>>>
>>>
>>> Thanks!
>>>
>>> ***
>>> Sébastien Boisvert
>>> Ph.D. student
>>> http://boisvert.info/
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/