Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Test bug?
From: jody (jody.xha_at_[hidden])
Date: 2009-02-05 08:36:12


Hi Gabriele,
In Open MPI 1.3 the order doesn't matter:

[jody_at_aim-plankton ~]$ mpirun -np 4 mpi_test5
aim-plankton.uzh.ch: rank 0 : MPI_Test # 0 ok. [3...3]
aim-plankton.uzh.ch: rank 1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank 2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: rank 3 : MPI_Test # 0 ok. [2...2]
[jody_at_aim-plankton ~]$ mpirun -np 4 mpi_test5_rev
aim-plankton.uzh.ch: rank 1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank 2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: rank 3 : MPI_Test # 0 ok. [2...2]
aim-plankton.uzh.ch: rank 0 : MPI_Test # 0 ok. [3...3]

Jody
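
For anyone following the thread, the sketch below shows roughly the kind of ring test being discussed. Gabriele's actual mpi_test5 source is in his attachment and is not reproduced here, so the neighbour computation, the buffer length and the output format are my own reconstruction; building with -DREVERSED (a switch invented for this sketch) swaps the two calls, which is essentially what mpi_test5_rev does.

#include <mpi.h>
#include <stdio.h>

#define BUFLEN 10   /* "10 integers", well below the eager limit */

int main(int argc, char **argv)
{
    int rank, size, namelen, flag = 0, ntest = 0;
    int buffer_send[BUFLEN], buffer_recv[BUFLEN];
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &namelen);

    /* Neighbour choice is an assumption; the real mpi_test5 may differ. */
    int send_to   = (rank + 1) % size;         /* right neighbour */
    int recv_from = (rank + size - 1) % size;  /* left neighbour  */
    for (int i = 0; i < BUFLEN; i++)
        buffer_send[i] = rank;

#ifndef REVERSED
    /* Gabriele's original order: post the receive, then send. */
    MPI_Irecv(buffer_recv, BUFLEN, MPI_INT, recv_from, 0, MPI_COMM_WORLD, &request);
    MPI_Send(buffer_send, BUFLEN, MPI_INT, send_to, 0, MPI_COMM_WORLD);
#else
    /* Reversed order, as in mpi_test5_rev. */
    MPI_Send(buffer_send, BUFLEN, MPI_INT, send_to, 0, MPI_COMM_WORLD);
    MPI_Irecv(buffer_recv, BUFLEN, MPI_INT, recv_from, 0, MPI_COMM_WORLD, &request);
#endif

    /* Poll until the receive is reported complete, counting how many
       MPI_Test calls returned "not yet complete". */
    while (!flag) {
        MPI_Test(&request, &flag, &status);
        if (!flag)
            ntest++;
    }
    printf("%s: rank %d : MPI_Test # %d ok. [%d...%d]\n",
           host, rank, ntest, buffer_recv[0], buffer_recv[BUFLEN - 1]);

    MPI_Finalize();
    return 0;
}

Compile and run it with, e.g., mpicc -o mpi_test5 ring_test.c and mpirun -np 4 mpi_test5.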

On Thu, Feb 5, 2009 at 11:48 AM, Gabriele Fatigati <g.fatigati_at_[hidden]> wrote:
> Hi Jody,
> thanks for your quick reply. But what's the difference?
>
> 2009/2/5 jody <jody.xha_at_[hidden]>:
>> Hi Gabriele
>>
>> Shouldn't you reverse the order of your send and recv from
>> MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag, MPI_COMM_WORLD, &request);
>> MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>>
>> to
>>
>> MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>> MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag, MPI_COMM_WORLD, &request);
>>
>> ?
>> Jody
>>
>> On Thu, Feb 5, 2009 at 11:37 AM, Gabriele Fatigati <g.fatigati_at_[hidden]> wrote:
>>> Dear Open MPI developers,
>>> I have found some very strange behaviour of MPI_Test. I'm using Open MPI
>>> 1.2 over an InfiniBand interconnect.
>>>
>>> I tried to implement a network check with a series of MPI_Irecv and
>>> MPI_Send calls between processes, testing the completion of each Irecv
>>> with MPI_Test. Strangely, I've noticed that when I launch the test on a
>>> single node it works fine, but if I launch it over 2 or more processes
>>> on different nodes, MPI_Test fails many times before reporting that the
>>> Irecv has finished.
>>>
>>> I've seen it still failing after a minute, with a very small buffer
>>> (less than the eager limit). It is impossible that the communication is
>>> still pending after a minute when only 10 integers were sent. To work
>>> around this I had to put MPI_Test in a loop, and only after 3 or 4
>>> MPI_Test calls does it report that the Irecv finished successfully. Is
>>> it possible that MPI_Test needs to be called many times even if the
>>> communication has already finished?
>>>
>>> My simple C test program is attached.
>>>
>>> Thanks in advance.
>>>
>>> --
>>> Ing. Gabriele Fatigati
>>>
>>> Parallel programmer
>>>
>>> CINECA Systems & Tecnologies Department
>>>
>>> Supercomputing Group
>>>
>>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>>
>>> www.cineca.it Tel: +39 051 6171722
>>>
>>> g.fatigati [AT] cineca.it
>>>
>
>
>
> --
> Ing. Gabriele Fatigati
>
> Parallel programmer
>
> CINECA Systems & Tecnologies Department
>
> Supercomputing Group
>
> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>
> www.cineca.it Tel: +39 051 6171722
>
> g.fatigati [AT] cineca.it
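
PS: the kind of MPI_Test loop Gabriele describes could look something like the sketch below; the retry bound, the MPI_Wait fallback and the helper name wait_with_test are my own additions and are not taken from his attachment. As far as I know, a single MPI_Test is only a non-blocking completion check, and the standard allows it to return flag = 0 several times before a request is marked complete (repeated calls are guaranteed to succeed eventually), so needing 3 or 4 calls is surprising for a 10-integer message but not by itself a violation.

#include <mpi.h>

/* Poll MPI_Test up to max_tries times; if the request still has not
 * completed, fall back to a blocking MPI_Wait.  Returns the number of
 * MPI calls that were needed. */
static int wait_with_test(MPI_Request *request, int max_tries)
{
    MPI_Status status;
    int flag = 0;
    int calls = 0;

    while (!flag && calls < max_tries) {
        MPI_Test(request, &flag, &status);   /* single non-blocking check */
        calls++;
    }
    if (!flag) {
        MPI_Wait(request, &status);          /* give up polling and block */
        calls++;
    }
    return calls;
}

In the ring test sketched earlier, this would replace the while (!flag) polling loop, e.g. printf("completed after %d call(s)\n", wait_with_test(&request, 100));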