Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] is loop unrolling safe for MPI logic?
From: Tim Prince (n8tm_at_[hidden])
Date: 2010-07-19 02:15:32


On 7/18/2010 9:09 AM, Anton Shterenlikht wrote:
> On Sat, Jul 17, 2010 at 09:14:11AM -0700, Eugene Loh wrote:
>
>> Jeff Squyres wrote:
>>
>>> On Jul 17, 2010, at 4:22 AM, Anton Shterenlikht wrote:
>>>
>>>> Is loop vectorisation/unrolling safe for MPI logic?
>>>> I presume it is, but are there situations where
>>>> loop vectorisation could e.g. violate the order
>>>> of execution of MPI calls?
>>>>
>>> I *assume* that the intel compiler will not unroll loops that contain MPI function calls. That's obviously an assumption, but I would think that unless you put some pragmas in there that tell the compiler that it's safe to unroll, the compiler will be somewhat conservative about what it automatically unrolls.
>>>
>> More generally, a Fortran compiler that optimizes aggressively could
>> "break" MPI code.
>>
>> http://www.mpi-forum.org/docs/mpi-20-html/node236.htm#Node241
>>
>> That said, you may not need to worry about this in your particular case.
>>
> This is a very important point, many thanks Eugene.
> A Fortran MPI programmer definitely needs to pay attention to this.
>
> MPI-2.2 provides a slightly updated version of this guide:
>
> http://www.mpi-forum.org/docs/mpi22-report/node343.htm#Node348
>
> many thanks
> anton
>
>
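The register-optimization hazard described in the MPI documents linked above can be sketched roughly as follows. This is a minimal illustration, not code from this thread; the ranks, tag, and values are invented:

```fortran
! Sketch of the Fortran register-optimization hazard with a
! nonblocking receive (illustrative only).
program register_hazard
  use mpi
  implicit none
  integer :: ierr, rank, req
  integer :: status(MPI_STATUS_SIZE)
  real :: buf

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) then
    buf = 0.0
    call MPI_IRECV(buf, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, req, ierr)
    call MPI_WAIT(req, status, ierr)
    ! buf is not an argument of MPI_WAIT, so an aggressive compiler
    ! may load buf into a register *before* the wait and print the
    ! stale 0.0 here: it cannot see that the MPI library updates
    ! buf asynchronously between MPI_IRECV and MPI_WAIT.
    print *, buf
  else if (rank == 1) then
    buf = 1.0
    call MPI_SEND(buf, 1, MPI_REAL, 0, 99, MPI_COMM_WORLD, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program register_hazard
```

The workarounds discussed in the linked MPI documents include declaring the buffer VOLATILE (where supported) or passing it to an empty external subroutine after the wait, so the compiler must assume it may have changed.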
 From the point of view of the compiler developers, auto-vectorization
and unrolling are distinct questions. An MPI or other non-inlined
function call would not be subject to vectorization. While
auto-vectorization or unrolling may expose latent bugs, MPI is not
particularly likely to make them worse. You have made some misleading
statements about vectorization along the way, but these aren't likely to
relate to MPI problems.
Upon my return, I will be working on a case which was developed and
tested successfully under ifort 10.1 and other compilers, but which is
failing under current ifort versions. Current Intel MPI throws a
run-time error indicating that the receive buffer has been lost; the
Open MPI failure is more obscure. I will have to change the code to use
distinct tags for each MPI send/receive pair in order to track it down,
though I'm not counting on that magically making the bug go away. ifort
is not particularly aggressive about unrolling loops which contain MPI
calls, but I agree that this must be considered.
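The distinct-tags approach might look like the following sketch (a hypothetical exchange between two ranks; the tag names and values are invented, not from the failing case):

```fortran
program tagged_pairs
  use mpi
  implicit none
  ! One unique tag per logical send/receive pair, so a run-time
  ! failure can be traced to a specific exchange rather than to
  ! whichever message happened to match first.
  integer, parameter :: TAG_FORWARD = 101, TAG_BACK = 102
  integer :: ierr, rank, status(MPI_STATUS_SIZE)
  real :: outv, inv

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  outv = real(rank)

  ! Ordering avoids deadlock: rank 0 sends first, rank 1 receives first.
  if (rank == 0) then
    call MPI_SEND(outv, 1, MPI_REAL, 1, TAG_FORWARD, MPI_COMM_WORLD, ierr)
    call MPI_RECV(inv, 1, MPI_REAL, 1, TAG_BACK, MPI_COMM_WORLD, status, ierr)
  else if (rank == 1) then
    call MPI_RECV(inv, 1, MPI_REAL, 0, TAG_FORWARD, MPI_COMM_WORLD, status, ierr)
    call MPI_SEND(outv, 1, MPI_REAL, 0, TAG_BACK, MPI_COMM_WORLD, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program tagged_pairs
```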

-- 
Tim Prince