
Open MPI User's Mailing List Archives



Subject: Re: [OMPI users] MPI stats argument in Fortran mpi module
From: Jed Brown (jedbrown_at_[hidden])
Date: 2014-01-08 21:08:41


"Jeff Squyres (jsquyres)" <jsquyres_at_[hidden]> writes:
>> Totally superficial, just passing "status(1)" instead of "status" or
>> "status(1:MPI_STATUS_SIZE)".
>
> That's a different type (INTEGER scalar vs. INTEGER array). So the
> compiler complaining about that is actually correct.

Yes, exactly.
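
For concreteness, a minimal sketch of the two call forms, assuming an
MPI whose mpi module provides an explicit interface for MPI_Recv (the
situation discussed in this thread); the program and names are made up
for illustration:

    program status_types
      use mpi
      implicit none
      integer :: sbuf(1), rbuf(1), req, ierr
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init(ierr)
      sbuf(1) = 42
      call MPI_Isend(sbuf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req, ierr)
      ! Fine: status is an INTEGER array of size MPI_STATUS_SIZE.
      call MPI_Recv(rbuf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr)
      ! The complaint from this thread: status(1) is an INTEGER scalar,
      ! not an array, so it does not match the explicit interface, even
      ! though by-reference argument passing would (most likely) make it
      ! work with the implicit interface from mpif.h.
      ! call MPI_Recv(rbuf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status(1), ierr)
      call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
      call MPI_Finalize(ierr)
    end program status_types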

> Under the covers, Fortran will (most likely) pass both by reference,
> so they'll both actually (most likely) *work* if you build with an MPI
> that doesn't provide an interface for MPI_Recv, but passing status(1)
> is actually incorrect Fortran.

Prior to slice notation, this would have been the only way to build an
array of statuses: receives go into status(1:MPI_STATUS_SIZE),
status(1+MPI_STATUS_SIZE:2*MPI_STATUS_SIZE), etc. Due to
pass-by-reference semantics, I think this will always work, despite not
type-checking against explicit interfaces. I don't know what the
language standard says about backward compatibility of such constructs,
but presumably we need to know the dialect to decide whether it's
acceptable. (I actually don't know whether the Fortran 77 standard
defines the behavior when passing status(1), status(1+MPI_STATUS_SIZE),
etc., or whether it works only as a consequence of the only reasonable
implementation.)
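
A hypothetical sketch of that pre-slice idiom, using the implicit
interfaces from mpif.h (the flat-array layout is the point; the
self-messaging pattern and names are made up for illustration):

    program flat_statuses
      implicit none
      include 'mpif.h'
      integer, parameter :: n = 3
      ! n statuses stored back to back in one flat INTEGER array.
      integer :: stats(MPI_STATUS_SIZE*n)
      integer :: reqs(n), sbuf(n), rbuf(n)
      integer :: i, ierr

      call MPI_Init(ierr)
      do i = 1, n
        sbuf(i) = i
        call MPI_Isend(sbuf(i), 1, MPI_INTEGER, 0, i, MPI_COMM_WORLD, &
                       reqs(i), ierr)
      end do
      do i = 1, n
        ! Each receive is handed the element where its status block
        ! begins; with an implicit interface this relies on
        ! pass-by-reference / sequence-association semantics.
        call MPI_Recv(rbuf(i), 1, MPI_INTEGER, 0, i, MPI_COMM_WORLD, &
                      stats(1 + (i-1)*MPI_STATUS_SIZE), ierr)
      end do
      do i = 1, n
        ! MPI_TAG indexes into each status block, showing the layout.
        print *, 'receive', i, 'matched tag', &
                 stats((i-1)*MPI_STATUS_SIZE + MPI_TAG)
      end do
      call MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE, ierr)
      call MPI_Finalize(ierr)
    end program flat_statuses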

> I think you're saying that you agree with my above statements about
> the different types, and you're just detailing how you got to asking
> about WTF we were providing an MPI_Recv interface in the first place.
> Kumbaya. :-)

Indeed.


