Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Mixing the FORTRAN and C APIs.
From: Tim Prince (tcprince_at_[hidden])
Date: 2011-05-06 13:50:58


On 5/6/2011 10:22 AM, Tim Hutt wrote:
> On 6 May 2011 16:45, Tim Hutt<tdhutt_at_[hidden]> wrote:
>> On 6 May 2011 16:27, Tim Prince<tcprince_at_[hidden]> wrote:
>>> If you want to use the MPI Fortran library, don't convert your Fortran to C.
>>> It's difficult to understand why you would consider f2c a "simplest way,"
>>> but at least it should allow you to use ordinary C MPI function calls.
>>
>> Sorry, maybe I wasn't clear. Just to clarify, all of *my* code is
>> written in C++ (because I don't actually know Fortran), but I want to
>> use some function from PARPACK which is written in Fortran.
>
> Hmm, I converted my C++ code to use the C Open MPI interface instead,
> and now I get link errors (undefined references). I remembered I've
> been linking with -lmpi -lmpi_f77, so maybe I need to also link with
> -lmpi_cxx or -lmpi++ ... what exactly does each of these libraries
> contain?
>
> Also I have run into the problem that the communicators are of type
> "MPI_Comm" in C, and "integer" in Fortran... I am using MPI_COMM_WORLD
> in each case so I assume that will end up referring to the same
> thing... but maybe you really can't mix Fortran and C. Expert opinion
> would be very very welcome!
>
If you use the Open MPI compiler wrappers (mpicc, or mpiCC/mpicxx for
C++) to compile and link, the MPI libraries should be taken care of
automatically.
The style of an f2c translation is debatable, but it does give you an
#include "f2c.h" (or "g2c.h") that maps the Fortran data types to their
legacy C equivalents. By legacy I mean that in the f2c era, interoperable
C data types in Fortran via USE iso_c_binding had not yet been envisioned.
Even when you are using those legacy interfaces, you would still want to
use the MPI header data types on both the Fortran and the C side.
Slip-ups in MPI data types often lead to run-time errors. If you have
an error-checking MPI library, such as Intel MPI, you get a somewhat
better explanation at the failure point.
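
For what it's worth, a minimal sketch of passing a communicator from C
to a Fortran routine would look something like the following. The
standard way to convert the handle is MPI_Comm_c2f(); fsolve_ here is
only a hypothetical stand-in for whatever PARPACK entry point you
actually call, the trailing underscore depends on your Fortran
compiler's name mangling, and in C++ the declaration also needs to be
wrapped in extern "C".

#include <mpi.h>

/* Hypothetical Fortran routine standing in for a PARPACK call:
 *   subroutine fsolve(comm, n)
 *   integer comm, n
 * A Fortran INTEGER corresponds to MPI_Fint on the C side. */
extern void fsolve_(MPI_Fint *comm, MPI_Fint *n);

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Convert the C handle to a Fortran integer handle rather than
     * passing MPI_COMM_WORLD itself; the two representations are not
     * interchangeable in general. */
    MPI_Fint fcomm = MPI_Comm_c2f(MPI_COMM_WORLD);
    MPI_Fint n = 100;
    fsolve_(&fcomm, &n);

    MPI_Finalize();
    return 0;
}

One common way to link such a mixed C/Fortran program is to compile the
C side with mpicc, the Fortran side with mpif77, and do the final link
with the Fortran wrapper so that both the MPI libraries and the Fortran
run-time get pulled in.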

-- 
Tim Prince