
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Issue with Profiling Fortran code
From: Nick Wright (nwright_at_[hidden])
Date: 2008-12-05 12:36:57


> I hope you are aware that *many* tools and applications actually profile
> the Fortran MPI layer by intercepting the C function calls. This allows
> them to avoid dealing with the f2c translation of MPI objects and the
> name-mangling issue. Would there be a way to have both options, e.g. as
> a configure flag? The current commit basically breaks all of these
> applications...

Edgar,

I haven't seen the fix so I can't comment on that.

Anyway, in general this can't be true. Such a profiling tool would work
*only* with Open MPI if it were written that way today. I guess such a
fix will break Open MPI-specific tools (are there any?).

For MPICH, for example, one must provide a hook into e.g. mpi_comm_rank_,
as that calls PMPI_Comm_rank (as it should); thus if one were only
intercepting C calls, one would not see any Fortran profiling information.
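
To illustrate, here is a minimal sketch of the kind of Fortran-layer hook
one has to provide in that situation. It assumes the common
trailing-underscore name mangling and uses the standard MPI_Fint type and
MPI_Comm_f2c handle conversion; everything else about it is illustrative,
not a complete tool.

#include <stdio.h>
#include "mpi.h"

/* Fortran-layer hook: needed because the implementation's own
   mpi_comm_rank_ calls PMPI_Comm_rank directly, so a C-only
   interceptor never sees Fortran calls. */
void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int crank;
    printf("Fortran mpi_comm_rank intercepted\n");
    *ierr = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
}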

Nick.

> George Bosilca wrote:
>> Nick,
>>
>> Thanks for noticing this. It's unbelievable that nobody noticed that
>> over the last 5 years. Anyway, I think we have a one-line fix for this
>> problem. I'll test it ASAP and then push it into 1.3.
>>
>> Thanks,
>> george.
>>
>> On Dec 5, 2008, at 10:14, Nick Wright wrote:
>>
>>> Hi Anthony
>>>
>>> That will work, yes, but unfortunately it's not portable to other MPIs
>>> that do implement the profiling layer correctly.
>>>
>>> I guess we will just need to detect that we are using Open MPI when
>>> our tool is configured and add some macros to deal with that
>>> accordingly. Is there an easy way to do this built into Open MPI?
>>>
>>> Thanks
>>>
>>> Nick.
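
Regarding the question above about detecting Open MPI when the tool is
configured: one possible probe, sketched on the assumption that the
configure step can compile a small test against mpi.h, is to key off
OMPI_MAJOR_VERSION, which Open MPI's mpi.h defines and other
implementations should not. TOOL_HAVE_OPENMPI is just an illustrative
name.

#include "mpi.h"

/* Configure-time probe sketch: Open MPI defines OMPI_MAJOR_VERSION
   in mpi.h, so its presence identifies Open MPI. */
#ifdef OMPI_MAJOR_VERSION
#  define TOOL_HAVE_OPENMPI 1
#else
#  define TOOL_HAVE_OPENMPI 0
#endif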
>>>
>>> Anthony Chan wrote:
>>>> Hope I didn't misunderstand your question. If you implement
>>>> your profiling library in C, where you do your real instrumentation,
>>>> you don't need to implement the Fortran layer; you can simply link
>>>> with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.
>>>> <OMPI>/bin/mpif77 -o foo foo.f -L<OMPI>/lib -lmpi_f77 -lYourProfClib
>>>> where libYourProfClib.a is your profiling tool written in C. If you
>>>> don't want to intercept the MPI call twice for a Fortran program,
>>>> you need to implement the Fortran layer. In that case, I would think
>>>> you can just call the C version of PMPI_xxx directly from your
>>>> Fortran layer, e.g.
>>>> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
>>>>     printf("mpi_comm_rank call successfully intercepted\n");
>>>>     *info = PMPI_Comm_rank(*comm, rank);
>>>> }
>>>> A.Chan
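
For the first approach described above (C-only wrappers plus -lmpi_f77),
the pieces might look roughly like the following; the file and library
names (myprof.c, libmyprof.a) are illustrative, not anything the thread
prescribes.

/* myprof.c -- C-only interception; Fortran calls reach it through the
   -lmpi_f77 wrapper layer once the link resolves MPI_Comm_rank to this
   definition. */
#include <stdio.h>
#include "mpi.h"

int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    int ret = PMPI_Comm_rank(comm, rank);
    printf("MPI_Comm_rank intercepted (rank %d)\n", *rank);
    return ret;
}

built and linked along the lines of

mpicc -c myprof.c
ar rcs libmyprof.a myprof.o
<OMPI>/bin/mpif77 -o foo foo.f -L<OMPI>/lib -L. -lmpi_f77 -lmyprof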
>>>> ----- "Nick Wright" <nwright_at_[hidden]> wrote:
>>>>> Hi
>>>>>
>>>>> I am trying to use the PMPI interface with Open MPI to profile a
>>>>> Fortran program.
>>>>>
>>>>> I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
>>>>> switched on.
>>>>>
>>>>> The problem seems to be that if one, e.g., intercepts the call to
>>>>> mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_,
>>>>> this then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as
>>>>> it should.
>>>>>
>>>>> So if one wants to create a library that can profile C and Fortran
>>>>> codes at the same time, one ends up intercepting the MPI call twice,
>>>>> which is not desirable and not what should happen (and indeed does
>>>>> not happen in other MPI implementations).
>>>>>
>>>>> A simple example to illustrate this is below. If somebody knows of a
>>>>> fix to avoid this issue, that would be great!
>>>>>
>>>>> Thanks
>>>>>
>>>>> Nick.
>>>>>
>>>>> pmpi_test.c: mpicc pmpi_test.c -c
>>>>>
>>>>> #include <stdio.h>
>>>>> #include "mpi.h"
>>>>>
>>>>> /* Fortran PMPI entry point provided by the MPI library (declared
>>>>>    here to avoid an implicit declaration) */
>>>>> void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);
>>>>>
>>>>> /* Fortran hook */
>>>>> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
>>>>>     printf("mpi_comm_rank call successfully intercepted\n");
>>>>>     pmpi_comm_rank_(comm, rank, info);
>>>>> }
>>>>>
>>>>> /* C hook */
>>>>> int MPI_Comm_rank(MPI_Comm comm, int *rank) {
>>>>>     printf("MPI_comm_rank call successfully intercepted\n");
>>>>>     return PMPI_Comm_rank(comm, rank);
>>>>> }
>>>>>
>>>>> hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o
>>>>>
>>>>>       program hello
>>>>>       implicit none
>>>>>       include 'mpif.h'
>>>>>       integer ierr
>>>>>       integer myid, nprocs
>>>>>       character*24 fdate, host
>>>>>       call MPI_Init( ierr )
>>>>>       myid = 0
>>>>>       call mpi_comm_rank( MPI_COMM_WORLD, myid, ierr )
>>>>>       call mpi_comm_size( MPI_COMM_WORLD, nprocs, ierr )
>>>>>       call getenv( 'HOST', host )
>>>>>       write (*,*) 'Hello World from proc', myid, ' out of', nprocs, host
>>>>>       call mpi_finalize( ierr )
>>>>>       end
>>>>>