Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Open-MPI and gprof
From: jody (jody.xha_at_[hidden])
Date: 2009-04-23 12:22:05


@Daniel: thanks - I could compile vprof now!
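
For the archive, the per-node gprof recipe pieced together from the thread below looks roughly like this (a sketch: my_app, the mpirun flags, and the -np count are placeholders; GMON_OUT_PREFIX is honored by glibc's gprof runtime, which appends ".<pid>" itself):

```shell
# Build with profiling hooks (placeholder program name):
#   mpicc -pg -O2 -o my_app my_app.c
#
# Give each host a unique profile prefix; the profiling runtime appends
# ".<pid>", so every process writes its own gmon.out-<host>.<pid> file.
export GMON_OUT_PREFIX="gmon.out-$(uname -n)"
echo "$GMON_OUT_PREFIX"
#
# Run, then inspect each node's profile separately:
#   mpirun -np 4 ./my_app
#   gprof ./my_app gmon.out-<host>.<pid>
```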

Jody

On Thu, Apr 23, 2009 at 12:35 PM, Daniel Spångberg <daniels_at_[hidden]> wrote:
> Regarding the miscompilation of vprof and bfd_get_section_size_before_reloc:
> simply change the call from bfd_get_section_size_before_reloc to
> bfd_get_section_size in exec.cc and recompile.
>
> Daniel Spångberg
>
> On 2009-04-23 10:16:07, jody <jody.xha_at_[hidden]> wrote:
>
>> Hi all
>> Thanks for all the input.
>>
>> I have not gotten around to trying any of the tools (Sun Studio, Tau or
>> vprof) yet.
>> Actually, I can't compile vprof - make fails with
>>  exec.cc: In static member function ‘static void
>> BFDExecutable::find_address_in_section(bfd*, asection*, void*)’:
>>  exec.cc:144: error: ‘bfd_get_section_size_before_reloc’ was not
>> declared in this scope
>> Does anybody have an idea how to get around this problem?
>>
>> Anyway, the GMON_OUT_PREFIX hint was very helpful - thanks, Jason!
>>
>> If I get vprof or one of the other tools running, I'll write something up -
>> perhaps profiling would be a worthy subject for a FAQ entry...
>>
>> Thanks
>>  Jody
>>
>> On Thu, Apr 23, 2009 at 9:12 AM, Daniel Spångberg <daniels_at_[hidden]>
>> wrote:
>>>
>>> I have used vprof, which is free, and also works well with Open MPI:
>>> http://sourceforge.net/projects/vprof/
>>>
>>> One might need slight code modifications to get output, depending on the
>>> compilers used, such as adding
>>> vmon_begin();
>>> to start profiling and
>>> vmon_done_task(rank);
>>> to end profiling, where rank is the integer MPI rank.
>>>
>>> vprof can also use papi, but I have not (yet) tried this.
>>>
>>> Daniel Spångberg
>>>
>>>
>>>> On 2009-04-23 02:00:01, Brock Palen <brockp_at_[hidden]> wrote:
>>>
>>>> There is a tool (not free) that I have liked; it works great with OMPI
>>>> and can use gprof information.
>>>>
>>>> http://www.allinea.com/index.php?page=74
>>>>
>>>> Also, I am not sure, but Tau (which is free) might support some gprof
>>>> hooks.
>>>> http://www.cs.uoregon.edu/research/tau/home.php
>>>>
>>>> Brock Palen
>>>> www.umich.edu/~brockp
>>>> Center for Advanced Computing
>>>> brockp_at_[hidden]
>>>> (734)936-1985
>>>>
>>>>
>>>>
>>>> On Apr 22, 2009, at 7:37 PM, jgans wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Yes, you can profile MPI applications by compiling with -pg. However, by
>>>>> default each process will produce an output file called "gmon.out",
>>>>> which is
>>>>> a problem if all processes are writing to the same global file system
>>>>> (i.e.
>>>>> all processes will try to write to the same file).
>>>>>
>>>>> There is an undocumented feature of gprof that allows you to specify
>>>>> the
>>>>> filename for profiling output via the environment variable
>>>>> GMON_OUT_PREFIX.
>>>>> For example, one can set this variable in the .bashrc file on every
>>>>> node to ensure unique profile filenames, i.e.:
>>>>>
>>>>> export GMON_OUT_PREFIX='gmon.out-'`/bin/uname -n`
>>>>>
>>>>> The filename will appear as GMON_OUT_PREFIX.pid, where pid is the
>>>>> process id on a given node (so this also works when multiple processes
>>>>> run on a single host).
>>>>>
>>>>> Regards,
>>>>>
>>>>> Jason
>>>>>
>>>>> Tiago Almeida wrote:
>>>>>>
>>>>>> Hi,
>>>>>> I've never done this, but I believe that an executable compiled with
>>>>>> profiling support (-pg) will generate the gmon.out file in its current
>>>>>> directory, regardless of running under MPI or not. So I think that
>>>>>> you'll have a gmon.out on each node, and therefore you can "gprof" them
>>>>>> independently.
>>>>>>
>>>>>> Best regards,
>>>>>> Tiago Almeida
>>>>>> ---------------------------------
>>>>>> jody wrote:
>>>>>>>
>>>>>>> Hi
>>>>>>> I wanted to profile my application using gprof, and proceeded as when
>>>>>>> profiling a normal application:
>>>>>>> - compile everything with option -pg
>>>>>>> - run the application
>>>>>>> - call gprof
>>>>>>> This returns normal-looking output, but I don't know
>>>>>>> whether this is the data for node 0 only or accumulated for all
>>>>>>> nodes.
>>>>>>>
>>>>>>> Does anybody have experience in profiling parallel applications?
>>>>>>> Is there a way to have profile data for each node separately?
>>>>>>> If not, is there another profiling tool which can?
>>>>>>>
>>>>>>> Thank You
>>>>>>>  Jody
>>>>>>> _______________________________________________
>>>>>>> users mailing list
>>>>>>> users_at_[hidden]
>>>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Daniel Spångberg
>>> Materialkemi
>>> Uppsala Universitet
>>>
>>
>