
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Error in VT
From: Matthias Jurenz (matthias.jurenz_at_[hidden])
Date: 2009-04-01 05:17:24


Hi Leonardo,

I guess that your program uses POSIX threads and needs the MPI thread
support level MPI_THREAD_MULTIPLE, right?
Unfortunately, the version of VT integrated into OMPI supports neither
Pthreads nor any MPI thread level.

The latest stand-alone version of VT (5.6.3) does support Pthreads and
the MPI thread support levels MPI_THREAD_SINGLE and MPI_THREAD_FUNNELED.
So if you can relax the MPI thread level requirement of your program to
MPI_THREAD_SINGLE or MPI_THREAD_FUNNELED, tracing of your code should
work.
You can download the latest VT version at
http://www.tu-dresden.de/zih/vampirtrace. Please give it a try.

Regards,
Matthias Jurenz

On Mon, 2009-03-30 at 19:04 +0200, Leonardo Fialho wrote:
> Hi Jeff,
>
> There are...
>
> Thanks a lot,
> Leonardo
>
> Jeff Squyres wrote:
> > Can you send all the information listed here:
> >
> > http://www.open-mpi.org/community/help/
> >
> >
> > On Mar 30, 2009, at 11:46 AM, Leonardo Fialho wrote:
> >
> >> Hi,
> >>
> >> I'm experiencing the following errors when using Open MPI release
> >> 1.3.1 combined with VT.
> >>
> >> STAT P 2.258062 43.0000% 488.997562 0
> >> STAT P 2.260121 44.0000% 485.672638 0
> >> STAT P 2.262175 45.0000% 486.854935 0
> >> RFG_Regions_stackPop(): Error: Stack underflow
> >> RFG_Regions_stackPop(): Error: Stack underflow
> >> VampirTrace [vt_otf_trc.c:1300]: Resource temporarily unavailable
> >> [nodo1][[43845,1],0][btl_tcp_frag.c:216:mca_btl_tcp_frag_recv]
> >> mca_btl_tcp_frag_recv: readv failed: Connection reset by peer (104)
> >> VampirTrace [vt_otf_trc.c:1300]: Resource temporarily unavailable
> >> RFG_Regions_stackPop(): Error: Stack underflow
> >> VampirTrace [vt_otf_trc.c:1300]: Resource temporarily unavailable
> >> --------------------------------------------------------------------------
> >>
> >> mpirun has exited due to process rank 1 with PID 8814 on
> >> node nodo2 exiting without calling "finalize". This may
> >> have caused other processes in the application to be
> >> terminated by signals sent by mpirun (as reported here).
> >> --------------------------------------------------------------------------
> >>
> >> [fialho_at_aoclsd gmwat]$
> >>
> >> Across different executions the error occurs at different points.
> >>
> >> Any help?
> >>
> >> Thanks,
> >>
> >> --
> >> Leonardo Fialho
> >> Computer Architecture and Operating Systems Department - CAOS
> >> Universidad Autonoma de Barcelona - UAB
> >> ETSE, Edificio Q, QC/3088
> >> http://www.caos.uab.es
> >> Phone: +34-93-581-2888
> >> Fax: +34-93-581-2478
> >>
> >> _______________________________________________
> >> devel mailing list
> >> devel_at_[hidden]
> >> http://www.open-mpi.org/mailman/listinfo.cgi/devel
> >
> >
>
>
> plain text document attachment (environ.txt)
> declare -x LD_LIBRARY_PATH="/home/fialho/local/tau-2.18.1p1/i386_linux/lib:/home/fialho/OSS/lib:/home/fialho/gmate/lib:/home/fialho/local/openmpi-1.3.1/lib:/home/fialho/dyninst/lib:/home/fialho/local/lib:/home/fialho/local/gsl-1.9/lib/:/home/fialho/dyninst/lib"
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel


