Open MPI User's Mailing List Archives

(This web mail archive is frozen; no new mails have been added to it since July 2016.)

Subject: Re: [OMPI users] Segmentation
From: Shafagh Jafer (barfy27_at_[hidden])
Date: 2008-09-21 04:38:30

Ok. I noticed that whenever my code calls an MPI function that has
"OMPI_DECLSPEC" in front of it in mpi.h, I get this segfault error. Could someone please tell me what "OMPI_DECLSPEC" is? Is it a macro that I need to enable?
For example, in MPICH the function MPI_Comm_size looks like the following in mpi.h:
int MPI_Comm_size(MPI_Comm, int *);
but the same function in Open MPI appears as follows:
OMPI_DECLSPEC int MPI_Comm_size(MPI_Comm comm, int *size);

--- On Sat, 9/20/08, Shafagh Jafer <barfy27_at_[hidden]> wrote:

From: Shafagh Jafer <barfy27_at_[hidden]>
Subject: Re: [OMPI users] Segmentation
To: "Open MPI Users" <users_at_[hidden]>
Date: Saturday, September 20, 2008, 9:50 PM

My code was working perfectly when I had it with MPICH; now I have replaced that with Open MPI. Could that be the problem? Do I need to change any part of my source code when migrating from MPICH-1.2.6 to Open MPI 1.2.7? Please let me know.
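[Editor's note: MPI source code is portable across implementations because both follow the MPI standard, but object files and binaries are not: the two implementations have different ABIs, so everything must be rebuilt from scratch against Open MPI's headers and compiler wrappers. A minimal rebuild sketch, with hypothetical file names and build targets:]

```shell
# Remove every object file that was compiled against MPICH's mpi.h
make clean

# Rebuild using Open MPI's compiler wrapper so the matching headers
# and libraries are picked up automatically (source file hypothetical)
mpicxx -g -o cd++ main.cpp

# Launch with Open MPI's mpirun, not MPICH's
mpirun -np 4 ./cd++
```

Linking MPICH-built objects against Open MPI's runtime is a classic cause of exactly this kind of "address not mapped" segfault.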

--- On Sat, 9/20/08, Aurélien Bouteiller <bouteill_at_[hidden]> wrote:

From: Aurélien Bouteiller <bouteill_at_[hidden]>
Subject: Re: [OMPI users] Segmentation
To: "Open MPI Users" <users_at_[hidden]>
Date: Saturday, September 20, 2008, 6:54 AM


You have a segfault in your own code. Open MPI detects it, forwards the
error to you, and pretty-prints it, but Open MPI is not the source of
the bug. From the stack trace, I suggest you debug the physicalGetId
function with gdb.
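[Editor's note: one way to follow this advice is to rebuild with debug symbols and run a single rank under gdb. A sketch under assumed paths and build targets taken from the stack trace below:]

```shell
# Rebuild with debug info and no optimization (hypothetical make variables)
make clean && make CXXFLAGS="-g -O0"

# Run one rank under the debugger; the binary path is from the trace
mpirun -np 1 gdb ./cd++
# Inside gdb:
#   (gdb) break physicalGetId    # stop in the function the trace points at
#   (gdb) run
#   (gdb) backtrace              # after the crash, inspect each frame
#   (gdb) print *this            # examine the CommPhyMPI object's state
```

The "Failing at address: 0xcf" line suggests a dereference through a near-null or corrupted pointer, which the backtrace should localize.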


On Sept 19, 2008, at 22:22, Shafagh Jafer wrote:

> Hi everyone,
> I need urgent help please :-(
> I am getting the following error when I run my program. The Open MPI
> compilation went fine, but now I don't understand
> the source of this error:
> ============================================
> [node01:29264] *** Process received signal ***
> [node01:29264] Signal: Segmentation fault (11)
> [node01:29264] Signal code: Address not mapped (1)
> [node01:29264] Failing at address: 0xcf
> [node01:29264] [ 0] /lib/tls/ [0x7ccf80]
> [node01:29264] [ 1] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (physicalGetId__C10CommPhyMPI+0x14) [0x8305880]
> [node01:29264] [ 2] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (physicalCommGetId__Fv+0x43) [0x82ff81b]
> [node01:29264] [ 3] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (openComm__16StandAloneLoader+0x1f) [0x80fdf43]
> [node01:29264] [ 4] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (run__21ParallelMainSimulator+0x1640) [0x81ea53c]
> [node01:29264] [ 5] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (main+0xde) [0x80a58ce]
> [node01:29264] [ 6] /lib/tls/
> [0xe3d79a]
> [node01:29264] [ 7] /nfs/sjafer/phd/openMPI/latest_cd++_timewarp/cd++
> (sinh+0x4d) [0x80a2221]
> [node01:29264] *** End of error message ***
> mpirun noticed that job rank 0 with PID 29264 on node node01 exited
> on signal 11 (Segmentation fault).
> ===========================================
> _______________________________________________
> users mailing list
> users_at_[hidden]

* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321