
Open MPI Development Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-03-22 14:29:20


No, not a known problem -- my cluster is RHEL4U4 -- I use it for many
thousands of runs of the OMPI v1.2 branch every day...

Can you see where it's dying in orte_init_stage1?
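Incidentally, the failing address 0x2e342e33 is entirely printable ASCII, which often means string data is being dereferenced as a pointer (e.g. a version string like "1.2.x" getting clobbered over a pointer field). A quick sanity check, sketched in Python (the little-endian interpretation assumes 32-bit x86, which matches your backtrace):

```python
import struct

addr = 0x2E342E33
# Pack the address as a little-endian 32-bit integer to see the bytes
# in memory order; if they decode as printable ASCII, the "pointer"
# was likely string data.
raw = struct.pack("<I", addr)
print(raw.decode("ascii"))  # -> 3.4.
```

The bytes spell out "3.4." in memory order, so it's worth checking whether a version/config string is overrunning a structure in orte_init_stage1.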

On Mar 22, 2007, at 2:17 PM, Greg Watson wrote:

> Is this a known problem? Building ompi 1.2 on RHEL4:
>
> ./configure --with-devel-headers --without-threads
>
> (actually tried without '--without-threads' too, but no change)
>
> $ mpirun -np 2 test
> [beth:06029] *** Process received signal ***
> [beth:06029] Signal: Segmentation fault (11)
> [beth:06029] Signal code: Address not mapped (1)
> [beth:06029] Failing at address: 0x2e342e33
> [beth:06029] [ 0] /lib/tls/libc.so.6 [0x21b890]
> [beth:06029] [ 1] /usr/local/lib/libopen-rte.so.0(orte_init_stage1
> +0x293) [0xb7fc50cb]
> [beth:06029] [ 2] /usr/local/lib/libopen-rte.so.0(orte_system_init
> +0x1e) [0xb7fc84be]
> [beth:06029] [ 3] /usr/local/lib/libopen-rte.so.0(orte_init+0x6a)
> [0xb7fc4cee]
> [beth:06029] [ 4] mpirun(orterun+0x14b) [0x8049ecb]
> [beth:06029] [ 5] mpirun(main+0x2a) [0x8049d7a]
> [beth:06029] [ 6] /lib/tls/libc.so.6(__libc_start_main+0xd3)
> [0x208de3]
> [beth:06029] [ 7] mpirun [0x8049cc9]
> [beth:06029] *** End of error message ***
> Segmentation fault
>
> Thanks,
>
> Greg
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel

-- 
Jeff Squyres
Cisco Systems