
From: Terry D. Dontje (Terry.Dontje_at_[hidden])
Date: 2006-06-28 10:35:03


Can you set your limit coredumpsize to non-zero, rerun the program,
and then get the stack via dbx?
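
For reference, a rough sketch of the steps (the executable and core
file names below are just placeholders, and the mpirun options will
depend on your setup):

    # csh/tcsh: allow core files to be written (sh/bash: ulimit -c unlimited)
    limit coredumpsize unlimited

    # rerun the job so the crashing process leaves a core file
    mpirun -np 2 ./my_mpi_app

    # load the resulting core into dbx and print the call stack
    dbx ./my_mpi_app core
    (dbx) where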

So, I have a similar case of BUS_ADRALN on SPARC systems with an
older version (June 21st) of the trunk. I've since run using the
latest trunk and the bus error went away. I am now going to try this
with v1.1 to see if I get the same results. Your stack trace would
help me determine whether this is an Open MPI issue or possibly some
type of platform problem.

There is another thread with Eric Thibodeau, and I am unsure whether
it is the same issue as either of our situations.


>Message: 3
>Date: Wed, 28 Jun 2006 14:30:12 +0200
>From: openmpi-user <openmpi-user_at_[hidden]>
>Subject: Re: [OMPI users] OpenMPI 1.1: Signal:10
> info.si_errno:0(Unknown, error: 0), si_code:1(BUS_ADRALN) (Terry D.
> Dontje)
>To: users_at_[hidden]
>Message-ID: <44A27654.9060002_at_[hidden]>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Hi Terry,
>unfortunately I haven't got a stack trace.
>OS: Mac OS X 10.4.7 Server on the Xgrid server and Mac OS X 10.4.7
>Client on every node (G4 and G5). For testing purposes I've installed
>Open MPI 1.1 on a dual-G4 node and on a dual-G5 node, with my Xgrid
>consisting of only either the dual-G4 node or the dual-G5 node. No
>matter which configuration, I ran into the bus error.