
Open MPI User's Mailing List Archives


From: Terry D. Dontje (Terry.Dontje_at_[hidden])
Date: 2006-06-28 13:02:16


Well, I've been using the trunk and not 1.1. I also just built
1.1.1a1r10538 and ran it with no bus error. You are running
1.1b5r10421, though, so we're not running the same thing yet.

I have a cluster of two v440s, each with 4 CPUs, running Solaris 10.
The tests I am running use np=2, with one process on each node.
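
For reference, a typical invocation for that layout would be something
like this (the hostnames and test program name are placeholders, not
the actual test):

   mpirun -np 2 --host node1,node2 ./mpi_test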

--td

Eric Thibodeau wrote:

>Terry,
>
> I was about to comment on this. Could you tell me the specs of your machine? As you will notice in "my thread", I am running into problems on SPARC SMP systems where the CPU boards' RTCs are in a doubtful state. Are you running 1.1 on SMP machines? If so, on how many procs, and what hardware/OS version is this running on?
>
>ET
>
>On Wednesday, June 28, 2006 at 10:35, Terry D. Dontje wrote:
>
>
>>Frank,
>>
>>Can you set your coredumpsize limit to non-zero, rerun the program,
>>and then get the stack via dbx?
>>
>>I have a similar case of BUS_ADRALN on SPARC systems with an older
>>version (June 21st) of the trunk. I've since run using the latest
>>trunk and the bus error went away. I am now going to try this with
>>v1.1 to see if I get similar results. Your stack would help me
>>determine whether this is an Open MPI issue or possibly some type of
>>platform problem.
>>
>>There is another thread with Eric Thibodeau, though I am unsure
>>whether it covers the same issue as either of our situations.
>>
>>--td
>>
>>
>[...snip...]
>
>
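
P.S. For anyone reproducing this: BUS_ADRALN is the SIGBUS code for an
invalid address alignment, i.e. an unaligned load or store. The
core-file steps requested above would look roughly like this under csh
on Solaris (the program name is a placeholder):

   % limit coredumpsize unlimited
   % mpirun -np 2 ./mpi_test      (run until it hits the bus error)
   % dbx ./mpi_test core
   (dbx) where                    (prints the stack trace)

And a minimal sketch of the kind of access that triggers BUS_ADRALN on
SPARC (illustrative only, not taken from Open MPI):

   #include <stdio.h>

   int main(void) {
       char buf[16];
       /* buf + 1 is not 8-byte aligned; storing a double through it
        * raises SIGBUS with code BUS_ADRALN on SPARC. */
       double *p = (double *)(buf + 1);
       *p = 1.0;
       printf("%f\n", *p);
       return 0;
   }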