This page is part of a frozen web archive of this mailing list; no new mails have been added to it since July of 2016.
I have tested for the MPI_ABORT problem I was seeing and it appears
to be fixed in the trunk.
On Oct 28, 2006, at 8:45 AM, Jeff Squyres wrote:
> Sorry for the delay on this -- is this still the case with the OMPI
> We think we finally have all the issues solved with MPI_ABORT on the
> On Oct 16, 2006, at 8:29 AM, Åke Sandgren wrote:
>> On Mon, 2006-10-16 at 10:13 +0200, Åke Sandgren wrote:
>>> On Fri, 2006-10-06 at 00:04 -0400, Jeff Squyres wrote:
>>>> On 10/5/06 2:42 PM, "Michael Kluskens" <mklus_at_[hidden]> wrote:
>>>>> System: BLACS 1.1p3 on Debian Linux 3.1r3 on dual-opteron, gcc
>>>>> Intel ifort 9.0.32 all tests with 4 processors (comments below)
>>>>> OpenMPI 1.1.1 patched and OpenMPI 1.1.2 patched:
>>>>> C & F tests: no errors with default data set. F test slowed down
>>>>> in the middle of the tests.
>>>> Good. Can you expand on what you mean by "slowed down"?
>>> Let's add some more data to this...
>>> BLACS 1.1p3
>>> Ubuntu Dapper 6.06
>>> dual opteron
>>> gcc 4.0
>>> gfortran 4.0 (for both f77 and f90)
>>> standard tests with 4 tasks on one node (i.e. 2 tasks per cpu)
>>> OpenMPI 1.1.2rc3
>>> The tests come to a complete standstill at the integer bsbr tests.
>>> They consume CPU all the time but nothing happens.
>> Actually, if I'm not too impatient, it will progress, but VERY slowly.
>> A complete run of the BLACS tests takes 30+ minutes of CPU time...
>> From the bsbr tests onwards, everything takes "forever".