Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] 1.2.8 testing
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-10-13 14:47:06


K, thanks. I don't think I've run the tests up to 16ppn to be able to
help out here, sorry...

On Oct 13, 2008, at 2:41 PM, Ralph Castain wrote:

> I'll test 1.2.8 on our Lobo system tomorrow (out today). Primary issue
> we are seeing there frankly is that some of the tests simply fail when
> you get up to 16ppn - in one case, it appears that the memory
> allocated during the test overflows available memory on the node when
> you get that many procs. So sorting out which tests run at 16ppn and
> which don't has become a bit of a challenge.
>
> I'll see what I can do, though.
> Ralph
>
>
> On Oct 13, 2008, at 12:12 PM, Jeff Squyres wrote:
>
>> On Oct 13, 2008, at 1:34 PM, Jeff Squyres wrote:
>>
>>> MPI_Bsend_init_rtoa_f
>>> MPI_Bsend_rtoa_f
>>> MPI_Ibsend_rtoa_f
>>
>>
>> These tests fail with the PGI fortran compiler because they are
>> trying to allocate a 1.5MB buffer on the stack (i.e., they segv
>> before the first executable line of code). Reducing the size of the
>> buffer makes the tests pass.
>>
>> The size of the buffer was increased by Rolf when he updated the
>> intel tests to run with more than 64 procs. So I'm pretty sure
>> this is a new failure.
>>
>> Rolf and I will work out what to do about the intel test, but for
>> 1.2.8, I think we're good to go. It would be good to get one more
>> confirmation from someone else, though.
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>> _______________________________________________
>> devel mailing list
>> devel_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>

-- 
Jeff Squyres
Cisco Systems