
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] possible bug in 1.3.2 sm transport
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-05-11 22:29:57


Hi Bryan

I have seen similar issues on LANL clusters when message sizes were
fairly large. How big are your buffers when you call Allreduce? Can
you send us your Allreduce call params (e.g., the reduce operation,
datatype, num elements)?

If you don't want to send that to the list, you can send it to me at
LANL.

Thanks
Ralph

On May 11, 2009, at 7:29 PM, Bryan Lally wrote:

> Eugene Loh wrote:
>
>> Another user reports something somewhat similar at
>> http://www.open-mpi.org/community/lists/users/2009/04/9154.php.
>> That problem seems to be associated with GCC 4.4.0. What
>> compiler are you using?
>
> OMPI was built with gcc v4.3.0, which is what's packaged with Fedora
> 9. The application code was built with that and NAG's Fortran, and
> in another test with Pathscale's C and Fortran. We're only using
> the C bindings to OMPI, never the Fortran bindings.
>
> - Bryan
>
> --
> Bryan Lally, lally_at_[hidden]
> 505.667.9954
> CCS-2
> Los Alamos National Laboratory
> Los Alamos, New Mexico
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel