Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] Is trunk broken ?
From: Ralph H Castain (rhc_at_[hidden])
Date: 2008-06-19 14:05:44


You'll have to tell us something more than that, Pasha: what kind of
environment, what rev level you were at, etc.

So far as I know, the trunk is fine.

On 6/19/08 12:01 PM, "Pavel Shamis (Pasha)" <pasha_at_[hidden]> wrote:

> I tried to run the trunk on my machines and got the following error:
>
> [sw214:04367] [[16563,1],1] ORTE_ERROR_LOG: Data unpack would read past
> end of buffer in file base/grpcomm_base_modex.c at line 451
> [sw214:04367] [[16563,1],1] ORTE_ERROR_LOG: Data unpack would read past
> end of buffer in file grpcomm_basic_module.c at line 560
> [sw214:04365]
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> orte_grpcomm_modex failed
> --> Returned "Data unpack would read past end of buffer" (-26) instead
> of "Success" (0)
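
Since the failure happens inside MPI_Init itself (during the orte_grpcomm
modex), no application logic is needed to trigger it; any program that merely
initializes MPI should hit the same error on an affected build. A minimal
sketch of such a reproducer (an illustration, not Pasha's actual test case):

    /* Minimal sketch: the ORTE_ERROR_LOG messages above are emitted
     * from inside MPI_Init, before any user communication happens. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* On an affected trunk build, the modex error aborts here. */
        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d initialized OK\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched across two or more processes (e.g. mpirun -np 2),
this would abort with the ORTE_ERROR_LOG output shown above before ever
reaching the printf on a broken build, and print one line per rank on a
healthy one.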