Sorry for the super-late reply. :-\
Yes, ERR_TRUNCATE means that the receiver didn't have a large enough buffer.
Have you tried upgrading to a newer version of Open MPI? 1.4.3 is the current stable release. (I have a very dim, not-guaranteed-to-be-correct recollection that we fixed something in the collective internals with regard to ERR_TRUNCATE...?)
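A minimal sketch of the classic way to hit MPI_ERR_TRUNCATE, i.e. a receive buffer smaller than the incoming message; the point-to-point setup and all names here are illustrative, not taken from this thread:

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    int data[4] = {1, 2, 3, 4};
    int small[2];                 /* receiver only provides room for 2 ints */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The count (2) is smaller than the 4 ints actually sent, so the
           message is truncated and MPI reports MPI_ERR_TRUNCATE. */
        MPI_Recv(small, 2, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}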
On Apr 25, 2011, at 4:44 PM, Wei Hao wrote:
> I'm running openmpi 1.2.8. I'm working on a project where one part involves communicating an integer, representing the number of data points I'm keeping track of, to all the processors. The line is simple:
> where np and geo_N are integers, np is the result of a local calculation, and geo_N has been declared on all the processors. geo_N is nondecreasing. This line works the first time I call it (geo_N goes from 0 to some other integer), but if I call it later in the program, I get the following error:
> [woodhen-039:26189] *** An error occurred in MPI_Allreduce
> [woodhen-039:26189] *** on communicator MPI_COMM_WORLD
> [woodhen-039:26189] *** MPI_ERR_TRUNCATE: message truncated
> [woodhen-039:26189] *** MPI_ERRORS_ARE_FATAL (goodbye)
> As I understand it, MPI_ERR_TRUNCATE means that the output buffer is too small, but I'm not sure where I've made a mistake. It's particularly frustrating because it seems to work fine the first time. Does anyone have any thoughts?
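For reference, the MPI_Allreduce line itself did not survive in the quoted message; a call matching the description would look roughly like the sketch below. The reduction operation (MPI_SUM) and the C binding are assumptions, not something stated in the thread; np and geo_N are the names used in the message.

#include <mpi.h>

int main(int argc, char **argv)
{
    int np, geo_N = 0;

    MPI_Init(&argc, &argv);

    /* In the original program np comes from a local calculation;
       a fixed value stands in for it here. */
    np = 42;

    /* Hypothetical form of the call described above: one MPI_INT reduced
       across MPI_COMM_WORLD into geo_N on every rank. The MPI_SUM operation
       is an assumption. With a count of 1 and the same datatype everywhere,
       this call by itself should not produce MPI_ERR_TRUNCATE. */
    MPI_Allreduce(&np, &geo_N, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}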