
Subject: Re: [OMPI users] MPI_Reduce error over Infiniband or TCP
From: Ralph Castain (rhc_at_[hidden])
Date: 2011-07-05 11:28:54


Looks like your code is passing an invalid argument to MPI_Reduce...
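
For what it's worth, here is a minimal sketch of a valid MPI_Reduce call on a sub-communicator built with MPI_Comm_create; the group construction, variable names, and buffer sizes below are illustrative, not taken from your code. MPI_ERR_ARG usually means one of the arguments failed Open MPI's parameter checks: a negative count, a datatype/op combination that does not match, a root rank that is not valid in the sub-communicator, or a NULL receive buffer at the root.

/* reduce_sketch.c -- hypothetical example, not the poster's code.
 * An MPI_Reduce on a sub-communicator created with MPI_Comm_create,
 * with the argument constraints that commonly trigger MPI_ERR_ARG noted. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Build a sub-communicator from the even-numbered world ranks
       (a stand-in for whatever your "CREATE FROM 0" communicator is). */
    MPI_Group world_group, even_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    int  n_even     = (world_size + 1) / 2;
    int *even_ranks = malloc(n_even * sizeof(int));
    for (int i = 0; i < n_even; i++)
        even_ranks[i] = 2 * i;
    MPI_Group_incl(world_group, n_even, even_ranks, &even_group);

    MPI_Comm even_comm;
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

    /* Processes outside the group get MPI_COMM_NULL and must not
       participate in the reduction. */
    if (even_comm != MPI_COMM_NULL) {
        double local = (double)world_rank;
        double sum   = 0.0;
        int    count = 1;   /* must be >= 0                              */
        int    root  = 0;   /* must be a valid rank inside even_comm     */

        /* Datatype and op must be a valid combination (MPI_SUM with
           MPI_DOUBLE is), and the root's receive buffer must not be NULL. */
        MPI_Reduce(&local, &sum, count, MPI_DOUBLE, MPI_SUM, root, even_comm);

        MPI_Comm_free(&even_comm);
    }

    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    free(even_ranks);
    MPI_Finalize();
    return 0;
}
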

On Jul 5, 2011, at 9:20 AM, yanyg_at_[hidden] wrote:

> Dear all,
>
> We are testing Open MPI over Infiniband, and we get an MPI_Reduce
> error message when we run our code over either the TCP or the
> Infiniband interface, as follows:
>
> ---
> [gulftown:25487] *** An error occurred in MPI_Reduce
> [gulftown:25487] *** on communicator MPI COMMUNICATOR 3 CREATE FROM 0
> [gulftown:25487] *** MPI_ERR_ARG: invalid argument of some other kind
> [gulftown:25487] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>
> Elapsed time: 6:33.78
> <Done>
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 25428 on node gulftown
> exiting without calling "finalize". This may have caused other processes
> in the application to be terminated by signals sent by mpirun (as
> reported here).
> --------------------------------------------------------------------------
>
> ---
>
> Any hints?
>
> Thanks,
> Yiguang
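
Also, since MPI_ERRORS_ARE_FATAL aborts the job at the first error, it can help to switch the communicator's error handler to MPI_ERRORS_RETURN and print the error string Open MPI attaches to the return code; that usually narrows down which argument is being rejected. A rough sketch follows (the buffer, count, root, and communicator names are placeholders, not names from your code):

/* Hypothetical helper: call MPI_Reduce with MPI_ERRORS_RETURN so the
 * offending argument can be reported instead of the job aborting. */
#include <mpi.h>
#include <stdio.h>

static void reduce_or_report(double *sendbuf, double *recvbuf,
                             int count, int root, MPI_Comm comm)
{
    /* Return error codes to the caller instead of aborting immediately. */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    int rc = MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM,
                        root, comm);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int  len = 0;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Reduce failed: %s\n", msg);
        MPI_Abort(comm, rc);
    }
}
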