Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Reduce error over Infiniband or TCP
From: Ralph Castain (rhc_at_[hidden])
Date: 2011-07-05 11:28:54


Looks like your code is passing an invalid argument to MPI_Reduce...
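For readers hitting the same abort: MPI_ERR_ARG ("invalid argument of some other kind") from MPI_Reduce usually means one of a few argument mistakes, such as a root rank that is not valid in the communicator, a negative count, an op that does not apply to the datatype, or (on the root) passing the same pointer as both sendbuf and recvbuf instead of MPI_IN_PLACE. A minimal sketch of the correct call shapes follows; the variable names are illustrative and not taken from the poster's code:

```c
/* Sketch of valid MPI_Reduce argument patterns (compile with mpicc,
 * run with mpirun). The aliased-buffer case noted below is a common
 * cause of an "invalid argument" abort on the root rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0;  /* per-rank contribution (illustrative) */
    double total = 0.0;

    /* Correct: distinct send/recv buffers, a root (0) that exists in the
     * communicator, count/datatype matching the buffers, predefined op. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    /* If the root reduces into its own input buffer, it must pass
     * MPI_IN_PLACE as sendbuf; passing &local as both sendbuf and
     * recvbuf is erroneous and can trigger exactly this abort. */
    if (rank == 0)
        MPI_Reduce(MPI_IN_PLACE, &local, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);
    else
        MPI_Reduce(&local, NULL, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %g\n", total);

    MPI_Finalize();
    return 0;
}
```

Checking the root argument and the sendbuf/recvbuf aliasing on "MPI COMMUNICATOR 3 CREATE FROM 0" (the sub-communicator named in the abort message) would be the first things to try.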

On Jul 5, 2011, at 9:20 AM, yanyg_at_[hidden] wrote:

> Dear all,
>
> We are testing Open MPI over InfiniBand, and we get an MPI_Reduce
> error message when we run our code over either the TCP or
> InfiniBand interface, as follows:
>
> ---
> [gulftown:25487] *** An error occurred in MPI_Reduce
> [gulftown:25487] *** on communicator MPI COMMUNICATOR 3
> CREATE FROM 0
> [gulftown:25487] *** MPI_ERR_ARG: invalid argument of some
> other kind
> [gulftown:25487] *** MPI_ERRORS_ARE_FATAL (your MPI job will
> now abort)
>
> Elapsed time: 6:33.78
> <Done>
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 25428 on
> node gulftown exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
>
> ---
>
> Any hints?
>
> Thanks,
> Yiguang
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users