Open MPI User's Mailing List Archives

Subject: [OMPI users] MPI_Reduce error over Infiniband or TCP
From: yanyg_at_[hidden]
Date: 2011-07-05 11:20:55

Dear all,

We are testing Open MPI over InfiniBand, and we get the following
MPI_Reduce error when running our codes over either the TCP or the
InfiniBand interface:

[gulftown:25487] *** An error occurred in MPI_Reduce
[gulftown:25487] *** on communicator MPI COMMUNICATOR 3
[gulftown:25487] *** MPI_ERR_ARG: invalid argument of some other kind
[gulftown:25487] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
Elapsed time: 6:33.78
mpirun has exited due to process rank 0 with PID 25428 on node
gulftown exiting without calling "finalize". This may have caused
other processes in the application to be terminated by signals sent
by mpirun (as reported here).
Any hints?
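For reference, here is a minimal sketch of a correct MPI_Reduce call
(illustrative only, not our actual code, which is much larger). The
comments mark the argument mistakes that, as far as I know, commonly
trigger MPI_ERR_ARG: an out-of-range root rank, or a count/datatype/op
that does not match across the ranks of the communicator.

```c
/* Minimal MPI_Reduce sketch -- a reduced example, not the real code.
 * Compile: mpicc reduce_demo.c -o reduce_demo
 * Run:     mpirun -np 4 ./reduce_demo
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendval = rank;   /* each rank contributes its own rank number */
    int recvval = 0;      /* receive buffer; only significant on root  */
    const int root = 0;   /* must be a valid rank (0 <= root < size) on
                             every process; an out-of-range root is one
                             common cause of MPI_ERR_ARG               */

    /* count, datatype, op, and root must agree on all ranks of the
       communicator; a mismatch can also surface as MPI_ERR_ARG. */
    MPI_Reduce(&sendval, &recvval, 1, MPI_INT, MPI_SUM, root,
               MPI_COMM_WORLD);

    if (rank == root)
        printf("sum of ranks = %d\n", recvval);

    MPI_Finalize();
    return 0;
}
```

This sketch runs cleanly here; in our real code the reduce happens on a
derived communicator (MPI COMMUNICATOR 3 in the log above), so the
mismatch, if any, would be in how the arguments differ across that
communicator's ranks.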