Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] running osu mpi benchmark tests on Infiniband setup
From: Ralph Castain (rhc_at_[hidden])
Date: 2011-10-19 16:08:35


I don't think we handle this:

> -H 192.168.4.91 -H 192.168.4.92

You need to have only one -H option; use commas to separate the values.
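For example, the original command would become something like the following (a sketch using the hosts, paths, and MCA parameters from the post below; it assumes the same Infiniband setup and is not a tested invocation):

```shell
# Single -H option with comma-separated hosts (one rank launched per host)
mpirun --prefix /usr/local/ -np 2 \
       --mca btl openib,self \
       -H 192.168.4.91,192.168.4.92 \
       --mca orte_base_help_aggregate 0 \
       --mca btl_openib_cpc_include oob \
       /root/osu_benchmarks-3.1.1/osu_latency
```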

On Oct 19, 2011, at 12:48 PM, ramu wrote:

> Hi,
> I am trying to run osu mpi benchmark tests on Infiniband setup (connected
> back-to-back via Mellanox hw). I am using the below command
> "mpirun --prefix /usr/local/ -np 2 --mca btl openib,self -H 192.168.4.91 -H
> 192.168.4.92 --mca orte_base_help_aggregate 0 --mca btl_openib_cpc_include oob
> /root/osu_benchmarks-3.1.1/osu_latency
> "
> But I am getting the error as
> "[Isengard:05030] *** An error occurred in MPI_Barrier
> [Isengard:05030] *** on communicator MPI_COMM_WORLD
> [Isengard:05030] *** MPI_ERR_IN_STATUS: error code in status
> [Isengard:05030] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> [Rohan:05010] *** An error occurred in MPI_Barrier
> [Rohan:05010] *** on communicator MPI_COMM_WORLD
> [Rohan:05010] *** MPI_ERR_IN_STATUS: error code in status
> [Rohan:05010] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> "
>
> Am I missing anything in the above command? Please advise.
>
> Regards,
> Ramu
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users