Open MPI User's Mailing List Archives


From: Neeraj Chourasia (neeraj_ch1_at_[hidden])
Date: 2007-10-11 01:16:08


Dear All,

Could anyone tell me the important tuning parameters for Open MPI with an InfiniBand (IB) interconnect? I tried setting the eager_rdma, min_rdma_size, and mpi_leave_pinned parameters from the mpirun command line on a 38-node cluster (38*2 processors), but in vain: a plain mpirun with no MCA parameters performed better. The test was a point-to-point send/receive with a message size of 8 MB.

Similarly, I patched the HPL Linpack code with libNBC (non-blocking collectives) and found no performance benefit. I went through the patch, and it is probably not overlapping computation with communication.

Any help in this direction would be appreciated.

-Neeraj
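
For reference, the test was roughly along these lines (a minimal sketch, not the exact code I ran; the MCA parameter names in the comment are the openib-BTL names I believe apply here, so please correct me if they differ on other releases):

/* Minimal point-to-point ping-pong sketch, 8 MB messages.
 * Build:  mpicc -o p2p p2p.c
 * Run, for example, with some of the MCA parameters mentioned above:
 *   mpirun -np 2 --mca mpi_leave_pinned 1 --mca btl_openib_use_eager_rdma 1 ./p2p
 * (parameter names differ across releases; "ompi_info --param btl openib"
 *  lists what the local build actually supports)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (8 * 1024 * 1024)   /* 8 MB, as in the test described above */
#define ITERS    100

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_SIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round-trip: %f s, bandwidth: %f MB/s\n",
               (t1 - t0) / ITERS,
               2.0 * MSG_SIZE / ((t1 - t0) / ITERS) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}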
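By "overlapping computation with communication" I mean the usual post/compute/wait structure, roughly as below (generic non-blocking point-to-point shown, since that is what I can sketch here; all names are illustrative only, not libNBC's API or HPL's code):

/* Sketch of the post/compute/wait overlap pattern that non-blocking
 * communication is meant to enable. Run with exactly 2 ranks. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double halo[N], interior[N];

int main(int argc, char **argv)
{
    int rank, peer;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;

    /* post the communication ... */
    if (rank == 0)
        MPI_Isend(halo, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);
    else
        MPI_Irecv(halo, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);

    /* ... do work that does not depend on the message while it is in flight */
    for (int i = 0; i < N; i++)
        interior[i] *= 2.0;              /* placeholder for real computation */

    /* block only when the communicated data is actually needed */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("overlap step done\n");

    MPI_Finalize();
    return 0;
}

As far as I understand, even with this structure the transfer only makes progress during the compute loop if the MPI library provides asynchronous progress, so the code change alone may not be enough to see a benefit.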