
Open MPI Development Mailing List Archives


Subject: [OMPI devel] openib btl and cq overflows
From: Steve Wise (swise_at_[hidden])
Date: 2012-07-02 11:24:27


Hello,

I'm debugging an issue with openmpi-1.4.5 and the openib btl over
Chelsio iWARP devices. I am the iWARP driver developer for this
device. I have debug code that detects CQ overflows, and I'm seeing RCQ
overflows during finalize for certain IMB runs with OMPI. So as the
recv WRs are flushed, I am seeing an overflow in the RCQ for that QP.
Note that Chelsio iWARP uses non-shared RQs, and its default .ini entry is:
receive_queues = P,65536,256,192,128

Here's the job details:

NP=16; mpirun -np ${NP} --host core96b1,core96b2,core96b3,core96b4 --mca
btl openib,sm,self /opt/openmpi-1.4.5/tests/IMB-3.2/IMB-MPI1 -npmin
${NP} alltoall

The nodes have 4-port iWARP adapters in them, so there are RDMA
connections set up over each port. As the alltoall I/O size hits 256, we
end up with 192 QPs per node, and that seems to be the stable QP count
until the test finishes and we see the overflow.

I added further debug code in my RDMA provider library to track the
total depth of all the QPs bound to each CQ, to see if the application is
oversubscribing the CQs. I see that for these jobs, OMPI is in fact
oversubscribing the CQs. Here's a snippet of my debug output:

warning, potential SCQ overflow: total_qp_depth 3120 SCQ depth 1088
warning, potential RCQ overflow: total_qp_depth 3312 RCQ depth 1088
warning, potential SCQ overflow: total_qp_depth 3120 SCQ depth 1088
warning, potential RCQ overflow: total_qp_depth 3312 RCQ depth 1088

I realize that OMPI can in fact be flow controlling such that the CQ
won't overflow even if the total QP depth exceeds the CQ depth. But I
do see overflows. And a CQ depth of 1088 seems quite small given the
sizes of the SQs and RQs in the above debug output. So it seems that
OMPI isn't scaling the CQ depth according to the job.

As an experiment, I overrode the CQ depth by adding '--mca
btl_openib_cq_size 16000' to the mpirun line, and I no longer see the
overflow.

Can all you openib btl experts out there describe the CQ sizing logic
and point me to the code I can dig into to see why we're overflowing
the RCQ during finalize? Also, does a CQ depth of 1088 seem reasonable
for this type of workload?

Thanks in advance!

Steve.