I'm not sure if it will help, but newer firmware usually includes some bug fixes.
Also, try disabling leave_pinned for the run; it's on by default in 1.3.3.
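Something like this should do it; it's just a per-run MCA parameter, so no rebuild is needed (the application name below is only a placeholder):

```shell
# Disable mpi_leave_pinned for this run only (./my_app is a placeholder)
mpirun --mca mpi_leave_pinned 0 -np 4 ./my_app
```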
On Thu, Aug 13, 2009 at 5:12 AM, Allen Barnett <firstname.lastname@example.org> wrote:
I recently tried to build my MPI application against Open MPI 1.3.3. It
worked fine with OMPI 1.2.9, but with OMPI 1.3.3 it hangs partway
through. It does a fair amount of communication, but eventually it stops
in a Send/Recv point-to-point exchange. If I turn off the openib BTL, it
runs to completion. Also, I built 1.3.3 with memchecker (which is very
nice; thanks to everyone who worked on that!) and it runs to completion
even with openib active.
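For reference, this is roughly how I run it with openib excluded (the application name is just a placeholder):

```shell
# Exclude the openib BTL so Open MPI falls back to tcp/sm/self
# (./my_app is a placeholder)
mpirun --mca btl ^openib -np 4 ./my_app
```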
Our cluster consists of dual dual-core Opteron boxes with Mellanox
MT25204 (InfiniHost III Lx) HCAs and a Mellanox MT47396 InfiniScale-III
switch. We're running RHEL 4.8, which appears to include OFED 1.4. I've
built everything using GCC 4.3.2. Here is the relevant output from
ibv_devinfo.
"ompi_info --all" is attached.
        state:       active (4)
        max_mtu:     2048 (4)
        active_mtu:  2048 (4)
I'd appreciate any tips for debugging this.
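If it would help, I can rerun with extra BTL-level debug output turned up, e.g. (application name is again a placeholder):

```shell
# Raise BTL framework verbosity to get more detail about openib activity
# (./my_app is a placeholder)
mpirun --mca btl_base_verbose 100 -np 4 ./my_app
```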