Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] random IB failures when running medium core counts
From: Joshua Bernstein (jbernstein_at_[hidden])
Date: 2010-08-30 14:53:39


Hello Brock,

While it doesn't solve the problem, have you tried increasing the btl
timeouts as the message suggests? With 1884 cores in use, perhaps there
is some oversubscription in the fabric?
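
For example (just a sketch, not a tuned recommendation; the parameter name
comes from the error text quoted below, and the value and binary name are
placeholders), you could try raising the local ACK timeout on the mpirun
command line:

    # value and binary name are placeholders, not tuned recommendations
    mpirun --mca btl_openib_ib_timeout 20 -np 1884 ./xhpl

Per the formula in the error text, the default of 10 gives
4.096 us * 2^10, roughly 4.2 ms per retry, while 20 gives roughly
4.3 seconds. The retry count already defaults to its maximum of 7, so the
timeout is the knob left to turn.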

-Joshua Bernstein
Penguin Computing

Brock Palen wrote:
> We recently installed a modest IB network on our cluster. When running an 1884-core IB HPL job, we get an error about IB after a run. It does not always happen in the same place; some iterations will pass and others will fail. The error is below. We are using openmpi/1.4.2 with the Intel 11 compilers.
> Note that 1000-core jobs and other sizes also work well, but this larger one does not. Thanks!
>
> [[62713,1],1867][btl_openib_component.c:3224:handle_wc] from nyx5011.engin.umich.edu to: nyx5120 error polling LP CQ with status RETRY EXCEEDED ERROR status number 12 for wr_id 413569408 opcode 11112 vendor error 129 qp_idx 0
> --------------------------------------------------------------------------
> The InfiniBand retry count between two MPI processes has been
> exceeded. "Retry count" is defined in the InfiniBand spec 1.2
> (section 12.7.38):
>
> The total number of times that the sender wishes the receiver to
> retry timeout, packet sequence, etc. errors before posting a
> completion error.
>
> This error typically means that there is something awry within the
> InfiniBand fabric itself. You should note the hosts on which this
> error has occurred; it has been observed that rebooting or removing a
> particular host from the job can sometimes resolve this issue.
>
> Two MCA parameters can be used to control Open MPI's behavior with
> respect to the retry count:
>
> * btl_openib_ib_retry_count - The number of times the sender will
> attempt to retry (defaulted to 7, the maximum value).
> * btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
> to 10). The actual timeout value used is calculated as:
>
> 4.096 microseconds * (2^btl_openib_ib_timeout)
>
> See the InfiniBand spec 1.2 (section 12.7.34) for more details.
>
> Below is some information about the host that raised the error and the
> peer to which it was connected:
>
> Local host: nyx5011.engin.umich.edu
> Local device: mlx4_0
> Peer host: nyx5120
>
> You may need to consult with your system administrator to get this
> problem fixed.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 1867 with PID 3474 on
> node nyx5011 exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [nyx5049.engin.umich.edu:07901] [[62713,0],32] ORTED_CMD_PROCESSOR: STUCK IN INFINITE LOOP - ABORTING
> [nyx5049:07901] *** Process received signal ***
> [nyx5049:07901] Signal: Aborted (6)
> [nyx5049:07901] Signal code: (-6)
> [nyx5049:07901] [ 0] /lib64/libpthread.so.0 [0x2b5dcbc70b10]
> [nyx5049:07901] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x2b5dcbeae265]
> [nyx5049:07901] [ 2] /lib64/libc.so.6(abort+0x110) [0x2b5dcbeafd10]
> [nyx5049:07901] [ 3] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-rte.so.0(orte_daemon_cmd_processor+0x216) [0x2b5dcacdb7e6]
> [nyx5049:07901] [ 4] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_event_loop+0x2ca) [0x2b5dcaf3a9aa]
> [nyx5049:07901] [ 5] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_progress+0x5e) [0x2b5dcaf2d26e]
> [nyx5049:07901] [ 6] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/openmpi/mca_rml_oob.so [0x2b5dcce37e5c]
> [nyx5049:07901] [ 7] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-rte.so.0(orte_daemon_cmd_processor+0x3ae) [0x2b5dcacdb97e]
> [nyx5049:07901] [ 8] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_event_loop+0x2ca) [0x2b5dcaf3a9aa]
> [nyx5049:07901] [ 9] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_progress+0x5e) [0x2b5dcaf2d26e]
> [nyx5049:07901] [10] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/openmpi/mca_rml_oob.so [0x2b5dcce37e5c]
> [nyx5049:07901] [11] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-rte.so.0(orte_daemon_cmd_processor+0x3ae) [0x2b5dcacdb97e]
> [nyx5049:07901] [12] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_event_loop+0x2ca) [0x2b5dcaf3a9aa]
> [nyx5049:07901] [13] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-pal.so.0(opal_event_dispatch+0x8) [0x2b5dcaf3a6d8]
> [nyx5049:07901] [14] /home/software/rhel5/openmpi-1.4.2/intel-11.0/lib/libopen-rte.so.0(orte_daemon+0xaaf) [0x2b5dcacdb15f]
> [nyx5049:07901] [15] orted [0x401ad6]
> [nyx5049:07901] [16] /lib64/libc.so.6(__libc_start_main+0xf4) [0x2b5dcbe9b994]
> [nyx5049:07901] [17] orted [0x401999]
> [nyx5049:07901] *** End of error message ***
>
>
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users