Open MPI Development Mailing List Archives

From: Galen M. Shipman (gshipman_at_[hidden])
Date: 2005-10-11 17:06:59


When running the NPB FT benchmark on 128 nodes with problem size C, I get the
following error with both btl_tcp and btl_mvapi:

-bash-3.00$ mpirun -np 128 -machinefile ~/dqlist -mca btl self,tcp -mca mpi_leave_pinned 0 ./bin/ft.C.128

NAS Parallel Benchmarks 2.3 -- FT Benchmark

No input file inputft.data. Using compiled defaults
Size : 512x512x512
Iterations : 20
Number of processes : 128
Processor array : 1x128
Layout type : 1D
[dq049:27360] *** An error occurred in MPI_Reduce
[dq049:27360] *** on communicator MPI_COMM_WORLD
[dq049:27360] *** MPI_ERR_OP: invalid reduce operation
[dq049:27360] *** MPI_ERRORS_ARE_FATAL (goodbye)
[dq048:27568] *** An error occurred in MPI_Reduce
[dq048:27568] *** on communicator MPI_COMM_WORLD
[dq048:27568] *** MPI_ERR_OP: invalid reduce operation
[dq048:27568] *** MPI_ERRORS_ARE_FATAL (goodbye)
[dq088:24879] *** An error occurred in MPI_Reduce
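For reference, the call that reports MPI_ERR_OP is MPI_Reduce. If I recall the NPB FT sources correctly, the checksum verification reduces a double-complex value with MPI_SUM, so the failing call shape should be roughly the sketch below. This is only an illustration from C, not the benchmark code, and the dcomplex struct and the use of MPI_DOUBLE_COMPLEX are my assumptions about what FT actually reduces:

/* Minimal sketch (not the NPB source): probes whether this MPI
 * installation accepts the op/datatype pairing that FT's checksum
 * reduction presumably uses -- MPI_SUM over a double-complex value. */
#include <mpi.h>
#include <stdio.h>

typedef struct { double re, im; } dcomplex;  /* assumed layout of the reduced value */

int main(int argc, char **argv)
{
    dcomplex local = { 1.0, 2.0 }, global = { 0.0, 0.0 };
    int rc;

    MPI_Init(&argc, &argv);

    /* Return errors instead of aborting, so an unsupported
     * op/datatype combination is reported rather than fatal. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Reduce(&local, &global, 1, MPI_DOUBLE_COMPLEX,
                    MPI_SUM, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Reduce failed: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}

If something like this also fails here, that would point at the reduction operation/datatype support rather than at FT itself; if it succeeds, the problem is elsewhere.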