
Open MPI Development Mailing List Archives


From: Graham E Fagg (fagg_at_[hidden])
Date: 2006-01-19 13:08:21


On Thu, 19 Jan 2006, Rainer Keller wrote:

> And yes, when I run with the basic-coll, we also hang ;-]

in the first case you're running:
#8 0x407307a4 in ompi_coll_tuned_bcast_intra_basic_linear (buff=0x80c9c58,

which is actually the basic collective anyway.. it just got there via a
different path (in this case the tuned collective's decision logic picked
it, since for 2 procs a linear bcast of small messages is faster than a
segmented one).
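
Roughly, that decision step can be pictured like this (a minimal sketch
only, not the actual tuned component code -- the function name and the
threshold below are made up for illustration):

/* Illustrative sketch of the tuned bcast decision -- not the real
 * Open MPI code; the names and the cutoff value are hypothetical. */
#include <stddef.h>

enum bcast_alg { BCAST_BASIC_LINEAR, BCAST_SEGMENTED };

/* Pick a bcast algorithm from communicator size and message size in
 * bytes.  Tiny communicators and small messages fall through to the
 * plain linear bcast, which is why a 2-proc run lands in
 * ompi_coll_tuned_bcast_intra_basic_linear even with coll tuned. */
static enum bcast_alg choose_bcast(int comm_size, size_t msg_bytes)
{
    const size_t small_msg = 2048;       /* hypothetical cutoff */

    if (comm_size <= 2 || msg_bytes < small_msg)
        return BCAST_BASIC_LINEAR;       /* segmenting buys nothing here */
    return BCAST_SEGMENTED;              /* pipeline larger messages */
}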

>
> mpirun -np 2 --mca coll basic ./mpi_test_suite -r FULL -c MPI_COMM_WORLD -d
> MPI_INT
>
> #8 0x4070e402 in mca_coll_basic_bcast_lin_intra (buff=0x80c4ca0, count=1000,

> So, this was my initial search for whether we may have races in
> opal/mpi_free_list....
>

G