
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] dynamic rules
From: Daniel Spångberg (daniels_at_[hidden])
Date: 2010-01-15 09:10:07


I have done this according to a suggestion on this list, until a fix arrives
that makes it possible to change this via the command line:

To choose bruck for all message sizes / MPI sizes with openmpi-1.4:

File $HOME/.openmpi/mca-params.conf (replace /homeXXXXX so that it points to
the correct file):
coll_tuned_use_dynamic_rules=1
coll_tuned_dynamic_rules_filename="/homeXXXX/.openmpi/dynamic_rules_file"

file $HOME/.openmpi/dynamic_rules_file:
1 # num of collectives
3 # ID = 3 Alltoall collective (ID in coll_tuned.h)
1 # number of com sizes
0 # comm size
1 # number of msg sizes
0 3 0 0 # for message size 0, bruck, topo 0, 0 segmentation
# end of collective rule
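Assuming the same file format as above, the rules file can also switch algorithms at a message-size threshold by listing more than one message-size rule. The following sketch writes such a file; the 8192-byte cutoff is purely illustrative, and the pairwise ID 2 is taken from the ompi_info output below:

```shell
# Sketch: generate a rules file using bruck (ID 3) for small messages
# and pairwise (ID 2) from 8192 bytes upward (threshold is illustrative).
cat > dynamic_rules_file <<'EOF'
1 # num of collectives
3 # ID = 3 Alltoall collective (ID in coll_tuned.h)
1 # number of com sizes
0 # comm size
2 # number of msg sizes
0 3 0 0 # from message size 0: bruck, topo 0, 0 segmentation
8192 2 0 0 # from message size 8192: pairwise, topo 0, 0 segmentation
EOF
```

Point coll_tuned_dynamic_rules_filename at the generated file as shown above.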

Change the number 3 to something else for other algorithms (these can be
found with ompi_info -a, for example):

    MCA coll: information "coll_tuned_alltoall_algorithm_count" (value: "4")
              Number of alltoall algorithms available
    MCA coll: parameter "coll_tuned_alltoall_algorithm" (current value: "0")
              Which alltoall algorithm is used. Can be locked down to choice
              of: 0 ignore, 1 basic linear, 2 pairwise, 3 modified bruck,
              4 two proc only.
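To find the corresponding IDs on your own installation, you can filter the full parameter dump; this requires Open MPI in your PATH, and the grep pattern is just one way to narrow the output:

```shell
# Prints the alltoall algorithm count and the list of available
# algorithm IDs, as shown in the output above.
ompi_info -a | grep -A 3 coll_tuned_alltoall_algorithm
```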

HTH
Daniel Spångberg

Den 2010-01-15 13:54:33 skrev Roman Martonak <r.martonak_at_[hidden]>:

> On my machine I need to use dynamic rules to enforce the bruck or pairwise
> algorithm for alltoall, since unfortunately the default basic linear
> algorithm performs quite poorly on my Infiniband network. A few months ago
> I noticed that in the case of VASP, however, using dynamic rules via
> --mca coll_tuned_use_dynamic_rules 1 -mca
> coll_tuned_dynamic_rules_filename dyn_rules
> has no effect at all. Later it was identified that there was a bug causing
> the dynamic rules to apply only to MPI_COMM_WORLD but not to other
> communicators. As far as I understand, the bug was fixed in openmpi-1.3.4.
> I have now tried the openmpi-1.4 version and expected that tuning alltoall
> via dynamic rules would work, but there is still no effect at all. Even
> worse, it is now not even possible to use static rules (which worked
> previously) such as -mca coll_tuned_alltoall_algorithm 3, because the code
> crashes (as discussed on the list recently). When running with
> --mca coll_base_verbose 1000, I get messages like:
>
> [compute-0-0.local:08011] coll:sm:comm_query (0/MPI_COMM_WORLD):
> intercomm, comm is too small, or not all peers local; disqualifying
> myself
> [compute-0-0.local:08011] coll:base:comm_select: component not
> available: sm
> [compute-0-0.local:08011] coll:base:comm_select: component available:
> sync, priority: 50
> [compute-0-3.local:26116] coll:base:comm_select: component available:
> self, priority: 75
> [compute-0-3.local:26116] coll:sm:comm_query (1/MPI_COMM_SELF):
> intercomm, comm is too small, or not all peers local; disqualifying
> myself
> [compute-0-3.local:26116] coll:base:comm_select: component not
> available: sm
> [compute-0-3.local:26116] coll:base:comm_select: component available:
> sync, priority: 50
> [compute-0-3.local:26116] coll:base:comm_select: component not
> available: tuned
> [compute-0-0.local:08011] coll:base:comm_select: component available:
> tuned, priority: 30
>
> Is there now a way to use other alltoall algorithms instead of the
> basic linear algorithm in openmpi-1.4.x?
>
> Thanks in advance for any suggestion.
>
> Best regards
>
> Roman Martonak
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Daniel Spångberg
Materialkemi
Uppsala Universitet