
Subject: Re: [OMPI users] dynamic rules
From: Daniel Spångberg (daniels_at_[hidden])
Date: 2010-01-20 14:08:55


On 2010-01-16 16:31:24, Roman Martonak <r.martonak_at_[hidden]> wrote:

>> Terribly sorry, I should have checked my own notes thoroughly before giving
>> others advice. One needs to give the dynamic rules file location on the
>> command line:
>>
>> mpirun -mca coll_tuned_use_dynamic_rules 1 -mca
>> coll_tuned_dynamic_rules_filename /homeXXXX/.openmpi/dynamic_rules_file
>>
>> That works for me with openmpi 1.4. I have not tried 1.4.1 yet.
>
> Thanks, I will try it. VASP uses Cartesian topology communicators.
> Should the dynamic rules also work in this case in openmpi-1.4? In
> openmpi-1.3.2 and previous versions the dynamic rules specified via a
> dynamic rules file had no effect at all for VASP.

I just tried alltoall with a communicator I created that uses half the
slots, with a 256-byte message size. The fixed rules use bruck for messages
smaller than 200 bytes on (I think) 12 processes and up, so my test should
never use bruck. On 512 cores (using 256 for the comm) the fixed rules
take about 10 ms per alltoall. Using a dynamic rules file that forces bruck
brings that down to about 1 ms per alltoall, so 10 times quicker. So, yes,
it seems that communicators other than MPI_COMM_WORLD are also affected by
the dynamic rules file.
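
The rules file I used to force bruck looked roughly like the sketch below.
I am writing this from memory, so treat the IDs as assumptions: I believe
collective ID 3 is alltoall and algorithm ID 3 is (modified) bruck in the
1.4 coll tuned component, but please check the coll_tuned sources or
ompi_info --all for your version before relying on them.

    1          # number of collectives described in this file
    3          # collective ID (3 = alltoall, I believe)
    1          # number of communicator-size rules
    1          # this rule applies from comm size 1 upward
    1          # number of message-size rules for this comm size
    0 3 0 0    # from msg size 0: algorithm 3 (bruck), topo 0, segmentation 0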

My communicator:

   MPI_Group world_group, half_group;
   MPI_Comm half_comm;
   int size, rank;
   int ranges[1][3];

   MPI_Comm_group(MPI_COMM_WORLD, &world_group);
   MPI_Comm_size(MPI_COMM_WORLD, &size);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   /* Include every second rank: 0, 2, ..., size-2, i.e. half the slots. */
   ranges[0][0] = 0;         /* first rank in the range */
   ranges[0][1] = size - 2;  /* last rank in the range */
   ranges[0][2] = 2;         /* stride */
   MPI_Group_range_incl(world_group, 1, ranges, &half_group);
   /* Ranks not in half_group get MPI_COMM_NULL here. */
   MPI_Comm_create(MPI_COMM_WORLD, half_group, &half_comm);
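
The timing itself was just a repeated MPI_Alltoall on half_comm, something
along these lines (a sketch, not my exact code; the buffer layout is an
assumption, here 256 bytes sent to each destination, and you need stdio.h,
stdlib.h and string.h on top of mpi.h):

   if (half_comm != MPI_COMM_NULL) {
      const int MSG = 256;    /* bytes per destination */
      const int NREP = 100;   /* repetitions to average over */
      int csize, i;
      char *sendbuf, *recvbuf;
      double t0, t1;

      MPI_Comm_size(half_comm, &csize);
      sendbuf = malloc((size_t)MSG * csize);
      recvbuf = malloc((size_t)MSG * csize);
      memset(sendbuf, 0, (size_t)MSG * csize);  /* contents don't matter */

      MPI_Barrier(half_comm);
      t0 = MPI_Wtime();
      for (i = 0; i < NREP; i++)
         MPI_Alltoall(sendbuf, MSG, MPI_CHAR,
                      recvbuf, MSG, MPI_CHAR, half_comm);
      t1 = MPI_Wtime();

      if (rank == 0)
         printf("%g s per alltoall\n", (t1 - t0) / NREP);
      free(sendbuf);
      free(recvbuf);
   }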

HTH

-- 
Daniel Spångberg
Materialkemi
Uppsala Universitet