We have run some tests, and the option btl_sm_eager_limit has a positive
effect on performance. Eugene, thank you for your links.
Now, to offer good support to our users, we would like to get the
value of this parameter at runtime. I am aware I can get the value by
running ompi_info as follows:
ompi_info --param btl all | grep btl_sm_eager_limit
but can I get the value during the computation, when I run mpirun -np 12
--mca btl_sm_eager_limit 8192 my_binary? This value could then be compared
with the buffer size in my code and a warning printed to the output.
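
Something like the sketch below is what I have in mind. It assumes that
mpirun exports --mca settings to the launched processes as OMPI_MCA_*
environment variables (so it only sees a value that was set explicitly,
not the built-in default), and MY_BUFFER_SIZE is a hypothetical stand-in
for our real buffer size:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MY_BUFFER_SIZE 16384  /* hypothetical application buffer size */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* --mca btl_sm_eager_limit 8192 on the mpirun command line should
     * appear here as the OMPI_MCA_btl_sm_eager_limit environment variable */
    const char *val = getenv("OMPI_MCA_btl_sm_eager_limit");
    if (rank == 0) {
        if (val != NULL) {
            long eager = strtol(val, NULL, 10);
            if (eager < MY_BUFFER_SIZE) {
                fprintf(stderr,
                        "Warning: btl_sm_eager_limit (%ld) is smaller than "
                        "the application buffer size (%d)\n",
                        eager, MY_BUFFER_SIZE);
            }
        } else {
            fprintf(stderr, "btl_sm_eager_limit not set explicitly; "
                            "the built-in default is in effect\n");
        }
    }

    MPI_Finalize();
    return 0;
}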
On 12/06/2010 04:31 PM, Eugene Loh wrote:
> Mathieu Gontier wrote:
>> Nevertheless, one can observe differences between MPICH and
>> OpenMPI of 25% to 100%, depending on the options we use in
>> our software. Tests are run on a single SGI node with 6 or 12
>> processes, and thus I am focused on the sm option.
> Is it possible to narrow our focus here a little? E.g., are there
> particular MPI calls that are much more expensive with OMPI than
> MPICH? Is the performance difference observable with a simple ping-pong
> test?
>> So, I have two questions:
>> 1/ can the option --mca mpool_sm_max_size=XXXX change anything
>> (I am wondering whether the value is too small and, as a consequence,
>> a set of small messages is sent instead of a big one)?
> There was recent related discussion on this mailing list;
> check the OMPI FAQ for more info.
> This particular parameter disappeared with OMPI 1.3.2.
> To move messages as bigger chunks, try btl_sm_eager_limit and
> btl_sm_max_send_size.
>> 2/ is there a difference between --mca btl tcp,sm,self and --mca btl
>> self,sm,tcp (or not setting any explicit MCA option)?
> I think tcp,sm,self and self,sm,tcp will be the same. Without an
> explicit MCA btl choice, it depends on what BTL choices are available.