Subject: Re: [OMPI devel] RFC: Force Slurm to use PMI-1 unless PMI-2 is specifically requested
From: Artem Polyakov (artpol84_at_[hidden])
Date: 2014-05-07 22:22:41

That is interesting. I think I will reproduce your experiments on my
system when I am testing the PMI selection logic; given your resource
counts, that should be feasible. I will publish my results to the list.
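For concreteness, the kind of test I have in mind is a trivial MPI
program launched directly under srun, so that Slurm's PMI plugin does
the wire-up. What follows is only a sketch (the file name and the
node/task counts are my assumptions, not Chris's exact setup):

/* pmi_hello.c -- minimal program for exercising PMI wire-up.
 * The program is trivial on purpose: the interesting part is the
 * launch, where Slurm's PMI-1 or PMI-2 support handles the job
 * wire-up before MPI_Init returns.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Launched, for example, as "srun --mpi=pmi2 -N 70 --ntasks-per-node=16
./pmi_hello" to request PMI-2 explicitly, versus a plain srun launch;
timing MPI_Init at increasing node counts is where the PMI-1 vs PMI-2
scaling difference should become visible.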

2014-05-08 8:51 GMT+07:00 Christopher Samuel <samuel_at_[hidden]>:

> On 07/05/14 18:00, Ralph Castain wrote:
> > Interesting - how many nodes were involved? As I said, the bad
> > scaling becomes more evident at a fairly high node count.
> Our x86-64 systems have low node counts (we've got BG/Q for capacity);
> the cluster those tests were run on has 70 nodes, each with 16
> cores, so I suspect we're a long, long way from that pain point.
> All the best!
> Chris
> --
> Christopher Samuel Senior Systems Administrator
> VLSCI - Victorian Life Sciences Computation Initiative
> Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545

Best regards, Artem Y. Polyakov