Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] One additional (unwanted) process when using dynamical process management
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-01-21 13:38:48

If you can, 1.3 would certainly be a good step to take. I'm not sure
why 1.2.5 would be behaving this way, though, so it may indeed be
something in the application (perhaps in the info key being passed to
us?) that is the root cause.
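For context, the "info key" mentioned above refers to the MPI_Info argument of MPI_Comm_spawn, which Open MPI consults for placement hints when launching child processes. A minimal sketch of a caller passing such a key (the "worker" binary name and "node01" host value are hypothetical placeholders, not taken from the thread):

```c
/* Sketch: spawning workers with an MPI_Info placement hint.
 * The executable name and host value are placeholders. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    /* Placement hint consulted by Open MPI when launching children;
     * an unexpected or malformed key here is one place where
     * surprising process placement could originate. */
    MPI_Info_set(info, "host", "node01");

    /* Spawn 2 copies of "worker" (hypothetical binary name). */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, info,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```

Checking what info keys (if any) Global Arrays' ARMCI layer sets before calling MPI_Comm_spawn would be one way to test Ralph's hypothesis.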

Still, if it isn't too much trouble, moving to 1.3 will provide you
with a better platform for dynamic process management regardless.


On Jan 21, 2009, at 11:30 AM, Evgeniy Gromov wrote:

> Dear Ralph,
> Thanks for your reply.
> I encountered this problem using openmpi-1.2.5,
> on an Opteron cluster with Myrinet-mx. I tried
> different compilers (gfortran, intel, pathscale)
> to compile Global Arrays; the result is the same.
> As I mentioned in the previous message, GA itself works
> fine, but the application that uses it doesn't work
> correctly if it runs on several nodes. If it runs on
> one node with several cores, everything is fine. So I
> thought that the problem might be in this additional
> process.
> Should I try to use the latest 1.3 version of openmpi?
> Best,
> Evgeniy
> Ralph Castain wrote:
>> Not that I've seen. What version of OMPI are you using, and on what
>> type of machine/environment?
>> On Jan 21, 2009, at 11:02 AM, Evgeniy Gromov wrote:
>>> Dear OpenMPI users,
>>> I have the following problem related to OpenMPI:
>>> I have recently compiled the new (4-1) Global Arrays
>>> package with OpenMPI using ARMCI_NETWORK=MPI-SPAWN,
>>> which implies the use of the dynamic process management
>>> introduced in MPI-2. It compiled and tested successfully.
>>> However, when it spawns processes across different nodes,
>>> one additional process appears on each node, i.e. with
>>> nodes=2:ppn=2 there are 3 running processes on each node.
>>> When it runs on just one PC with a few cores (say
>>> nodes=1:ppn=4), the number of processes exactly equals
>>> the number of CPUs (ppn) requested and there is no
>>> additional process.
>>> I am wondering whether this is normal behavior. Thanks!
>>> Best regards,
>>> Evgeniy
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
> --
> _______________________________________
> Dr. Evgeniy Gromov
> Theoretische Chemie
> Physikalisch-Chemisches Institut
> Im Neuenheimer Feld 229
> D-69120 Heidelberg
> Germany
> Telefon: +49/(0)6221/545263
> Fax: +49/(0)6221/545221
> E-mail: evgeniy_at_[hidden]
> _______________________________________