Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Dynamic processes connection and segfault on MPI_Comm_accept
From: Grzegorz Maj (maju3_at_[hidden])
Date: 2010-04-17 18:24:39


Yes, I know. The problem is that I have to launch my processes through
a special mechanism provided by the environment I'm working in, and
unfortunately I can't use mpirun.

2010/4/18 Ralph Castain <rhc_at_[hidden]>:
> Guess I don't understand why you can't use mpirun - all it does is start things, provide a means to forward I/O, etc. It mostly sits there quietly, using no CPU unless required to support the job.
>
> Sounds like it would solve your problem. Otherwise, I know of no way to get all these processes into comm_world.
>
>
> On Apr 17, 2010, at 2:27 PM, Grzegorz Maj wrote:
>
>> Hi,
>> I'd like to dynamically create a group of processes communicating
>> via MPI. Those processes need to be started without mpirun and to
>> form an intracommunicator after startup. Any ideas on how to do this
>> efficiently?
>> I came up with a solution in which the processes connect one by one
>> using MPI_Comm_connect, but unfortunately all the processes already
>> in the group have to call MPI_Comm_accept. This means that when the
>> n-th process wants to connect, I need to gather all n-1 existing
>> processes in the MPI_Comm_accept call. After I've started about 40
>> processes, every subsequent call takes longer and longer, which I'd
>> like to avoid.
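>>
>> A minimal sketch of this scheme (assuming, for illustration, that the
>> first process opens a port with MPI_Open_port, each newcomer receives
>> the port string out of band through argv, and the target group size
>> is known up front):
>>
>> #include <mpi.h>
>> #include <string.h>
>>
>> int main(int argc, char **argv)
>> {
>>     char port[MPI_MAX_PORT_NAME];
>>     MPI_Comm group;                  /* current intracommunicator */
>>     MPI_Init(&argc, &argv);
>>
>>     if (argc > 1) {                  /* newcomer: port passed in argv */
>>         MPI_Comm inter;
>>         strncpy(port, argv[1], MPI_MAX_PORT_NAME);
>>         MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
>>         MPI_Intercomm_merge(inter, 1, &group); /* newcomer ranks last */
>>         MPI_Comm_free(&inter);
>>     } else {                         /* first process: open the port */
>>         MPI_Open_port(MPI_INFO_NULL, port);
>>         /* ... publish 'port' out of band (file, env, ...), not shown */
>>         group = MPI_COMM_SELF;
>>     }
>>
>>     const int total = 40;            /* assumed target group size */
>>     int size;
>>     MPI_Comm_size(group, &size);
>>     while (size < total) {           /* every member joins each accept */
>>         MPI_Comm inter, merged;
>>         MPI_Comm_accept(port, MPI_INFO_NULL, 0, group, &inter);
>>         MPI_Intercomm_merge(inter, 0, &merged);
>>         MPI_Comm_free(&inter);
>>         if (group != MPI_COMM_SELF)
>>             MPI_Comm_free(&group);
>>         group = merged;
>>         MPI_Comm_size(group, &size);
>>     }
>>
>>     /* 'group' now spans all the processes */
>>     if (group != MPI_COMM_SELF)
>>         MPI_Comm_free(&group);
>>     MPI_Finalize();
>>     return 0;
>> }
>>
>> Since the existing members merge with high=0, the original root keeps
>> rank 0 after every merge and is always the root of the accept, so the
>> port string only has to be meaningful on that process.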
>> Another problem with this solution is that when I try to connect the
>> 66th process, the root of the existing group segfaults in
>> MPI_Comm_accept. Maybe it's a bug of mine, but it's odd, since
>> everything works fine for up to 65 processes. Is there some
>> limitation I don't know about?
>> My last question is about MPI_COMM_WORLD. When I run my processes
>> without mpirun, their MPI_COMM_WORLD is the same as MPI_COMM_SELF.
>> Is there any way to change MPI_COMM_WORLD and set it to the
>> intracommunicator that I've created?
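>>
>> (What I'd like to avoid is carrying my own stand-in around everywhere
>> instead of MPI_COMM_WORLD - just a sketch, MY_COMM_WORLD and my_rank
>> are my own names:
>>
>> #include <mpi.h>
>>
>> /* a global stand-in, since MPI_COMM_WORLD is a predefined handle
>>    and, as far as I know, cannot be rebound */
>> MPI_Comm MY_COMM_WORLD = MPI_COMM_NULL;
>>
>> /* every helper has to use the stand-in instead of MPI_COMM_WORLD */
>> int my_rank(void)
>> {
>>     int rank;
>>     MPI_Comm_rank(MY_COMM_WORLD, &rank);
>>     return rank;
>> }
>>
>> where MY_COMM_WORLD would be set to the merged intracommunicator
>> after the group is built.)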
>>
>> Thanks,
>> Grzegorz Maj