
Open MPI User's Mailing List Archives


From: Tim Prins (tprins_at_[hidden])
Date: 2007-10-22 21:06:23

Hi Ides,

Thanks for the report and reminder. I have filed a ticket on this, and you
should receive email as it is updated.

I do not know of any more elegant way to work around this at the moment.



On Friday 19 October 2007 06:31:53 am idesbald van den bosch wrote:
> Hi,
> I've run into the same problem as discussed in the thread Lev Gelb: "Re:
> [OMPI users] Recursive use of "orterun" (Ralph H Castain)".
> I am running a parallel Python code; from Python I launch a parallel C++
> program using the Python os.system command, and then I come back into
> Python and keep going.
> With LAM/MPI there is no problem with this.
> But Open MPI systematically crashes, because the Python os.system command
> launches the C++ program with the same OMPI_* environment variables as the
> Python program. As discussed in the thread, I have tried filtering out the
> OMPI_* variables prior to launching the C++ program with an os.execve
> command, but then it fails to return control to Python and instead simply
> terminates when the C++ program ends.
> There is a workaround: create a *.sh file with the following lines:
> --------
> for i in $(env | grep OMPI_MCA | sed 's/=/ /' | awk '{print $1}')
> do
>     unset $i
> done
> # now the C++ call
> mpirun -np 2 ./MoM/communicateMeshArrays
> ----------
> and then call the *.sh program through the python os.system command.
> What I would like to know is whether this "problem" will get fixed in
> Open MPI. Is there another way to solve this issue elegantly? Meanwhile, I
> will stick to the ugly *.sh hack listed above.
> Cheers
> Ides
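
The environment filtering described in the quoted message can also be done
directly from Python, by launching the inner mpirun through subprocess with a
cleaned copy of os.environ. Unlike os.execve, subprocess.call returns control
to the calling script when the child exits. The following is only a minimal
sketch: it assumes a Python with the subprocess module available, reuses the
mpirun command line from the quoted *.sh workaround, and assumes that
stripping every OMPI_* variable is enough, which may depend on the Open MPI
version in use.

--------
import os
import subprocess

def run_without_ompi_env(cmd):
    # Copy the current environment, dropping every OMPI_* variable so the
    # inner mpirun does not inherit state from the outer one.
    clean_env = dict((k, v) for k, v in os.environ.items()
                     if not k.startswith('OMPI_'))
    # subprocess.call blocks until cmd exits and returns its exit code,
    # so the Python script keeps running afterwards (unlike os.execve).
    return subprocess.call(cmd, env=clean_env)

# Same inner launch as in the quoted *.sh workaround.
run_without_ompi_env(['mpirun', '-np', '2', './MoM/communicateMeshArrays'])
--------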