
Subject: Re: [OMPI users] Handling output of processes
From: jody (jody.xha_at_[hidden])
Date: 2009-01-26 03:11:13

I have written some shell scripts which make it easy to redirect the output
of each process to its own xterm, for normal execution,
for gdb, and for valgrind.

In order for the xterms to be shown on your machine,
you have to set the DISPLAY variable on every host
(if this is not done by ssh)
  export DISPLAY=myhost:0.0

On myhost you may have to allow access:
  xhost +<host-name>
for each machine in your hostfile.
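The xhost step above can be scripted over the hostfile. A minimal sketch (the hostfile contents and host names here are made up; this version only prints the commands so you can review them before running):

```shell
#!/bin/sh
# Hypothetical example hostfile with two entries:
printf 'node01 slots=2\nnode02 slots=2\n' > hostfile

# For each machine in the hostfile, print the xhost command that would
# grant it access to the local X server (run the output on myhost).
while read -r host _; do
    printf 'xhost +%s\n' "$host"
done < hostfile
```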

Then start your application, exporting DISPLAY to all hosts:
  mpirun -np 12 -x DISPLAY myApp arg1 arg2 arg3

I've attached these little scripts to this mail.
Feel free to use them.
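Since attachments do not survive in this archive, here is a minimal sketch of what such a wrapper could look like (hypothetical; the actual attached scripts may differ, and OMPI_COMM_WORLD_RANK is an Open MPI-specific environment variable):

```shell
#!/bin/sh
# run_in_xterm: hypothetical per-rank wrapper. mpirun launches the wrapper
# once per rank; each instance opens its own xterm running the real
# application, or falls back to direct execution when no X display is set.
# Intended use: mpirun -np 12 -x DISPLAY ./wrapper.sh myApp arg1 arg2 arg3
run_in_xterm() {
    if [ -n "$DISPLAY" ] && command -v xterm >/dev/null 2>&1; then
        # One xterm per rank; Open MPI exports OMPI_COMM_WORLD_RANK.
        xterm -T "rank ${OMPI_COMM_WORLD_RANK:-?}" -e "$@"
    else
        # No X display available: run the program directly instead.
        "$@"
    fi
}
```

The same pattern extends to the gdb and valgrind cases by prefixing `"$@"` with the tool of choice.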

I've started working on my "complicated" way, i.e.
wrappers redirecting output via sockets to a server.


On Sun, Jan 25, 2009 at 1:20 PM, Ralph Castain <rhc_at_[hidden]> wrote:
> For those of you following this thread:
> I have been impressed by the various methods used to grab the output from
> processes. Since this is clearly something of interest to a broad audience,
> I would like to try and make this easier to do by adding some options to
> mpirun. Coming in 1.3.1 will be --tag-output, which will automatically tag
> each line of output with the rank of the process - this was already in the
> works, but obviously doesn't meet the needs expressed here.
> I have done some preliminary work on a couple of options based on this thread:
> 1. spawn a screen and redirect process output to it, with the ability to
> request separate screens for each specified rank. Obviously, specifying all
> ranks would be the equivalent of replacing "my_app" on the mpirun cmd line
> with "xterm my_app". However, there are cases where you only need to see the
> output from a subset of the ranks, and that is the intent of this option.
> 2. redirect output of specified processes to files using the provided
> filename appended with ".rank". You can do this for all ranks, or a
> specified subset of them.
> 3. timestamp output
> Is there anything else people would like to see?
> It is also possible to write a dedicated app such as Jody described, but
> that is outside my purview for now due to priorities. However, I can provide
> technical advice to such an effort, so feel free to ask.
> Ralph
> On Jan 23, 2009, at 12:19 PM, Gijsbert Wiesenekker wrote:
>> jody wrote:
>>> Hi
>>> I have a small cluster consisting of 9 computers (8x2 CPUs, 1x4 CPUs).
>>> I would like to be able to observe the output of the processes
>>> separately during an mpirun.
>>> What I currently do is have mpirun launch a shell script which
>>> opens an xterm for each process,
>>> which then starts the actual application.
>>> This works, but is a bit complicated, e.g. finding the window you're
>>> interested in among 19 others.
>>> So I was wondering: is there a way to capture the processes'
>>> outputs separately, so
>>> I can make an application in which I can switch between the different
>>> process outputs?
>>> I could imagine that could be done by wrapper applications which
>>> redirect the output over a TCP
>>> socket to a server application.
>>> But perhaps there is an easier way, or something like this already
>>> exists?
>>> Thank You
>>> Jody
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>> For C I use a printf wrapper function that writes the output to a logfile.
>> I derive the name of the logfile from the mpi_id. It prefixes the lines with
>> a time-stamp, so you also get some basic profile information. I can send you
>> the source code if you like.
>> Regards,
>> Gijsbert
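Gijsbert's per-rank logfile idea can also be sketched as a shell wrapper rather than a C printf wrapper (a hypothetical variant, not his actual source: the filename pattern is made up, and it assumes Open MPI's OMPI_COMM_WORLD_RANK environment variable):

```shell
#!/bin/sh
# Hypothetical shell take on the per-rank logfile approach: run the real
# command with stdout redirected to a logfile named after the rank, each
# line prefixed with a timestamp for basic profiling information.
# Intended use: mpirun -np 12 ./log_wrapper.sh myApp arg1 arg2
rank="${OMPI_COMM_WORLD_RANK:-0}"   # rank exported by Open MPI's mpirun
logfile="output.log.$rank"          # made-up pattern: <name>.<rank>

run_logged() {
    "$@" | while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done > "$logfile"
}

run_logged echo "starting computation"
```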