Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Problem with X forwarding
From: Reuti (reuti_at_[hidden])
Date: 2008-06-09 13:26:53


Hi,

Am 09.06.2008 um 18:13 schrieb Dave Grote:

> I had this same issue a while ago. Search for "x11 forwarding" in
> the archives. The solution I settled on is to use the -d (debug)
> option. With this option, mpirun keeps the ssh sessions open, so
> the X forwarding stays active. Note that you do get lots of
> debugging output at the start of the run, but after that there's no
> extra output. An enhancement ticket was supposed to be filed to add
> a command-line option that keeps the ssh sessions open without
> having to turn debugging on. I never heard anything more about it,
> so apparently nothing happened. But using the -d option does work
> well and doesn't require any extra fiddling.
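A minimal sketch of the -d workaround described above (the hostfile name, process count, and executable are placeholders taken from elsewhere in this thread; expect verbose diagnostic output during startup):

```shell
# Enabling mpirun's debug mode keeps the ssh sessions to the remote
# nodes open for the lifetime of the job, so the X forwarding that ssh
# set up stays active.
mpirun -d -hostfile myhostfile -np 2 ./DistributedData
```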

maybe an additional line in your ~/.ssh/config (the last one below,
ServerAliveInterval) can do a similar thing. I usually have the
following in there:

Host *
     ForwardAgent yes
     ForwardX11 yes
     ForwardX11Trusted yes
     Compression yes
     NoHostAuthenticationForLocalhost yes
     ServerAliveInterval 900

-- Reuti

> Dave
>
> Allen Barnett wrote:
>> If you are using a recent version of Linux (as machine A), the X
>> server is probably started with its TCP network connection turned
>> off. For example, if you do:
>>
>> $ ps auxw | grep X
>> /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7
>>
>> The "-nolisten tcp" option turns off the X server's remote
>> connection socket. Also, "netstat -atp" on A will show that nothing
>> is listening on port 6000. So, for example, from machine B:
>>
>> [B]$ xlogo -display A:0
>>
>> doesn't work.
>>
>> The trick I've used: before you run your MPI application, ssh to
>> the remote node with X forwarding enabled ("ssh -Y"). On the remote
>> system, do "echo $DISPLAY" to see what DISPLAY environment variable
>> ssh created. For example, it might be something like
>> "localhost:10.0". Leave this ssh connection open and then run your
>> OMPI application in another window, passing
>> "-x DISPLAY=localhost:10.0" through MPI. X applications on the
>> remote node *should* now be able to connect back through the open
>> ssh connection. This probably won't scale very well, though.
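The steps Allen describes can be sketched as follows (the node name is a placeholder, and the DISPLAY value shown is illustrative; use whatever value the `echo $DISPLAY` step actually prints on your system):

```shell
# Terminal 1: open a trusted X-forwarded ssh session and leave it open.
ssh -Y nodeB
echo $DISPLAY          # note the value ssh created, e.g. localhost:10.0

# Terminal 2: run the MPI job, setting DISPLAY on the remote ranks to
# the value observed above so they render through the open ssh tunnel.
mpirun -hostfile myhostfile -np 2 -x DISPLAY=localhost:10.0 ./DistributedData
```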
>>
>> Allen
>>
>> On Wed, 2008-06-04 at 14:36 -0400, Jeff Squyres wrote:
>>
>>> In general, Open MPI doesn't have anything to do with X
>>> forwarding. However, if you're using ssh to start up your
>>> processes, ssh may configure X forwarding for you (depending on
>>> your local system setup). But OMPI closes down ssh channels once
>>> applications have launched (there's no need to keep them open), so
>>> any X forwarding that may have been set up will be closed down.
>>>
>>> The *easiest* way to set up X forwarding is simply to allow X
>>> connections to your local host from the node(s) that will be
>>> running your application. E.g., use the "xhost" command to add the
>>> target nodes to the access list. And then have mpirun export a
>>> suitable DISPLAY variable, such as:
>>>
>>> export DISPLAY=my_hostname:0
>>> mpirun -x DISPLAY ...
>>>
>>> The "-x DISPLAY" clause tells Open MPI to export the value of the
>>> DISPLAY variable to all nodes when running your application.
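Putting the pieces together, the approach Jeff outlines might look like this (node and host names are placeholders; note that `xhost +name` grants those hosts access to your local X server, so remove them again with `xhost -name` when done):

```shell
# On your workstation: allow X connections from the compute nodes.
xhost +nodeA +nodeB

# Point DISPLAY at your workstation's X server and have mpirun export
# the variable's value to all ranks.
export DISPLAY=my_hostname:0
mpirun -x DISPLAY -hostfile myhostfile -np 2 ./DistributedData
```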
>>>
>>> Hope this helps.
>>>
>>>
>>> On May 30, 2008, at 1:24 PM, Cally K wrote:
>>>
>>>
>>>> hi, I have some problems running DistributedData.cxx (it is a VTK
>>>> file); I need to be able to see the rendering from my computer.
>>>>
>>>> However, I have problems running the executable. I loaded the
>>>> executable onto 2 machines,
>>>>
>>>> and I am accessing it from my computer (DHCP enabled).
>>>>
>>>> After running the following command (I use Open MPI):
>>>>
>>>> mpirun -hostfile myhostfile -np 2 -bynode ./DistributedData
>>>>
>>>> I keep getting these errors:
>>>>
>>>> ERROR: In /home/kalpanak/Installation_Files/VTKProject/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 326
>>>> vtkXOpenGLRenderWindow (0x8664438): bad X server connection.
>>>>
>>>> ERROR: In /home/kalpanak/Installation_Files/VTKProject/VTK/Rendering/vtkXOpenGLRenderWindow.cxx, line 169
>>>> vtkXOpenGLRenderWindow (0x8664438): bad X server connection.
>>>>
>>>> [vrc1:27394] *** Process received signal ***
>>>> [vrc1:27394] Signal: Segmentation fault (11)
>>>> [vrc1:27394] Signal code: Address not mapped (1)
>>>> [vrc1:27394] Failing at address: 0x84
>>>> [vrc1:27394] [ 0] [0xffffe440]
>>>> [vrc1:27394] [ 1] ./DistributedData(_ZN22vtkXOpenGLRenderWindow20GetDesiredVisualInfoEv+0x229) [0x8227e7d]
>>>> [vrc1:27394] [ 2] ./DistributedData(_ZN22vtkXOpenGLRenderWindow16WindowInitializeEv+0x340) [0x8226812]
>>>> [vrc1:27394] [ 3] ./DistributedData(_ZN22vtkXOpenGLRenderWindow10InitializeEv+0x29) [0x82234f9]
>>>> [vrc1:27394] [ 4] ./DistributedData(_ZN22vtkXOpenGLRenderWindow5StartEv+0x29) [0x82235eb]
>>>> [vrc1:27394] [ 5] ./DistributedData(_ZN15vtkRenderWindow14DoStereoRenderEv+0x1a) [0x82342ac]
>>>> [vrc1:27394] [ 6] ./DistributedData(_ZN15vtkRenderWindow10DoFDRenderEv+0x427) [0x8234757]
>>>> [vrc1:27394] [ 7] ./DistributedData(_ZN15vtkRenderWindow10DoAARenderEv+0x5b7) [0x8234d19]
>>>> [vrc1:27394] [ 8] ./DistributedData(_ZN15vtkRenderWindow6RenderEv+0x690) [0x82353b4]
>>>> [vrc1:27394] [ 9] ./DistributedData(_ZN22vtkXOpenGLRenderWindow6RenderEv+0x52) [0x82245e2]
>>>> [vrc1:27394] [10] ./DistributedData [0x819e355]
>>>> [vrc1:27394] [11] ./DistributedData(_ZN16vtkMPIController19SingleMethodExecuteEv+0x1ab) [0x837a447]
>>>> [vrc1:27394] [12] ./DistributedData(main+0x180) [0x819de78]
>>>> [vrc1:27394] [13] /lib/libc.so.6(__libc_start_main+0xe0) [0xb79c0fe0]
>>>> [vrc1:27394] [14] ./DistributedData [0x819dc21]
>>>> [vrc1:27394] *** End of error message ***
>>>> mpirun noticed that job rank 0 with PID 27394 on node .... exited on signal 11 (Segmentation fault).
>>>>
>>>>
>>>> Maybe I am not doing the X forwarding properly, but has anyone
>>>> ever encountered the same problem? It works fine on one PC, and I
>>>> read the mailing list, but I just don't know if my problem is
>>>> similar to theirs. I even tried changing the DISPLAY environment
>>>> variable.
>>>>
>>>>
>>>> This is what I want to do:
>>>>
>>>> my mpirun should run on 2 machines (A and B) and I should be able
>>>> to view the output (on my PC).
>>>> Are there any specific commands to use?
>>>>
>>>> _______________________________________________
>>>> users mailing list
>>>> users_at_[hidden]
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>
>>>