This page is part of a frozen web archive of this mailing list.
You can still navigate around this archive, but know that no new mails
have been added to it since July of 2016.
Yeah, there is something funny about the way Ubuntu defines its
Ethernet interfaces that is causing a problem. I have a patch that
fixes it; it will be in 1.3.2.
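For readers hitting this before 1.3.2: one plausible culprit (an assumption on my part, not something confirmed by the patch) is that stock Ubuntu maps the machine's hostname to 127.0.1.1 in /etc/hosts rather than to a real interface address, which can make a launcher misjudge whether a target host is local. A quick way to check:

```shell
# Check how the local hostname resolves. On stock Ubuntu installs the
# hostname is commonly mapped to 127.0.1.1 in /etc/hosts instead of a
# real interface address, which can confuse local-vs-remote detection.
hostname
getent hosts "$(hostname)" || echo "hostname does not resolve"
grep 127.0.1.1 /etc/hosts || echo "no 127.0.1.1 mapping in /etc/hosts"
```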
On Mar 24, 2009, at 5:24 PM, Simone Pellegrini wrote:
> Hello everyone,
> I have the same problem when I try to install Open MPI 1.3.1 on my
> laptop (Ubuntu 8.10 running on a dual-core machine).
> I did the same installation on Ubuntu 8.04 and everything worked, but
> here, no matter what I do, every time I run mpirun the system prompts
> me for a password.
> In fact, when I install Open MPI as a superuser and try to run
> mpirun (or mpicc), I get the following errors:
> @eNerd:~$ mpicc
> Cannot open configuration file /usr/share/openmpi/mpicc-wrapper-
> Error parsing data file mpicc: Not found
> @eNerd:~$ mpirun --np 2 ls
> mpirun: symbol lookup error: mpirun: undefined symbol: orted_cmd_line
> mpiexec, on the other hand, works but asks for a password:
> @eNerd:~/Desktop/openmpi-1.3.1$ mpiexec --np 2 ls
> @enerd's password:
> acinclude.m4 config.log Doxyfile Makefile ompi
> aclocal.m4 config.status examples Makefile.am opal
> cheers, Simone
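Both errors above are consistent with a stale or mixed installation: the mpicc wrapper cannot find its data file, and mpirun is resolving symbols against an older library. A few diagnostic commands, as a sketch (the /usr/share/openmpi path comes from the error message above; your prefix may differ):

```shell
# See which binaries the shell actually picks up, and which MPI
# libraries mpirun links against; a 1.3.1 mpirun loading 1.2.x
# libraries would explain the undefined-symbol error.
command -v mpirun mpicc || echo "mpirun/mpicc not on PATH"
if command -v mpirun >/dev/null 2>&1; then
    ldd "$(command -v mpirun)" | grep -i mpi || true
fi
# Wrapper data files normally live under <prefix>/share/openmpi/.
ls /usr/share/openmpi/ 2>/dev/null || echo "no wrapper data files found"
```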
> Ralph Castain wrote:
>> One thing you might want to try is blowing away that prefix dir and
>> reinstalling OMPI 1.3.1. I'm not confident that "make uninstall"
>> does an adequate job of cleaning things out. The problem is that
>> there are major differences between 1.2.x and 1.3.x, and the
>> uninstall may well miss some things as a result.
>> Easy place to start, at least. ;-)
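Concretely, "blowing away that prefix dir and reinstalling" could look like the sketch below. It assumes the --prefix=$HOME/software and configure flags Olaf mentions elsewhere in the thread, plus a hypothetical source directory $HOME/openmpi-1.3.1; adjust both before use:

```shell
# CAUTION: this removes the entire install prefix. Only do this if
# nothing but Open MPI lives under it.
PREFIX="$HOME/software"
SRC="$HOME/openmpi-1.3.1"   # hypothetical unpack location; adjust

if [ -d "$SRC" ]; then
    rm -rf "$PREFIX"
    cd "$SRC"
    ./configure --prefix="$PREFIX" --enable-shared --enable-static
    make all && make install
else
    echo "source tree $SRC not found; set SRC to your openmpi-1.3.1 dir"
fi
```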
>> On Mar 23, 2009, at 11:51 AM, Olaf Lenz wrote:
>>> Ralph Castain wrote:
>>>> I regularly run jobs like that on 1.3.1 - it has no desire to use
>>>> ssh to start anything. When running on the local host, as this
>>>> command does, all mpiexec does is fork/exec the procs.
>>> That sounds strange. I'm just going back and forth between OpenMPI
>>> 1.2.9 and OpenMPI 1.3.1 by using make uninstall/make install, and
>>> I can always reproduce the behavior.
>>>> It sounds like something strange is going on in your environment
>>>> that makes OMPI think it is launching on a remote host. Most
>>>> likely cause is
>>>> something in your Ethernet configuration. Can you send us the
>>>> output of ifconfig (or whatever your equivalent is)?
>>> Ok, here is some information on my system:
>>> * Kubuntu 9.04 (Jaunty) alpha 6
>>> * Core Duo CPU
>>> * I have compiled both OpenMPI versions (1.2.9 and 1.3.1) myself,
>>> configure --prefix=$HOME/software --enable-shared --enable-static
>>> Output of ifconfig:
>>> > ifconfig
>>> eth0      Link encap:Ethernet  HWaddr 00:1e:37:15:1b:70
>>>           inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:
>>>           inet6 addr: fe80::21e:37ff:fe15:1b70/64 Scope:Link
>>>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>           RX packets:34185 errors:0 dropped:0 overruns:0 frame:0
>>>           TX packets:29386 errors:0 dropped:0 overruns:0 carrier:0
>>>           collisions:0 txqueuelen:100
>>>           RX bytes:25645492 (25.6 MB)  TX bytes:3921545 (3.9 MB)
>>>
>>> lo        Link encap:Local Loopback
>>>           inet addr:127.0.0.1  Mask:255.0.0.0
>>>           inet6 addr: ::1/128 Scope:Host
>>>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>>>           RX packets:5372 errors:0 dropped:0 overruns:0 frame:0
>>>           TX packets:5372 errors:0 dropped:0 overruns:0 carrier:0
>>>           collisions:0 txqueuelen:0
>>>           RX bytes:661715 (661.7 KB)  TX bytes:661715 (661.7 KB)
>>>
>>> tun0      Link encap:UNSPEC  HWaddr
>>>           inet addr:10.20.143.6  P-t-P:10.20.143.5  Mask:
>>>           UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
>>>           RX packets:2234 errors:0 dropped:0 overruns:0 frame:0
>>>           TX packets:3158 errors:0 dropped:0 overruns:0 carrier:0
>>>           collisions:0 txqueuelen:100
>>>           RX bytes:940549 (940.5 KB)  TX bytes:207091 (207.0 KB)
>>> You need anything else?
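One observation on the output above: tun0 is a point-to-point (VPN-style) tunnel, and interfaces like that are exactly the kind of thing that tends to confuse interface matching. As an interim workaround, Open MPI's TCP components accept if_include/if_exclude MCA parameters to restrict which interfaces are used; the invocation below is a sketch, and parameter behavior can vary between versions:

```shell
# Keep Open MPI's out-of-band and TCP byte-transfer layers off the VPN
# tunnel; alternatively use oob_tcp_if_include/btl_tcp_if_include to
# whitelist eth0 (and lo) instead.
if command -v mpirun >/dev/null 2>&1; then
    mpirun --mca oob_tcp_if_exclude tun0 \
           --mca btl_tcp_if_exclude tun0 \
           -np 2 hostname
else
    echo "mpirun not on PATH"
fi
```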
>>> users mailing list