I am trying to run OpenMPI-1.6.5 on a Linux system based on the ARM Cortex-A9.
The Linux system and the hardware are provided by Xilinx Inc.; for those who
may have related experience, the system is called Zynq, an embedded SoC with
ARM cores and FPGA fabric. Xilinx provides a cross-compiler for the system,
which I used to compile Open MPI, and the compilation was successful. Here is
the configuration command I used:
./configure --build=arm-linux-gnueabi --host=armv7-linux-gnueabi \
--disable-mpi-f77 --disable-mpi-f90 \
--disable-mpi-cxx --prefix=`pwd`/install \
--with-devel-headers --enable-binaries \
--enable-shared --enable-static \
--disable-mmap-shmem --disable-posix-shmem --disable-sysv-shmem
For the cross-compiler, I have set the environment variables "CC" and
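For concreteness, the environment setup looked roughly like the sketch below; the toolchain prefix is an assumption on my part (use whatever prefix the Xilinx toolchain binaries actually carry on your PATH):

```shell
# Hedged sketch: pointing configure at the cross-toolchain before running it.
# The prefix "arm-xilinx-linux-gnueabi-" is an assumption; substitute the
# actual name of your cross-gcc/g++ binaries.
export CC=arm-xilinx-linux-gnueabi-gcc
export CXX=arm-xilinx-linux-gnueabi-g++
```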
When I launch 'mpirun' on the ARM Linux system, I get an error like this:
It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
--> Returned value -1 instead of OPAL_SUCCESS
[ZC702:01353] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file
runtime/orte_init.c at line 79
[ZC702:01353] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file orterun.c
at line 694
I have compressed the output of 'ompi_info --all' into the attachment.
Some more context: I have been tuning the configuration settings for a
while, and I am afraid some of them may not be appropriate. My general goal
is to enable message passing across several such chips connected via
Ethernet, so I will not launch more than one process on any single machine.
That is why I wanted to disable shared-memory support, although doing so
did not change the outcome.
I also got many error messages about MCA failing to find components, which
is why I tried disabling dlopen.
I am also looking for suggestions. Basically, I want to compile a "clean"
version of Open MPI with only the core message-passing support, which might
avoid much of the cross-compilation headache.
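What I have in mind is something like the sketch below; the flag set is my own guess at a minimal-footprint build (taken from the 1.6-series configure options), and the install prefix is just an example, so please correct me if any of this is wrong:

```shell
# Hedged sketch of a minimal cross-compile configure.  A static-only build
# folds all MCA components into libmpi.a, so nothing needs dlopen at run
# time; Fortran and C++ bindings are dropped since I only need the C API.
./configure --host=arm-linux-gnueabi \
    --disable-mpi-f77 --disable-mpi-f90 --disable-mpi-cxx \
    --disable-dlopen --enable-static --disable-shared \
    --prefix=$HOME/openmpi-arm-install
```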
While searching through the documentation, I came across Portable Hardware
Locality (hwloc); however, the idea is completely new to me, so I do not
know whether it is relevant to my case.
Thank you in advance for your suggestions! Please let me know if I need to
provide further information about my system.
Di Wu (Allan)
VAST Laboratory (http://vast.cs.ucla.edu/),
Department of Computer Science, UC Los Angeles