Thank you very much guys. Now a more serious issue:
I am using MPI with LAMMPS (a molecular dynamics package) on a single rack-mounted Dell
PowerEdge R810 server (4 eight-core processors, 128 GB of RAM).
I am now potentially interested in buying the Intel MPI 4.1 libraries, and I am trying them out via
the 30-day trial. However, I am not seeing any significant performance improvement from
the Intel MPI libraries compared with Open MPI (compiled with the Intel compilers).
Here is the working makefile configuration for the Intel MPI 4.1 compilers:
CC = /opt/intel/impi/4.1.0.024/intel64/bin/mpiicpc
CCFLAGS = -O -DMPICH_IGNORE_CXX_SEEK -DMPICH_SKIP_MPICXX
And here is the Open MPI one:
CC = /usr/local/bin/mpicc
CCFLAGS = -O -mpicc
I also tried the -O3 flag, but I saw no significant difference in performance.
Now, I would consider buying the Intel MPI libraries, provided this brought a significant
performance increase over Open MPI.
I have evidence that there is room for improvement: on an HP Z650 with two 6-core processors
(the clock frequency is the same, and the comparison tests were parallel runs on 8 cores),
LAMMPS under the same conditions improves by nearly 70% when using the proprietary
HP MPI libraries.
The reason is that you aren't actually running Open MPI: those error messages are coming from
MPICH. Check your PATH and make sure the Open MPI install location comes first, or use the
absolute path to the Open MPI mpirun.

_______________________________________________
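To see why PATH order decides which launcher runs, recall that the shell searches PATH strictly left to right and stops at the first match. Here is a small sketch of that behaviour; the first_hit helper and the fake install directories are hypothetical, purely for illustration:

```shell
# Hypothetical helper (not part of any MPI package): walk a PATH-like
# string and report which directory would supply a command first,
# mimicking the shell's left-to-right lookup order.
first_hit() {
    cmd=$1
    searchpath=$2
    oldifs=$IFS; IFS=':'
    for d in $searchpath; do
        if [ -x "$d/$cmd" ]; then
            IFS=$oldifs
            printf '%s\n' "$d/$cmd"
            return 0
        fi
    done
    IFS=$oldifs
    return 1
}

# Demo with two fake installs: whichever directory is listed first wins.
base=$(mktemp -d)
mkdir -p "$base/impi/bin" "$base/ompi/bin"
touch "$base/impi/bin/mpirun" "$base/ompi/bin/mpirun"
chmod +x "$base/impi/bin/mpirun" "$base/ompi/bin/mpirun"
first_hit mpirun "$base/impi/bin:$base/ompi/bin"   # the impi-style dir wins
first_hit mpirun "$base/ompi/bin:$base/impi/bin"   # reordering flips the result
rm -rf "$base"
```

On the real machine, `which mpirun` answers the same question for the actual PATH.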
I have built Open MPI 1.6 with the Intel compilers (2013 versions). Compilation was smooth;
however, even running the simple program hello.c fails:
mpirun -np 4 ./hello_c.x
[firstname.lastname@example.org] HYDU_create_process (./utils/launch/launch.c:102): execvp error on file /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/pmi_proxy (No such file or directory)
[email@example.com] HYD_pmcd_pmiserv_proxy_init_cb (./pm/pmiserv/pmiserv_cb.c:1177): assert (!closed) failed
[firstname.lastname@example.org] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[email@example.com] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:358): error waiting for event
[firstname.lastname@example.org] main (./ui/mpich/mpiexec.c:689): process manager error waiting for completion
Before that there was an additional error: the file mpivars.sh was also missing from /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/.
I managed to create one myself, though, and it worked:
if [ -z "`echo $PATH | grep /usr/local/bin`" ]; then
    PATH=/usr/local/bin:$PATH; export PATH; fi
if [ -z "`echo $LD_LIBRARY_PATH | grep /usr/local/lib`" ]; then
    if [ -n "$LD_LIBRARY_PATH" ]; then LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    else LD_LIBRARY_PATH=/usr/local/lib; fi; export LD_LIBRARY_PATH; fi
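A hand-written mpivars.sh of that style can be sanity-checked without touching the real environment. The sketch below recreates such a guard in a temporary directory, sources it under a controlled PATH, and verifies the prepend took effect; the file name and contents are my reconstruction of the usual pattern, not Intel's shipped file:

```shell
# Recreate a guard-style mpivars.sh in a temp dir (reconstructed
# pattern, for illustration only).
tmp=$(mktemp -d)
cat > "$tmp/mpivars.sh" <<'EOF'
if [ -z "`echo $PATH | grep /usr/local/bin`" ]; then
    PATH=/usr/local/bin:$PATH; export PATH
fi
EOF

# Source it under a minimal PATH so the guard definitely fires,
# then check that /usr/local/bin now leads the search path.
savedpath=$PATH
PATH=/usr/bin:/bin
. "$tmp/mpivars.sh"
case ":$PATH:" in
    *:/usr/local/bin:*) result="PATH ok" ;;
    *)                  result="PATH missing" ;;
esac
PATH=$savedpath
echo "$result"
rm -rf "$tmp"
```

If the script is correct, this prints "PATH ok"; the same check applies after sourcing the real file in a login shell.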
I do not have any clue about how to generate the file pmi_proxy.
Thank you in advance for your help!
users mailing list