This page is part of a frozen web archive of this mailing list; no new mails have been added to it since July 2016.
Thank you very much guys. Now a more serious issue:
I am using MPI with LAMMPS (a molecular dynamics package) on a single
rack-mounted Dell PowerEdge R810 server
(four eight-core processors, 128 GB of RAM).
I am now potentially interested in buying the Intel MPI 4.1 libraries,
and I am trying them out during the 30-day trial. However, I am not seeing
any significant performance improvement from the Intel MPI libraries
with respect to Open MPI (compiled with the Intel compilers).
Here is the working (makefile) configuration for Intel MPI 4.1:
CC = /opt/intel/impi/4.1.0.024/intel64/bin/mpiicpc
CCFLAGS = -O -DMPICH_IGNORE_CXX_SEEK -DMPICH_SKIP_MPICXX
And here is the Open MPI one:
CC = /usr/local/bin/mpicc
CCFLAGS = -O -mpicc
I also tried the -O3 flag, but I detected no significant difference in
performance.
Now, I would consider buying the Intel MPI libraries, provided this
brought a significant increase in performance with respect to Open MPI.
I have evidence that there is room for improvement: running the same
LAMMPS job on an HP Z650 with two 6-core processors (the clock frequency is
about the same, and the comparison tests were parallel runs using 8 cores),
performance improves by nearly 70%
when using the proprietary HP MPI libraries.
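One rough way to quantify that gap, once both builds exist, is to time the same 8-core LAMMPS run under each MPI stack. This is only a sketch: the binary names `lmp_openmpi` / `lmp_intelmpi` and the input `bench/in.lj` are placeholder assumptions, not names from this thread.

```shell
# Time the same 8-core LAMMPS run under each MPI build.
# lmp_openmpi, lmp_intelmpi and bench/in.lj are placeholder names.
for lmp in ./lmp_openmpi ./lmp_intelmpi; do
    if [ -x "$lmp" ]; then
        echo "timing $lmp"
        /usr/bin/time -p mpirun -np 8 "$lmp" -in bench/in.lj
    else
        echo "skipping $lmp (not built here)"
    fi
done
```

Comparing the reported wall-clock times on an otherwise idle machine gives a fairer picture than a single run, since MPI performance on a shared-memory box can vary run to run.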
2012/10/27 Ralph Castain <rhc_at_[hidden]>
> The reason is that you aren't actually running Open MPI - those error
> messages are coming from MPICH. Check your path and ensure you put the OMPI
> install location first, or use the absolute path to the OMPI mpirun.
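The path check suggested above can be sketched as follows (a diagnostic sketch; install locations vary between systems):

```shell
# Show which mpirun is first on PATH and which MPI stack it reports.
if command -v mpirun >/dev/null 2>&1; then
    echo "mpirun found at: $(command -v mpirun)"
    mpirun --version 2>&1 | head -n 1
else
    echo "no mpirun on PATH"
fi
```

If the reported path is not under the Open MPI install prefix, prepend that prefix's `bin` directory to `PATH`, or invoke mpirun by absolute path as suggested.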
> On Oct 27, 2012, at 8:46 AM, Giuseppe P. <istruzione_at_[hidden]> wrote:
> I have built open mpi 1.6 with Intel compilers (2013 versions).
> Compilation was smooth; however, execution fails even for
> the simple hello.c program:
> mpirun -np 4 ./hello_c.x
> [mpiexec_at_claudio.ukzn] HYDU_create_process (./utils/launch/launch.c:102):
> execvp error on file
> /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/pmi_proxy (No such file
> or directory)
> [mpiexec_at_claudio.ukzn] HYD_pmcd_pmiserv_proxy_init_cb
> (./pm/pmiserv/pmiserv_cb.c:1177): assert (!closed) failed
> [mpiexec_at_claudio.ukzn] HYDT_dmxu_poll_wait_for_event
> (./tools/demux/demux_poll.c:77): callback returned error status
> [mpiexec_at_claudio.ukzn] HYD_pmci_wait_for_completion
> (./pm/pmiserv/pmiserv_pmci.c:358): error waiting for event
> [mpiexec_at_claudio.ukzn] main (./ui/mpich/mpiexec.c:689): process manager
> error waiting for completion
> Before that, there was an additional error: the file mpivars.sh
> was also missing from /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/.
> However, I managed to create one myself, and it worked:
> if [ -z "`echo $PATH | grep /usr/local/bin`" ]; then
>     export PATH=/usr/local/bin:$PATH
> fi
> if [ -z "`echo $LD_LIBRARY_PATH | grep /usr/local/lib`" ]; then
>     if [ -n "$LD_LIBRARY_PATH" ]; then
>         export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
>     else
>         export LD_LIBRARY_PATH=/usr/local/lib
>     fi
> fi
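The guard in a script like this can also be written without the grep subshell; a minimal sketch of the same idempotent-prepend idea for the PATH case:

```shell
# Prepend /usr/local/bin to PATH only if it is not already there.
case ":$PATH:" in
    *:/usr/local/bin:*) : ;;                  # already present, do nothing
    *) export PATH=/usr/local/bin:$PATH ;;
esac
echo "$PATH" | grep -q "/usr/local/bin" && echo "ok"
```

Wrapping PATH in colons before matching avoids false positives from entries that merely contain `/usr/local/bin` as a substring.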
> I do not have any clue about how to generate the file pmi_proxy.
> Thank you in advance for your help!
> users mailing list