Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-04-29 09:55:35


That confirms what we were saying: your application is not compiled against OMPI.

You'll have to dig a little to figure out why that is happening; it could be a path issue.
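
For illustration, a few quick checks along those lines (a.out here is just the executable name used in this thread):

  which mpicc mpirun
  mpicc -showme
  ldd ./a.out | grep -i mpi
  ompi_info | grep "Open MPI:"

Open MPI's mpicc understands -showme and prints the underlying compile line, which shows which headers and libraries the wrapper really points at; ldd should show the executable linked against the libmpi from your Open MPI prefix rather than some other MPI installation.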

On Apr 29, 2010, at 2:56 AM, Pankatz, Klaus wrote:

> Hi Ralph,
>
> thanks for your advice. I have now configured Open MPI with ./configure --prefix=... --enable-debug.
> Following your suggestion, I ran the hello_world with
> mpirun -np 4 -mca ess_base_verbose 5, and the output is as follows:
> I don't know what happened there...
>
> ******
> [marvin:00373] mca:base:select:( ess) Querying component [env]
> [marvin:00373] mca:base:select:( ess) Skipping component [env]. Query failed to return a module
> [marvin:00373] mca:base:select:( ess) Querying component [hnp]
> [marvin:00373] mca:base:select:( ess) Query of component [hnp] set priority to 100
> [marvin:00373] mca:base:select:( ess) Querying component [singleton]
> [marvin:00373] mca:base:select:( ess) Skipping component [singleton]. Query failed to return a module
> [marvin:00373] mca:base:select:( ess) Querying component [slurm]
> [marvin:00373] mca:base:select:( ess) Skipping component [slurm]. Query failed to return a module
> [marvin:00373] mca:base:select:( ess) Querying component [tool]
> [marvin:00373] mca:base:select:( ess) Skipping component [tool]. Query failed to return a module
> [marvin:00373] mca:base:select:( ess) Selected component [hnp]
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
> ****
>
> ________________________________________
> From: users-bounces_at_[hidden] [users-bounces_at_[hidden]] on behalf of Ralph Castain [rhc_at_[hidden]]
> Sent: Friday, 23 April 2010 17:04
> To: Open MPI Users
> Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output
>
> Is this build configured with --enable-debug? If not, can you reconfigure it?
>
> If you can, you could run it with -mca ess_base_verbose 5 to see if it is picking up the correct modules.
>
> It really looks like your application was built with an older version, or compiled against something like MPICH.
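>
> Concretely, that would be roughly (substitute your own install prefix; these are the standard steps, nothing system-specific):
>
> ./configure --prefix=<your-prefix> --enable-debug
> make all install
> mpirun -np 4 -mca ess_base_verbose 5 ./a.out
>
> The verbose output lists which ess components get queried and selected, which tells us whether the processes are being launched by this Open MPI at all.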
>
> On Apr 23, 2010, at 8:54 AM, Pankatz, Klaus wrote:
>
> All right, I ran mpirun -np 4 env, and I see OMPI_COMM_WORLD_RANK 0 to 3. So far so good.
> OMPI_COMM_WORLD_SIZE=4 every time, which I think is correct.
> OMPI_MCA_mpi_yield_when_idle=0 every time
> OMPI_MCA_orte_app_num=0 every time
>
> On 23.04.2010, at 14:54, Terry Dontje wrote:
>
> OK, can you do an "mpirun -np 4 env"? You should see OMPI_COMM_WORLD_RANK range from 0 through 3. I am curious whether you even see OMPI_* env vars at all and, if you do, whether this one is 0 for all procs.
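>
> As a concrete example (the grep just trims the output):
>
> mpirun -np 4 env | grep OMPI_COMM_WORLD_RANK
>
> should print four lines, OMPI_COMM_WORLD_RANK=0 through OMPI_COMM_WORLD_RANK=3, if the four processes are really being started by this Open MPI.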
>
> --td
>
> Pankatz, Klaus wrote:
>
> Yeah, I'm sure that I'm using the right mpirun.
>
> which mpirun points to /usr/users/pankatz/OPENmpi/bin/mpirun, which is the right one.
> ________________________________________
> From: users-bounces_at_[hidden] [users-bounces_at_[hidden]] on behalf of Terry Dontje [terry.dontje_at_[hidden]]
> Sent: Friday, 23 April 2010 14:29
> To: Open MPI Users
> Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output
>
> This looks like you are using an mpirun or mpiexec from MVAPICH to run an executable compiled with OMPI. Can you make sure that you are using the right mpirun?
>
> --td
>
> Pankatz, Klaus wrote:
>
> Yes, I did that.
>
> It is basically the same problem with a Fortran version of this little program; for that I used Open MPI's mpif90 command.
> ________________________________________
> From: users-bounces_at_[hidden] [users-bounces_at_[hidden]] on behalf of Reuti [reuti_at_[hidden]]
> Sent: Friday, 23 April 2010 14:15
> To: Open MPI Users
> Subject: Re: [OMPI users] mpirun -np 4 hello_world; on an eight-processor shared memory machine produces wrong output
>
> Hi,
>
> On 23.04.2010, at 14:06, Pankatz, Klaus wrote:
>
>
>
> Hi all,
>
> there's a problem with Open MPI on my machine. When I simply try to run this little hello_world program on multiple processors, the output isn't as expected:
> *****
> C code:
> #include <mpi.h>
> #include <stdio.h>
> #include <unistd.h>
> int main(int argc, char **argv)
> {
>     int size, rank;
>     char hostname[50];
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // Who am I?
>     MPI_Comm_size(MPI_COMM_WORLD, &size);  // How many processes?
>     gethostname(hostname, 50);
>     printf("Hello World! I'm number %2d of %2d running on host %s\n",
>            rank, size, hostname);
>     MPI_Finalize();
>     return 0;
> }
> ****
>
> Command: mpirun -np 4 a.out
>
>
>
> Is the mpirun (better: use mpiexec) the one from Open MPI, and did you also use its mpicc to compile the program?
>
> -- Reuti
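>
> As an aside, one way to rule out any mix-up is to call the wrappers by their full paths; the prefix below is the one reported elsewhere in this thread, and hello.c stands for the source file above:
>
> /usr/users/pankatz/OPENmpi/bin/mpicc hello.c -o a.out
> /usr/users/pankatz/OPENmpi/bin/mpirun -np 4 ./a.out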
>
>
>
>
> Output:
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
> Hello World! I'm number 0 of 1 running on host marvin
>
> It should be more or less:
> Hello World! I'm number 1 of 4 running on host marvin
> Hello World! I'm number 2 of 4 running on host marvin
> ....
>
> Open MPI version 1.4.1, compiled with Lahey Fortran 95 (lf95).
> Open MPI was built "out of the box", only changing to the Lahey compiler with a setenv $FC lf95
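>
> Spelled out (assuming the command above was meant as setenv FC lf95, and using the install prefix seen earlier in this thread), the build would look roughly like:
>
> setenv FC lf95
> ./configure --prefix=/usr/users/pankatz/OPENmpi
> make all install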
>
> The System: Linux marvin 2.6.27.6-1 #1 SMP Sat Nov 15 20:19:04 CET 2008 x86_64 GNU/Linux
>
> Compiler: Lahey/Fujitsu Linux64 Fortran Compiler Release L8.10a
>
> Thanks very much!
> Klaus
>
>
>
>
>
>
>
>
> --
> Terry D. Dontje | Principal Software Engineer
> Developer Tools Engineering | +1.650.633.7054
> Oracle - Performance Technologies
> 95 Network Drive, Burlington, MA 01803
> Email terry.dontje_at_[hidden]
>
>
>
>
>
> --
> Terry D. Dontje | Principal Software Engineer
> Developer Tools Engineering | +1.650.633.7054
> Oracle - Performance Technologies
> 95 Network Drive, Burlington, MA 01803
> Email terry.dontje_at_[hidden]
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users