Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] mpirun runs in serial even I set np to several processors
From: Elken, Tom (tom.elken_at_[hidden])
Date: 2014-04-14 17:04:10


That’s OK. Many of us make that mistake, though often as a typo.
One thing that helps is that the correct spelling of Open MPI has a space in it, but OpenMP does not.
If you're not aware of what OpenMP is, here is a link: http://openmp.org/wp/

What makes it more confusing is that more and more applications offer the option of running in a hybrid mode, such as WRF, with OpenMP threads running over MPI ranks within the same executable. And sometimes that MPI is Open MPI.
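
For example, a hybrid run is typically launched by setting the OpenMP thread count separately from the MPI rank count (a minimal sketch, assuming an executable built with both MPI and OpenMP enabled):

    export OMP_NUM_THREADS=4    # OpenMP threads per MPI rank
    mpirun -np 2 ./wrf.exe      # 2 MPI ranks x 4 threads = 8 cores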

Cheers,
-Tom

From: users [mailto:users-bounces_at_[hidden]] On Behalf Of Djordje Romanic
Sent: Monday, April 14, 2014 1:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] mpirun runs in serial even I set np to several processors

OK guys... Thanks for all this info. Frankly, I didn't know these differences between OpenMP and Open MPI. The commands:
which mpirun
which mpif90
which mpicc
give, respectively:
/usr/bin/mpirun
/usr/bin/mpif90
/usr/bin/mpicc
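
These paths alone don't say which MPI implementation the wrappers belong to, but the wrappers can usually report it themselves. A minimal check (flags as provided by Open MPI and MPICH; other implementations may differ):

    mpirun --version    # prints the implementation name and version
    mpif90 -show        # MPICH-style wrappers print the underlying compile command
    mpif90 --showme     # the Open MPI equivalent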
A tutorial on how to compile WRF (http://www.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php) provides a test program to verify MPI. I ran the program and it gave me the output of a successful run, which is:
---------------------------------------------
C function called by Fortran
Values are xx = 2.00 and ii = 1
status = 2
SUCCESS test 2 fortran + c + netcdf + mpi
---------------------------------------------
It uses mpif90 and mpicc for compiling. Below is the output of 'ldd ./wrf.exe':

    linux-vdso.so.1 => (0x00007fff584e7000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4d160ab000)
    libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007f4d15d94000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4d15a97000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4d15881000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4d154c1000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f4d162e8000)
    libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f4d1528a000)
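
A quick way to check whether an MPI library is linked at all is to filter that output (assuming a dynamically linked executable; a dmpar build against Open MPI would normally pull in something like libmpi.so):

    ldd ./wrf.exe | grep -i mpi    # no output means no MPI library is linked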

On Mon, Apr 14, 2014 at 4:09 PM, Gus Correa <gus_at_[hidden]> wrote:
Djordje

Your WRF configure file seems to use mpif90 and mpicc (line 115 and following).
In addition, it also seems to have DISABLED OpenMP (the one with no trailing "I")
(lines 109-111, where the OpenMP stuff is commented out).
So, it looks to me like your intent was to compile with MPI.

Whether it is THIS MPI (Open MPI) or another MPI (say MPICH, or MVAPICH,
or Intel MPI, or Cray, or ...) only your environment can tell.

What do you get from these commands:

which mpirun
which mpif90
which mpicc

I never built WRF here (but other people here use it).
Which input do you provide to the command that generates the configure
script that you sent before?
Maybe the full command line will shed some light on the problem.


I hope this helps,
Gus Correa

On 04/14/2014 03:11 PM, Djordje Romanic wrote:
to get help :)



On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djordje8_at_[hidden]> wrote:

    Yes, but I was hoping to get. :)


    On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres)
    <jsquyres_at_[hidden]> wrote:

        If you didn't use Open MPI, then this is the wrong mailing list
        for you. :-)

        (this is the Open MPI users' support mailing list)


        On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djordje8_at_[hidden]> wrote:

> I didn't use OpenMPI.
>
>
> On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres)
> <jsquyres_at_[hidden]> wrote:
> This can also happen when you compile your application with
> one MPI implementation (e.g., Open MPI), but then mistakenly use
> the "mpirun" (or "mpiexec") from a different MPI implementation
> (e.g., MPICH).
>
>
> On Apr 14, 2014, at 2:32 PM, Djordje Romanic
> <djordje8_at_[hidden]> wrote:
>
> > I compiled it with: x86_64 Linux, gfortran compiler with
> > gcc (dmpar). dmpar - distributed memory option.
> >
> > Attached is the self-generated configuration file. The
> > architecture specification settings start at line 107. I didn't
> > use Open MPI (shared memory option).
> >
> >
> > On Mon, Apr 14, 2014 at 1:23 PM, Dave Goodell (dgoodell)
> > <dgoodell_at_[hidden]> wrote:
> > On Apr 14, 2014, at 12:15 PM, Djordje Romanic
> > <djordje8_at_[hidden]> wrote:
> >
> > > When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
> > > -------------------------------------------------
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > starting wrf task 0 of 1
> > > -------------------------------------------------
> > > This indicates that it is not using 4 processors, but 1.
> > >
> > > Any idea what might be the problem?
> >
> > It could be that you compiled WRF with a different MPI
> > implementation than you are using to run it (e.g., MPICH vs.
> > Open MPI).
> >
> > -Dave
> >
> > <configure.wrf>

        --
        Jeff Squyres
        jsquyres_at_[hidden]

        For corporate legal information go to:
        http://www.cisco.com/web/about/doing_business/legal/cri/

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users
