
Subject: Re: [OMPI users] Res: Res: Res: Res: Gromacs run in parallel
From: Gus Correa (gus_at_[hidden])
Date: 2010-06-08 16:34:57


Hi

Have you tried to compile and run the simple examples
that come with OpenMPI?

Oftentimes they will tell you right away whether there are problems with
your PATH or your LD_LIBRARY_PATH, or whether the OpenMPI software can
be reached by all of your nodes (and not only by the head node), etc.

The little time you spend on this boring routine may become the big time
you save when troubleshooting a more complex application like Gromacs.
At a minimum, it will tell you whether OpenMPI is really working
on your system.

You can find the example programs in the "examples" directory of the
OpenMPI tarball.
My favorite is connectivity_c.c, which tests pair-wise communication
across all processes, but there are other examples there.

To compile it, do:

mpicc connectivity_c.c

To run it with verbose output, say, on 4 processes, do:

mpirun -np 4 ./a.out -v

The output is self-explanatory.
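
If your compute nodes are listed in a hostfile, you can also spread the
test across them. A quick sketch (the hostfile name "myhosts" is just a
placeholder; put one node name per line in it):

mpirun -np 4 --hostfile myhosts ./a.out -v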

Another useful test is to just run hostname through mpirun, say:

mpirun -np 4 hostname

to see if all hosts respond.
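
For example (the node names below are just placeholders), with a hostfile
listing node01 and node02 you might see something like:

mpirun -np 4 --hostfile myhosts hostname
node01
node01
node02
node02

If one of the nodes never shows up, that is where to start looking.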

I hope this helps,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

lauren wrote:
> Can a problem with versions or an incompatibility lead to errors like:
> "Unable to start a daemon on the local node"
> and
> "ompi_mpi_init: ort_init failed"
>
> ??
>
> thanks
>
>
> ------------------------------------------------------------------------
> *From:* "Addepalli, Srirangam V" <srirangam.v.addepalli_at_[hidden]>
> *To:* Open MPI Users <users_at_[hidden]>
> *Sent:* Tuesday, June 8, 2010 13:59:08
> *Subject:* Re: [OMPI users] Res: Res: Res: Gromacs run in parallel
>
> Hello,
>
> ldd `which mdrun_mpi`
>
> should show you which libraries the binary is linked against. What does
> the above command show for your build?
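>
> As a rough illustration of what to look for (the library path below is
> just a placeholder; what matters is which installation the libmpi comes
> from):
>
> ldd `which mdrun_mpi` | grep -i mpi
>         libmpi.so.0 => /opt/openmpi/lib/libmpi.so.0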
>
> I had a user who had a serial mdrun in his path and it did the same.
>
> Rangam
>
> ________________________________________
> From: users-bounces_at_[hidden] [users-bounces_at_[hidden]] On
> Behalf Of lauren [owenlany_at_[hidden]]
> Sent: Tuesday, June 08, 2010 11:36 AM
> To: Open MPI Users
> Subject: [OMPI users] Res: Res: Res: Gromacs run in parallel
>
> Hi,
> I did it and they match.....
> mdrun and mpiexec are in the same place.
> Seems ok...
> One more suggestion?
>
> thank you,
>
>
>
>
>
> ________________________________
> From: Carsten Kutzner <ckutzne_at_[hidden]>
> To: Open MPI Users <users_at_[hidden]>
> Sent: Tuesday, June 8, 2010 13:12:35
> Subject: Re: [OMPI users] Res: Res: Gromacs run in parallel
>
> Ok,
>
> 1. Type 'which mdrun' to see where the mdrun executable resides.
> 2. Type ldd `which mdrun` to find out against which MPI library it is linked.
> 3. Type 'which mpirun' (or 'which mpiexec', whatever you use) to verify that
> this is the right MPI launcher for your mdrun.
> 4. If the MPIs do not match, either use the right mpiexec or recompile
> Gromacs with the current MPI.
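>
> Put together, a quick check could look like this (a sketch; use mdrun or
> mdrun_mpi, whichever binary you actually launch):
>
> which mdrun_mpi
> ldd `which mdrun_mpi` | grep -i libmpi
> which mpirun
> mpirun --version
>
> If the libmpi that mdrun_mpi is linked against does not come from the same
> installation as the mpirun you launch with, that is the mismatch to fix.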
>
> Carsten
>
>
> On Jun 8, 2010, at 5:50 PM, lauren wrote:
>
> I saw
> Host: <somename> pid: <somepid> nodeid: 0 nnodes: 1
>
> so it really is running on 1 node,
> and all of you really understood my problem, thanks.
>
> But how can I fix it?
> How can I run 1 job on 4 nodes...?
> I really need help.
> I took a look at my files and erased all the errors, and the
> implementation seems correct.
> From the beginning, please,
> because all the tutorials only explain the same thing, which looks right.
> And thanks very much for this help!
>
>
>
> ________________________________
> From: Jeff Squyres <jsquyres_at_[hidden]>
> To: Open MPI Users <users_at_[hidden]>
> Sent: Tuesday, June 8, 2010 10:30:03
> Subject: Re: [OMPI users] Res: Gromacs run in parallel
>
> No, I'm sorry -- I wasn't clear. What I meant was that if you run:
>
> mpirun -np 4 my_mpi_application
>
> 1. If you see a single, 4-process MPI job (regardless of how many
> nodes/servers it's spread across), then all is good. This is what you want.
>
> 2. But if you're seeing 4 independent 1-process MPI jobs (again,
> regardless of how many nodes/servers they are spread across), it's
> possible that you compiled your application with MPI implementation X
> and then used the "mpirun" from MPI implementation Y.
>
> You will need X==Y to make it work properly -- i.e., to see case #1,
> above. I mention this because your first post mentioned that you're
> seeing the same job run 4 times. This implied to me that you are
> running into case #2. If I misunderstood your problem, then ignore me
> and forgive the noise.
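>
> If the two turn out not to match, a rough sketch of rebuilding Gromacs
> 4.0.x against Open MPI would be something like the following (the Open MPI
> path is a placeholder; check the Gromacs install notes for the details):
>
> export PATH=/path/to/openmpi/bin:$PATH
> ./configure --enable-mpi --program-suffix=_mpi
> make && make install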
>
>
>
> On Jun 8, 2010, at 9:20 AM, Carsten Kutzner wrote:
>
> > On Jun 8, 2010, at 3:06 PM, Jeff Squyres wrote:
> >
> > > I know nothing about Gromacs, but you might want to ensure that
> your Gromacs was compiled with Open MPI. A common symptom of "mpirun
> -np 4 my_mpi_application" running 4 1-process MPI jobs (instead of 1
> 4-process MPI job) is that you compiled my_mpi_application with one MPI
> implementation, but then used the mpirun from a different MPI
> implementation.
> > >
> > Hi,
> >
> > this can be checked by looking at the Gromacs output file md.log. The
> second line should
> > read something like
> >
> > Host: <somename> pid: <somepid> nodeid: 0 nnodes: 4
> >
> > Lauren, you will want to ensure that nnodes is 4 in your case, and not 1.
> >
> > You can also easily test that without any input file by typing
> >
> > mpirun -np 4 mdrun -h
> >
> > and then you should see
> >
> > NNODES=4, MYRANK=1, HOSTNAME=<...>
> > NNODES=4, MYRANK=2, HOSTNAME=<...>
> > NNODES=4, MYRANK=3, HOSTNAME=<...>
> > NNODES=4, MYRANK=4, HOSTNAME=<...>
> > ...
> >
> >
> > Carsten
> >
> >
> > >
> > > On Jun 8, 2010, at 8:59 AM, lauren wrote:
> > >
> > >>
> > >> The version of Gromacs is 4.0.7.
> > >> This is the first time that I am using Gromacs, so excuse me if I'm
> > >> talking nonsense.
> > >>
> > >> Which part of the md.log output should I post?
> > >> after or before the input description?
> > >>
> > >> thanks for all,
> > >> and sorry
> > >>
> > >> From: Carsten Kutzner <ckutzne_at_[hidden]>
> > >> To: Open MPI Users <users_at_[hidden]>
> > >> Sent: Sunday, June 6, 2010 9:51:26
> > >> Subject: Re: [OMPI users] Gromacs run in parallel
> > >>
> > >> Hi,
> > >>
> > >> which version of Gromacs is this? Could you post the first lines of
> > >> the md.log output file?
> > >>
> > >> Carsten
> > >>
> > >>
> > >> On Jun 5, 2010, at 10:23 PM, lauren wrote:
> > >>
> > >>> Sorry for my English..
> > >>>
> > >>> I want to know how I can run Gromacs in parallel!
> > >>> Because when I used
> > >>>
> > >>> mdrun &
> > >>> mpiexec -np 4 mdrun_mpi -v -deffnm em
> > >>>
> > >>> to run the minimization on 4 cores, all cores did the same job,
> > >>> again!
> > >>> They don't run together.
> > >>> I want them all to run in parallel to make the job faster.
> > >>>
> > >>>
> > >>> What could be wrong?
> > >>>
> > >>> Thanks a lot!
> > >>>
> > >>>
> > >>>
> > >>
> > >>
> > >>
> > >
> > >
> > > --
> > > Jeff Squyres
> > > jsquyres_at_[hidden]
> > > For corporate legal information go to:
> > > http://www.cisco.com/web/about/doing_business/legal/cri/
> > >
> > >
> >
> >
> > --
> > Dr. Carsten Kutzner
> > Max Planck Institute for Biophysical Chemistry
> > Theoretical and Computational Biophysics
> > Am Fassberg 11, 37077 Goettingen, Germany
> > Tel. +49-551-2012313, Fax: +49-551-2012302
> > http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne
> >
> >
> >
> >
> >
> >
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne