
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 0.
From: kishore kumar (kishoreguptaos_at_[hidden])
Date: 2010-04-28 21:02:26


Many many thanks.

Best,
Kishore Kumar Pusukuri
http://www.cs.ucr.edu/~kishore

On 28 April 2010 18:00, Martin Siegert <siegert_at_[hidden]> wrote:

> Yes, I am quite sure that you need at least 16GB to run SPEC MPIM2007.
> See the FAQ at http://www.spec.org/mpi2007/docs/faq.html#MemoryMedium
> Furthermore, the benchmark is designed to run on at least 16 ranks.
>
> Cheers,
> Martin
>
> --
> Martin Siegert
> Head, Research Computing
> WestGrid Site Lead
> IT Services phone: 778 782-4691
> Simon Fraser University fax: 778 782-4242
> Burnaby, British Columbia email: siegert_at_[hidden]
> Canada V5A 1S6
>
> On Wed, Apr 28, 2010 at 05:47:52PM -0700, kishore kumar wrote:
> >
> > Oh..... Thank you for the information.
> > The machine has 6GB of RAM and I am creating 4 processes (for 4
> > cores).
> > Are you sure that it is because of lack of resources or some problem
> > with the network settings (I want to run the programs only on my
> > server)?
> > Is there any way to do this (I need to run only 4 processes for my
> > project)?
> > Thank you.
> > Best,
> > Kishore Kumar Pusukuri
> > http://www.cs.ucr.edu/~kishore
> >
> > On 28 April 2010 17:18, Martin Siegert <siegert_at_[hidden]> wrote:
> >
> > How much memory is available on that quad core machine?
> > The minimum requirements for MPIM2007 are:
> > 16GB of memory for the whole system or 1GB of memory per rank,
> > whichever is larger.
> > For MPIL2007 you need to run at least 64 processes, and a minimum of
> > 128GB (2GB/process) is required.
> > Cheers,
> > Martin
> > --
> > Martin Siegert
> > Head, Research Computing
> > WestGrid Site Lead
> > IT Services phone: 778 782-4691
> > Simon Fraser University fax: 778 782-4242
> > Burnaby, British Columbia email: siegert_at_[hidden]
> > Canada V5A 1S6
> >
> > On Wed, Apr 28, 2010 at 05:32:12AM -0500, Jeff Squyres (jsquyres)
> > wrote:
> > >
> > > I don't know much about specmpi, but it seems like it is choosing to
> > > abort. Maybe the "no room for lattice" has some meaning...?
> > > -jms
> > > Sent from my PDA. No type good.
> > >
> >
> > > _______________________________________________________________________
> > > From: users-bounces_at_[hidden]
> > > To: users_at_[hidden]
> > > Sent: Wed Apr 28 01:47:01 2010
> > > Sent: Wed Apr 28 01:47:01 2010
> > > Subject: [OMPI users] MPI_ABORT was invoked on rank 0 in
> > > communicator MPI_COMM_WORLD with errorcode 0.
> > >
> > > Hi,
> > > I am trying to run the SPEC MPI 2007 workload on a quad-core machine,
> > > but I am getting the error message below. I also tried the hostfile
> > > option by specifying localhost slots=4, but I still get the same
> > > error. Please help me.
> >
> > > $mpirun --mca btl tcp,sm,self -np 4 su3imp_base.solaris
> >
> > > SU3 with improved KS action
> > > Microcanonical simulation with refreshing
> > > MIMD version 6
> > > Machine =
> > > R algorithm
> >
> > > type 0 for no prompts or 1 for prompts
> >
> > > nflavors 2
> > > nx 30
> > > ny 30
> > > nz 56
> > > nt 84
> > > iseed 1234
> > > LAYOUT = Hypercubes, options = EVENFIRST,
> > > NODE 0: no room for lattice
> > > termination: Tue Apr 27 23:41:44 2010
> > > Termination: node 0, status = 1
> > >
> >
> > > --------------------------------------------------------------------------
> > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > > with errorcode 0.
> > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI
> > processes.
> > > You may or may not see output from other processes, depending on
> > > exactly when Open MPI kills them.
> > >
> >
> > > --------------------------------------------------------------------------
> > >
> > > --------------------------------------------------------------------------
> > > mpirun has exited due to process rank 0 with PID 17239 on
> > > node cache-aware exiting without calling "finalize". This may
> > > have caused other processes in the application to be
> > > terminated by signals sent by mpirun (as reported here).
> > > Best,
> > > Kishore Kumar Pusukuri
> >
> > > http://www.cs.ucr.edu/~kishore
> > >
> >
> > > _______________________________________________
> > > users mailing list
> > > users_at_[hidden]
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
>
>
> --
> Martin Siegert
> Head, Research Computing
> WestGrid Site Lead
> IT Services phone: 778 782-4691
> Simon Fraser University fax: 778 782-4242
> Burnaby, British Columbia email: siegert_at_[hidden]
> Canada V5A 1S6
>
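The memory sizing discussed in the thread can be turned into a quick feasibility check. The script below is an illustrative sketch: the rank count and installed RAM are the numbers from this thread (4 processes, 6GB), and the minimums (1GB per rank or 16GB total, whichever is larger) are the SPEC MPIM2007 requirements from the FAQ linked above.

```shell
#!/bin/sh
# Feasibility check for SPEC MPIM2007 memory requirements.
# Inputs are taken from this thread; adjust for your own machine.
ranks=4            # processes requested via "mpirun -np 4"
installed_gb=6     # RAM on the quad-core machine in question
per_rank_min_gb=1  # SPEC MPIM2007: at least 1 GB per rank ...
system_min_gb=16   # ... or 16 GB for the whole system, whichever is larger

# Required memory is the larger of (ranks * per-rank minimum) and the system minimum.
need_gb=$(( ranks * per_rank_min_gb ))
if [ "$need_gb" -lt "$system_min_gb" ]; then
  need_gb=$system_min_gb
fi

if [ "$installed_gb" -lt "$need_gb" ]; then
  echo "insufficient memory: have ${installed_gb} GB, need ${need_gb} GB"
else
  echo "memory OK: have ${installed_gb} GB, need ${need_gb} GB"
fi
```

With 6GB against a 16GB floor, the "no room for lattice" abort is consistent with the benchmark simply not fitting in memory, regardless of hostfile or network settings.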