
Open MPI User's Mailing List Archives



Subject: Re: [OMPI users] mpirun takes only single processor instead of multiple processors
From: Ramya Narasimhan (varsharamya_at_[hidden])
Date: 2009-02-12 23:48:24

        I have entered the IP address of the system in the hosts file (the
same IP twice, once per CPU). I don't know about this MCA parameter. Could
you tell me about it, or point me to any reference material for it?
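[The MCA parameters Ralph asks about are Open MPI's runtime configuration
settings. They can be set on the mpirun command line, in the environment, or
in a default parameter file. A minimal sketch of such a file, with a
hypothetical value (the file may not exist at all on an unmodified install):

```
# $HOME/.openmpi/mca-params.conf (or <prefix>/etc/openmpi-mca-params.conf)
# One "name = value" per line; lines starting with # are comments.
# Hypothetical example: restrict the byte-transfer layers Open MPI may use.
btl = sm,tcp,self
```

Running `ompi_info --param all all` lists every MCA parameter Open MPI knows
about, along with its current value and where that value came from.]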
Actually, the input file performs a minimization of a protein using the
CHARMM program. It does not give any error message saying which rank stopped.
The output shows that the first CHARMM process stops, while the job on the
other runs to completion.
When I checked how many CPUs CHARMM actually counts, using a command inside
CHARMM, it reported only one CPU. Is there anything wrong in the way I am
adding CPUs?
Thanks for the help.
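[In Open MPI, the usual way to tell mpirun that one node has two CPUs is a
single hostfile line with `slots=2`, rather than listing the node's IP twice.
A minimal sketch, with a hypothetical IP address:

```shell
# Hypothetical hostfile: one line per node, slots= gives the number
# of CPUs (processes) mpirun may place on that node.
cat > hosts <<'EOF'
192.168.0.10 slots=2
EOF
cat hosts
# Then the job would be launched as before (sketch):
#   mpirun -hostfile hosts -np 2 charmm < minimize.inp
```

Listing the same IP on two lines is usually also accepted and counts as two
slots, so the hostfile alone may not explain seeing only one CPU.]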

On Fri, Feb 13, 2009 at 9:49 AM, Ralph Castain <rhc_at_[hidden]> wrote:

> Could you pass along what is in your hosts file? Did you set any MCA params
> in the default MCA parameter file, or in your environment?
> I note that you redirected stdin. Which rank is running and which is
> stopped? How big is your input file? I am not familiar with your program -
> are both ranks expecting to get stdin, or only rank 0?
> Thanks
> Ralph
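[Ralph's stdin question matters because mpirun forwards stdin only to rank 0
by default, so with `charmm < file` the other rank never sees the redirected
input. A command sketch of the alternatives (the `-i` flag is hypothetical;
check the CHARMM documentation for how it accepts an input file by name):

```
# stdin reaches rank 0 only:
#   mpirun -hostfile hosts -np 2 charmm < minimize.inp
# If every rank must read the input, pass the file by name instead:
#   mpirun -hostfile hosts -np 2 charmm -i minimize.inp
```
]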
> On Feb 12, 2009, at 9:12 PM, Ramya Narasimhan wrote:
> Hi All,
>> I am a new user of Open MPI. I have installed Open MPI 1.3 on a
>> Red Hat Linux 5 system with F77 set to the gfortran compiler. I tested the
>> example programs and they all ran. When I tried the CHARMM program with
>> mpirun (2 CPUs), the job runs on a single processor and is stopped on the
>> other. I have verified that the error is not with CHARMM itself. Is there
>> any error in my MPI procedure? I started the job as
>> mpirun -hostfile hosts -np 2 charmm < *.inp
>> Thanks for any help.
>> Varsha.
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]