This page is part of a frozen web archive of this mailing list.
You can still navigate around this archive, but know that no new mails
have been added to it since July of 2016.
What Jeff said sounds right (as usual). But, I'm intrigued about one
point. Even if one did not compile for MPI, if you launch with "mpirun
-np 2 gulp", I would think you would still see two processes. They
would not be two processes of the same MPI job, but two replicas of the
same serial job. So, I'm curious what Rodolfo's second sentence ("But
when I try ...") means.
On Feb 21, 2010, at 10:25 AM, Rodolfo Chua wrote:
I used Open MPI compiled with the GNU (gcc) compiler to run the GULP code in parallel.
But when I try to input "mpirun -np 2 gulp <input>", GULP does not run on two
processors. Can you give me any suggestion on how to compile the GULP code correctly with Open MPI?
Below is the instruction from GULP code manual.
"If you wish to run the program in parallel using MPI then you will need to alter
the file "getmachine" accordingly. The usual changes would be to add the "-DMPI"
option and in some cases change the compiler name (for example to mpif77/mpif90)
or include the MPI libraries in the link stage."
I'm afraid that I don't know the GULP code in particular, but their advice is sound. Adding -DMPI sounds like something specific to their code (e.g., to activate the MPI code sections), and using mpif77 / mpif90 as your compiler name in their build process is probably the Right thing to do (e.g., instead of ifort / gfortran / pgf77 / whatever). This should build their executable with Open MPI's support libraries linked in, etc.
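The kind of "getmachine" change the manual (quoted above) describes might look like the following. This is a hypothetical sketch only: the variable names in the real getmachine script will differ, so treat it as an illustration of the idea, not a drop-in patch.

```shell
# Hypothetical excerpt of a GULP-style build configuration.
# All names here are illustrative, not copied from the real getmachine.
FC=mpif90              # was e.g. gfortran; use the Open MPI wrapper compiler
FFLAGS="-O2 -DMPI"     # -DMPI switches on GULP's parallel code sections
# Listing MPI libraries explicitly in the link stage is usually
# unnecessary, because the mpif90 wrapper adds them automatically.
```

After rebuilding this way, "mpirun -np 2 gulp <input>" should start one two-rank MPI job rather than two serial replicas.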