Please understand that I'm decent at the engineering side of this: as a system administrator, I'm a decent engineer.

On the previous configurations, this program seems to run with any number of processors.  I believe those successful users have been using LAM/MPI.  While I was waiting for a reply, I installed LAM/MPI myself; the results were similar to those from Open MPI.

While I could stay with LAM/MPI, I'd prefer to port the program to Open MPI, since that is where all the development and most of the support are.

I cannot choose the Portland compiler.  I must use either GNU or Intel compilers on the Itanium2.

Ted (more responses below)

On November 7, 2007 at 8:39 AM, Squyres, Jeff wrote:

I have tried to run a debugger, but I am not an expert at it.  I could not get Intel's idb debugger to give me a prompt, but I could get a prompt from gdb.  I've looked over the manual, but I'm not sure how to put in the breakpoints et al. that you geniuses use to evaluate a program at critical junctures.  I actually used an mpirun -np 2 dbg command to run it on 2 CPUs, and attached the file at the prompt.  When I did a run, it ran fine with no optimization and one processor.  With 2 processors, it didn't seem to do anything.  All I will say here is that I have a lot to learn.  I'm calling on my friends for help on this.
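For the record, the approach I've seen suggested for getting gdb onto an MPI job is to make each rank pause at startup, print its PID, and then attach gdb to that PID from another terminal.  This is only a sketch of that idea; the program and variable names are mine, and SLEEP/GETPID/HOSTNM are GNU Fortran extensions:

```fortran
! Sketch: pause each MPI rank until a debugger attaches.
! From another terminal: gdb -p <pid>, then "set var wait = 0" and "continue".
program debug_stub
  use mpi
  implicit none
  integer :: ierr, rank, wait
  character(len=64) :: host

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call hostnm(host)                        ! GNU extension: node name
  print *, 'rank ', rank, ' on ', trim(host), ' pid ', getpid()
  wait = 1
  do while (wait == 1)                     ! spin until gdb clears "wait"
     call sleep(1)                         ! GNU extension
  end do
  ! ... rest of the program runs under the debugger from here ...
  call MPI_Finalize(ierr)
end program debug_stub
```

With two ranks you attach two gdb sessions, one per PID, and can then set breakpoints in each before letting them run.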

It's a Fortran program.  It starts in the main program.  I inserted some PRINT* statements, of the PRINT *, 'Read the input at line 213' variety, into the main program to see what would print.  It printed the first four statements, but it didn't reach the last three.  The calls that were reached were in the set-up section of the program.  The section that wasn't reached had a lot of matrix-setting and solving subroutine calls.
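One wrinkle with PRINT* tracing under MPI: output from the two processes is interleaved and can be buffered, so it helps to tag each message with the rank and flush stdout.  A small helper along these lines (the subroutine name is my invention):

```fortran
! Sketch: rank-tagged trace prints, so output from two processes
! can be told apart and is not lost in a buffer if the run hangs.
subroutine trace(msg)
  use mpi
  implicit none
  character(len=*), intent(in) :: msg
  integer :: rank, ierr

  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  print *, 'rank ', rank, ': ', msg
  flush(6)                       ! Fortran 2003 FLUSH on stdout
end subroutine trace
```

Then, e.g., call trace('Read the input at line 213') instead of a bare PRINT*, and you can see which rank reached which point.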

I'm going to point my Intel support person to this post and see where it takes us.

You understand correctly.  I am not an expert at MPI of any sort.  Both the MPI and non-MPI versions of Hello print once for each invoked CPU.  For example,

     mpirun -np 1 mpi_hello
     mpirun -np 1 non_mpi_hello

each print one "Hello, world", and

     mpirun -np 2 mpi_hello
     mpirun -np 2 non_mpi_hello

each print two "Hello, world"s.
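That behavior is expected, as I understand it: mpirun -np 2 simply launches two copies of whatever program it is given, which is why even non_mpi_hello prints twice.  A minimal mpi_hello, roughly what I assume mine amounts to:

```fortran
! Minimal MPI hello: each launched copy (rank) prints one line.
program mpi_hello
  use mpi
  implicit none
  integer :: ierr, rank, nprocs

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! this copy's id
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr) ! total copies
  print *, 'Hello, world from rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program mpi_hello
```

The non-MPI version has no rank to report, but mpirun still starts N independent copies of it, hence N identical greetings.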

This was my mistake.  I attached an old version of ompi_info.txt; I am now attaching the correct version.  I already have 1.2.4 installed.