On Aug 19, 2012, at 12:11 PM, Bill Mulberry wrote:
> I have a large program written in FORTRAN 77 with a couple of routines
> written in C++. It has MPI commands built into it to run on a large-scale
> multiprocessor IBM system. I now have the task of transferring this
> program over to a cluster system. Both the multiprocessor and the cluster
> system have Linux hosted on them. The cluster system has GNU FORTRAN and GNU
> C compilers on it. I am told the cluster has Open MPI. I am wondering if
> anybody out there has had to do the same task, and if so, what I can expect
> from this. Will I be expected to make some big changes, etc.? Any advice
> will be appreciated.
MPI and Fortran are generally portable, meaning that if you wrote a correct MPI Fortran application, it should be immediately portable to a new system.
That being said, many applications are accidentally/inadvertently not correct. For example, when you try to compile your application on a Linux cluster with Open MPI, you may find that you accidentally used a Fortran construct that is specific to IBM's Fortran compiler and is not portable. Similarly, when you run the application, you may find that you inadvertently relied on an implicit assumption about IBM's MPI implementation that isn't true for Open MPI.
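A classic example of that second kind of hidden assumption (hypothetical -- this may or may not be what's in your code) is relying on MPI_SEND to buffer messages internally. One implementation may send small-to-medium messages "eagerly," so a send-then-receive pattern on both ranks appears to work; another implementation, or a different eager-limit setting, can make both MPI_SEND calls block waiting for a matching receive, and the program deadlocks. A sketch in Fortran 77:

```fortran
C     Hypothetical illustration only; names and sizes are made up.
C     UNSAFE: both ranks send first and assume MPI buffers the message.
C     Whether this works depends on the implementation's eager limit.
      PROGRAM XCHG
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER IERR, RANK, PEER, STAT(MPI_STATUS_SIZE)
      DOUBLE PRECISION SBUF(100000), RBUF(100000)
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
      PEER = 1 - RANK
C     Unsafe pattern: send-then-receive on BOTH ranks
      CALL MPI_SEND(SBUF, 100000, MPI_DOUBLE_PRECISION, PEER, 0,
     &              MPI_COMM_WORLD, IERR)
      CALL MPI_RECV(RBUF, 100000, MPI_DOUBLE_PRECISION, PEER, 0,
     &              MPI_COMM_WORLD, STAT, IERR)
C     A portable fix is MPI_SENDRECV, which pairs the two operations
C     and cannot deadlock on message buffering.
      CALL MPI_FINALIZE(IERR)
      END
```

Code like this is "correct by luck" on one implementation and hangs on another, which is exactly the kind of thing that only shows up when you change MPI implementations.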
...or you may find that everything just works, and you can raise a toast to the portability gods.
I expect that your build / compile / link procedure may change a bit from the old system to the new system. In Open MPI, you should be able to use "mpif77" and/or "mpif90" to compile and link everything. No further MPI-related flags are necessary (no need for -I flags to specify where mpif.h is located, no need for -lmpi, etc.).
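As a sketch (file names here are placeholders, not from your project), a build with Open MPI's wrapper compilers might look like the following. One real subtlety for a mixed Fortran/C++ application is the final link step: if you link with mpif77 you typically need to add the C++ runtime library by hand.

```shell
# Hypothetical build recipe; solver.f / interface.cc are made-up names.
# The wrapper compilers supply all MPI include paths and libraries.
mpif77 -c solver.f           # the FORTRAN 77 sources
mpicxx -c interface.cc       # the C++ routines
mpif77 -o myapp solver.o interface.o -lstdc++   # link, adding C++ runtime
mpirun -np 16 ./myapp        # launch on 16 processes
```

Alternatively, you can link with mpicxx and add the Fortran runtime library instead; either way, only one of the wrappers does the final link.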