On Jun 13, 2007, at 2:29 PM, Julian Cummings wrote:
> Thanks, I will give that a try and repost to the list if problems
> remain. I am kind of surprised that compiling with -fpic is not
> done by
> default on a Linux system, since OpenMPI builds as a set of shared
> library .so files. Normally you want position-independent code in
> libraries so that, among other reasons, static objects are handled
> properly.
It's actually more subtle than that. Open MPI itself is compiled
with -fpic if necessary, of course.
It's *your* code that has to be compiled with -fpic, which is odd /
unusual / a bug in pgCC.
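For anyone hitting this, the workaround amounts to adding -fpic when compiling the application itself. A minimal sketch (assuming the standard Open MPI wrapper compilers are on your PATH and that pgCC is the underlying C++ compiler; the exact flag spelling may vary by compiler version):

```shell
# Compile and link the user's code position-independent to avoid the
# "static object marked for destruction more than once" abort with pgCC:
mpicxx -fpic -o hello hello.c
mpirun -np 1 ./hello

# Without the MPI C++ bindings the problem does not appear, so compiling
# with the C wrapper also avoids it:
mpicc -o hello hello.c
```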
> Regards, Julian C.
> On Wed, 2007-06-13 at 11:59 -0400, Jeff Squyres wrote:
>> Bummer -- I thought I had replied to that one (perhaps I'm thinking
>> that multiple people have posted this and I've replied to some but
>> not all of them).
>> Brock is correct that using "-fpic" to compile your MPI C++ app
>> should solve the problem. This information *used* to be posted on
>> the PGI web site in their support section, but I can't seem to find
>> it any more.
>> As far as I understand the issue, it's a PGI compiler issue, not an
>> OMPI issue.
>> On Jun 13, 2007, at 12:38 AM, Julian Cummings wrote:
>>> This is a follow-up to a message originally posted by Andrew J
>>> Caird on
>>> 2006-08-16. No one ever replied to Andrew's message, and I am
>>> seeing exactly the same problem with a more recent version of OpenMPI
>>> (1.2.1) and
>>> the PGI compiler (7.0). Essentially, the problem is that if you
>>> link an MPI
>>> application against the mpi_cxx library, at run time you will get an
>>> abort, with each process giving the following message:
>>> C++ runtime abort: internal error: static object marked for
>>> destruction more
>>> than once
>>> If your MPI application does not utilize the MPI C++ bindings, you
>>> can link
>>> without this library and the runtime errors will go away.
>>> Since this problem was reported long ago and no one ever replied to
>>> the report, I would assume that this is a bug either in the mpi_cxx
>>> library or
>>> in the way it is built under the PGI compiler. I could not figure
>>> out how
>>> to submit a bug report to the open-mpi bug tracking system, so I
>>> hope that
>>> this message to the users list will suffice. I am attaching my
>>> ompi_info --all output to this message. I am running on a Myrinet-based Linux
>>> cluster, but the particulars are not relevant for this problem.
>>> You can
>>> replicate the problem with any trivial MPI application code, such
>>> as the
>>> standard "hello" program using the standard C interface. I am
>>> attaching my
>>> hello.c source code. Compile with "mpicxx -o hello hello.c" and
>>> run with
>>> "mpirun -np 1 ./hello". The runtime error disappears if you
>>> compile with
>>> "mpicc -o hello hello.c" to avoid linking against the mpi_cxx library.
>>> Please let me know if there is any fix available for this problem.
>>> Regards, Julian C.
> Dr. Julian C. Cummings E-mail:
> California Institute of Technology Phone: 626-395-2543
> 1200 E. California Blvd., Mail Code 158-79 Fax: 626-584-5917
> Pasadena, CA 91125 Office: 125 Powell-Booth