Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] PathScale problems persist
From: Ake Sandgren (ake.sandgren_at_[hidden])
Date: 2010-09-22 08:16:08


On Wed, 2010-09-22 at 07:42 -0400, Jeff Squyres wrote:
> This is a problem with the Pathscale compiler and old versions of GCC. See:
>
> http://www.open-mpi.org/faq/?category=building#pathscale-broken-with-mpi-c%2B%2B-api
>
> I note that you said you're already using GCC 4.x, but it's not clear from your text whether pathscale is using that compiler or a different GCC on the back-end. If you can confirm that pathscale *is* using GCC 4.x on the back-end, then this is worth reporting to the pathscale support people.

I have no problem running the code below when it is compiled with Open MPI
1.4.2 and PathScale 3.2.
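
On Jeff's question about the back-end: assuming PathScale defines the usual GNU
compatibility macros (__GNUC__ and friends, as GCC-compatible compilers
generally do), a small check like the sketch below, compiled with pathCC, shows
which GCC version the compiler claims compatibility with. Note this only
reports the advertised compatibility version, not necessarily the GCC
installation actually used for headers and linking, so it is only a rough
check.

#include <iostream>

int main() {
#if defined(__GNUC__) && defined(__GNUC_MINOR__)
    // Report the GCC version this compiler claims compatibility with.
    std::cout << "GNU compatibility version: " << __GNUC__ << "."
              << __GNUC_MINOR__ << std::endl;
#else
    std::cout << "No GNU compatibility macros defined." << std::endl;
#endif
    return 0;
}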

> > However, now we are having trouble with Open MPI 1.4.2, PathScale 3.2, and
> > the C++ bindings. The following code:
> >
> > #include <iostream>
> > #include <mpi.h>
> >
> > int main(int argc, char* argv[]) {
> >     int node, size;
> >
> >     MPI::Init(argc, argv);
> >     MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);
> >
> >     try {
> >         int rank = MPI::COMM_WORLD.Get_rank();
> >         int size = MPI::COMM_WORLD.Get_size();
> >
> >         std::cout << "Hello world from process " << rank << " out of "
> >                   << size << "!" << std::endl;
> >     }
> >
> >     catch (MPI::Exception e) {
> >         std::cerr << "MPI Error: " << e.Get_error_code()
> >                   << " - " << e.Get_error_string() << std::endl;
> >     }
> >
> >     MPI::Finalize();
> >     return 0;
> > }
> >
> > generates the following output:
> >
> > [host1:29934] *** An error occurred in MPI_Comm_set_errhandler
> > [host1:29934] *** on communicator MPI_COMM_WORLD
> > [host1:29934] *** MPI_ERR_COMM: invalid communicator
> > [host1:29934] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
> > --------------------------------------------------------------------------
> > mpirun has exited due to process rank 2 with PID 29934 on
> > node host1 exiting without calling "finalize". This may
> > have caused other processes in the application to be
> > terminated by signals sent by mpirun (as reported here).
> > --------------------------------------------------------------------------
> > [host1:29931] 3 more processes have sent help message
> > help-mpi-errors.txt / mpi_errors_are_fatal
> > [host1:29931] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> > all help / error messages
> >
> > There are no problems when Open MPI 1.4.2 is built with GCC (GCC 4.1.2).
> > No problems are found with Open MPI 1.2.6 and PathScale either.
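
If it helps narrow things down, a rough equivalent written against the plain C
bindings (my own sketch, not code from the report above) avoids the C++
predefined handles entirely. If this version runs cleanly under the same
PathScale build, that would point at the C++ bindings rather than the
underlying library:

#include <iostream>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);

    // Return error codes instead of aborting, roughly analogous to
    // MPI::ERRORS_THROW_EXCEPTIONS in the C++ bindings.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::cout << "Hello world from process " << rank << " out of "
              << size << "!" << std::endl;

    MPI_Finalize();
    return 0;
}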

-- 
Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
Internet: ake_at_[hidden]   Phone: +46 90 7866134 Fax: +46 90 7866126
Mobile: +46 70 7716134 WWW: http://www.hpc2n.umu.se