Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2006-10-16 10:58:55


Correction. :-(

Upgrading the version of the Intel compiler worked on one platform
that I tested on, but not others. So it looks like this is still an
open issue.

On Oct 11, 2006, at 1:46 AM, Jeff Squyres wrote:

> Tobias / all --
>
> I swear there were further mails about this topic, but perhaps they
> were off-list.
>
> The end result is that this has finally been confirmed as an Intel
> 9.1 C++ compiler bug. I don't know exactly what platforms it occurred
> on, but I was eventually able to replicate Tobias' problem on an
> EM64T machine running RHEL4U3. The problem was that the compiler was
> not initializing some private members of global C++ objects properly
> (e.g., the underlying MPI_Comm in MPI::COMM_WORLD).
>
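To make the failure mode above concrete, here is a minimal sketch of the
kind of construct involved. The class and member names are hypothetical
illustrations, not Open MPI's actual internals: MPI::COMM_WORLD is a
global C++ object whose constructor has to copy the underlying C handle
into a private member before main() runs, and a compiler that skips that
initialization leaves the member holding garbage, which later surfaces as
MPI_ERR_COMM.
----------------------------------------------------
// bug_sketch.cpp -- hypothetical illustration, not Open MPI source.
// A global C++ object (like MPI::COMM_WORLD) whose constructor must store
// the underlying C handle in a private member before main() runs.
#include "mpi.h"
#include <iostream>

using namespace std;

class CommWrapper {
public:
    explicit CommWrapper(MPI_Comm c) : handle(c) {}  // runs before main()
    int Get_rank() const {
        int rank;
        // If 'handle' was never initialized, this fails with MPI_ERR_COMM.
        MPI_Comm_rank(handle, &rank);
        return rank;
    }
private:
    MPI_Comm handle;  // the private member left uninitialized by the bug
};

// Global object, analogous to MPI::COMM_WORLD.
CommWrapper WORLD(MPI_COMM_WORLD);

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    cout << "rank " << WORLD.Get_rank() << endl;
    MPI_Finalize();
}
----------------------------------------------------
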
> Intel released a new version of the 9.1 C++ compiler last week
> (Oct 5, 2006, build 44). This new version of the compiler now seems
> to initialize data members properly, and C++ applications (including
> the trivial "hello world" that Tobias ran into problems with) seem to
> be working fine now.
>
> So: please upgrade your version of the Intel compilers if you can.
>
>
>
> On 9/1/06 8:33 AM, "Jeff Squyres" <jsquyres_at_[hidden]> wrote:
>
>> Tobias --
>>
>> I am unfortunately unable to replicate your problem. :-(
>>
>> Can you confirm that you're getting the "right" mpi.h? That's the
>> most obvious problem that I can think of.
>>
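As a quick way to check (a sketch; the file name below is made up): every
mpi.h is required to define MPI_VERSION and MPI_SUBVERSION, so printing
them confirms which header is being compiled against, and Open MPI's
wrapper compiler can report the include flags it adds (mpic++
--showme:compile) so you can see which directory the header comes from.
----------------------------------------------------
// version_check.cpp -- hypothetical file name; quick check of which
// mpi.h the wrapper compiler picks up.
// Build:  mpic++ version_check.cpp -o version_check
// (mpic++ --showme:compile should print the -I flags the wrapper adds.)
#include "mpi.h"
#include <iostream>

using namespace std;

int main()
{
    // MPI_VERSION / MPI_SUBVERSION are standard macros defined in mpi.h;
    // an unexpected value suggests a stale or foreign header is in use.
    cout << "mpi.h reports MPI standard version "
         << MPI_VERSION << "." << MPI_SUBVERSION << endl;
    return 0;
}
----------------------------------------------------
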
>> If it seems to be right, can you compile your program with debugging
>> enabled and step through it with a debugger? A trivial program like
>> this does not need to be started via mpirun -- you should be able to
>> just launch it directly in a debugger (e.g., put a breakpoint in
>> main() and step into MPI::COMM_WORLD.Get_rank()).
>>
>> OMPI's C++ bindings are layered on top of the C bindings, so you
>> should step into an inlined C++ function that calls MPI_Comm_rank(),
>> and see if the communicator that it was invoked with is, indeed,
>> MPI_COMM_WORLD.
>>
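A small self-contained check along these lines, as a sketch (the file name
is hypothetical, and this is not from the original thread): the MPI-2 C++
bindings define a conversion from MPI::Comm to the C handle type MPI_Comm,
so the underlying handle of MPI::COMM_WORLD can be compared against
MPI_COMM_WORLD directly, without stepping through a debugger.
----------------------------------------------------
// handle_check.cpp -- hypothetical helper, not part of the original thread.
// Uses the MPI-2 C++/C interoperability conversion (MPI::Comm -> MPI_Comm)
// to test whether MPI::COMM_WORLD really wraps MPI_COMM_WORLD.
// Build:  mpic++ -g handle_check.cpp -o handle_check
#include "mpi.h"
#include <iostream>

using namespace std;

int main(int argc, char *argv[])
{
    MPI::Init(argc, argv);

    // The C++ handle classes convert implicitly to their C counterparts.
    MPI_Comm underlying = MPI::COMM_WORLD;

    if (underlying == MPI_COMM_WORLD) {
        cout << "MPI::COMM_WORLD wraps MPI_COMM_WORLD, as expected" << endl;
    } else {
        cout << "MPI::COMM_WORLD holds a bogus handle -- consistent with "
                "the mis-initialized global object described above" << endl;
    }

    MPI::Finalize();
    return 0;
}
----------------------------------------------------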
>>
>> On 8/31/06 2:26 AM, "Tobias Graf" <tgraf_at_[hidden]> wrote:
>>
>>> Dear List,
>>>
>>> I was trying to use the C++ bindings of Open MPI, but unfortunately
>>> I ran into a problem. I'm trying to use MPI::COMM_WORLD, but I
>>> always get the following error message when I try to run it
>>> (compiling works fine):
>>>
>>> *** An error occurred in MPI_Comm_rank
>>> *** on communicator MPI_COMM_WORLD
>>> *** MPI_ERR_COMM: invalid communicator
>>> *** MPI_ERRORS_ARE_FATAL (goodbye)
>>> [0,0,0]-[0,1,0] mca_oob_tcp_msg_recv: readv failed with errno=104
>>> 1 additional process aborted (not shown)
>>>
>>> The code I'm trying to use is:
>>> ----------------------------------------------------
>>> // testcpp.cpp
>>> // mpic++ testcpp.cpp -o testcpp
>>> // mpiexec -np 2 ./testcpp
>>>
>>> #include "mpi.h"
>>> #include <iostream>
>>>
>>> using namespace std;
>>>
>>> int main(int argc, char *argv[])
>>> {
>>>     int process_id;   // rank of process
>>>     int process_num;  // total number of processes
>>>
>>>     MPI::Init(argc, argv);
>>>     process_id  = MPI::COMM_WORLD.Get_rank();
>>>     process_num = MPI::COMM_WORLD.Get_size();
>>>
>>>     cout << process_id+1 << "/" << process_num << endl;
>>>     MPI::Finalize();
>>> }
>>> ----------------------------------------------------
>>>
>>> A similar program using the normal C interface (also compiled with
>>> mpic++) works fine (File: testc.cpp).
>>>
>>> For this example I'm using the Intel C/C++ V9.1 compiler on Linux
>>> (Ubuntu 5.10). I compiled Open MPI myself, so maybe something went
>>> wrong there. I have attached config.log and the output from
>>> ompi_info. If necessary, I can also provide a capture of the
>>> configuration, compilation, and installation process.
>>>
>>> Best Regards,
>>> Tobias
>>>
>>>
>>>
>>>
>>> Open MPI: 1.1.1
>>> Open MPI SVN revision: r11473
>>> Open RTE: 1.1.1
>>> Open RTE SVN revision: r11473
>>> OPAL: 1.1.1
>>> OPAL SVN revision: r11473
>>> Prefix: /opt/libs/openmpi-1.1.1_intel9.1
>>> Configured architecture: i686-pc-linux-gnu
>>> Configured by: tgraf
>>> Configured on: Thu Aug 31 14:52:07 JST 2006
>>> Configure host: tobias
>>> Built by: tgraf
>>> Built on: Thu Aug 31 15:05:52 JST 2006
>>> Built host: tobias
>>> C bindings: yes
>>> C++ bindings: yes
>>> Fortran77 bindings: yes (all)
>>> Fortran90 bindings: yes
>>> Fortran90 bindings size: small
>>> C compiler: icc
>>> C compiler absolute: /opt/intel/cc/9.1.042/bin/icc
>>> C++ compiler: icpc
>>> C++ compiler absolute: /opt/intel/cc/9.1.042/bin/icpc
>>> Fortran77 compiler: ifort
>>> Fortran77 compiler abs: /opt/intel/fc/9.1.036/bin/ifort
>>> Fortran90 compiler: ifort
>>> Fortran90 compiler abs: /opt/intel/fc/9.1.036/bin/ifort
>>> C profiling: yes
>>> C++ profiling: yes
>>> Fortran77 profiling: yes
>>> Fortran90 profiling: yes
>>> C++ exceptions: no
>>> Thread support: posix (mpi: no, progress: no)
>>> Internal debug support: no
>>> MPI parameter check: runtime
>>> Memory profiling support: no
>>> Memory debugging support: no
>>> libltdl support: yes
>>> MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
>>> MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>>> MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
>>> MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
>>> MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
>>> MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
>>> MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)
>>>
>>> // testc.cpp
>>> // mpic++ testc.cpp -o testc
>>> // mpiexec -np 2 ./testc
>>>
>>> #include "mpi.h"
>>> #include <iostream>
>>>
>>> using namespace std;
>>>
>>> int main(int argc, char *argv[])
>>> {
>>>     int process_id;   // rank of process
>>>     int process_num;  // total number of processes
>>>
>>>     MPI_Init(&argc, &argv);
>>>     MPI_Comm_size(MPI_COMM_WORLD, &process_num);
>>>     MPI_Comm_rank(MPI_COMM_WORLD, &process_id);
>>>
>>>     cout << process_id+1 << "/" << process_num << endl;
>>>     MPI_Finalize();
>>> }
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
>

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems