
Open MPI Development Mailing List Archives


Subject: [OMPI devel] VampirTrace and MPI_Init_thread()
From: Lisandro Dalcin (dalcinl_at_[hidden])
Date: 2010-08-10 20:59:01


Below is a C program that calls MPI_Init_thread():

$ cat demo/helloworld.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int provided;
  int size, rank, len;
  char name[MPI_MAX_PROCESSOR_NAME];

  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(name, &len);

  printf("Hello, World! I am process %d of %d on %s.\n", rank, size, name);

  MPI_Finalize();
  return 0;
}

Now I build like this:

$ mpicc-vt demo/helloworld.c

and then try to run it:

$ ./a.out
Hello, World! I am process 0 of 1 on trantor.
[trantor:18854] *** An error occurred in MPI_Group_free
[trantor:18854] *** on communicator MPI_COMM_WORLD
[trantor:18854] *** MPI_ERR_GROUP: invalid group
[trantor:18854] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

However, if plain MPI_Init() is used instead, the program runs successfully.

It seems the MPI_Init_thread() wrapper that should forward to
PMPI_Init_thread() is missing; see this:

$ nm a.out | grep MPI_Init
0805c4ef T MPI_Init
         U MPI_Init_thread
         U PMPI_Init

PS: Sorry if this is actually a VT bug. I'm not a VT user, I'm just
reporting this issue (related to a mpi4py bug report that arrived at
my inbox months ago).

-- 
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169