Open MPI User's Mailing List Archives

Subject: [OMPI users] An error occured in MPI_Bcast; MPI_ERR_TYPE: invalid datatype
From: Pankatz, Klaus (klaus.pankatz_at_[hidden])
Date: 2010-05-21 05:40:14


Hi folks,

Open MPI 1.4.1 seems to have another problem with my machine, or with something on it.

This little program (compiled with mpif90 and started with mpiexec -np 4 a.out) produces the output below.
Surprisingly, the same thing written in C (compiled with mpiCC) works without a problem.
Could it be interference from other MPI installations, even though I think I have deleted them all?

Note: The error also occurs with my climate model. There it is nearly the same, only with "invalid root" instead of "invalid datatype".
I compiled Open MPI not as root, but in my home directory.

Thanks for your advice,
Klaus
 
My machine:
> Open MPI version 1.4.1, compiled with Lahey Fortran 95 (lf95).
> Open MPI was built "out of the box", only switching to the Lahey compiler via setenv FC lf95.
>
> The System: Linux marvin 2.6.27.6-1 #1 SMP Sat Nov 15 20:19:04 CET 2008 x86_64 GNU/Linux
>
> Compiler: Lahey/Fujitsu Linux64 Fortran Compiler Release L8.10a
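
For reference, the build presumably looked something like the following, assuming a csh-style shell and an install under $HOME (the exact configure flags were not given):

  setenv FC lf95                      # Fortran 90/95 compiler picked up by configure
  setenv F77 lf95                     # the 1.4 series uses a separate variable for the f77 bindings
  ./configure --prefix=$HOME/openmpi
  make all install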

***************************************
Output:
[marvin:21997] *** An error occurred in MPI_Bcast
[marvin:21997] *** on communicator MPI_COMM_WORLD
[marvin:21997] *** MPI_ERR_TYPE: invalid datatype
[marvin:21997] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
Process 1 : k= 10 before
--------------------------------------------------------------------------
mpiexec has exited due to process rank 1 with PID 21997 on
node marvin exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
[marvin:21993] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[marvin:21993] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Process 3 : k= 10 before
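
As the last help message says, the aggregated errors can be shown individually by re-running with that MCA parameter set to 0, e.g.:

  mpiexec -mca orte_base_help_aggregate 0 -np 4 a.out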
************************************************************
Fortran 90 program:
  include 'mpif.h'

  integer k, rank, size, ierror, tag, p

  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
     k = 20
  else
     k = 10
  end if

  do p = 0, size, 1
     if (rank == p) then
        print *, 'Process', p, ': k=', k, 'before'
     end if
  enddo

  call MPI_Bcast(k, 1, MPI_INT, 0, MPI_COMM_WORLD)

  do p = 0, size, 1
     if (rank == p) then
        print *, 'Process', p, ': k=', k, 'after'
     end if
  enddo

  call MPI_Finalize(ierror)

  end
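
For what it's worth, the likely culprit in the Fortran version: MPI_INT is the C datatype handle, while the Fortran bindings use MPI_INTEGER, and the Fortran MPI_BCAST takes a trailing ierror argument that is missing above. Since the program has no implicit none, MPI_INT is silently treated as an implicitly typed, uninitialized integer variable, which matches the "invalid datatype" error. The corrected call would be:

  call MPI_Bcast(k, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierror)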
********************************************************
C program:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int k, id, p, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (id == 0)
        k = 20;
    else
        k = 10;

    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d before\n", id, k);
    }

    /* Note: MPI_Bcast must be placed where all processes reach it. */
    MPI_Bcast(&k, 1, MPI_INT, 0, MPI_COMM_WORLD);

    for (p = 0; p < size; p++) {
        if (id == p)
            printf("Process %d: k= %d after\n", id, k);
    }

    MPI_Finalize();
    return 0;
}
***************************************************************
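
The C version presumably works because MPI_INT is a predefined datatype constant in <mpi.h>, and a missing argument to MPI_Bcast would be a compile-time error against the C prototype; the Fortran 77-style bindings offer no such checking, which would explain why only the Fortran program fails at run time.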