
From: Benoit Semelin (benoit.semelin_at_[hidden])
Date: 2006-03-06 08:11:23


>>Second topic:
>>I am using 3 processors.
>>I am calling a series of MPI_SCATTERs, which work when I send
>>messages of 5 ko to the other processors, fail at the second scatter
>>if I send messages of ~10 ko, and fail at the first scatter for
>>bigger messages.
>>The message is:
>
>What is "ko" -- did you mean "kb"?

I meant kilobytes (not kilobits), sorry for the confusion. It comes from
"kilo-octet" in French, where "octet" = byte.

>>2 processes killed (possibly by Open MPI)
>>
>>Could this be a problem of maximum allowed message size? Or of
>>buffering space?
>
>No, Open MPI should allow scattering of arbitrarily sized messages.
>Can you verify that your arguments to MPI_SCATTER are correct, such
>as buffer length, the receive sizes on the clients, etc.?

Actually, this part of the code works fine with another MPI
implementation for much larger messages. In case it helps, here
are the relevant parts of the code.

INTEGER, PARAMETER :: nb_proc=4, master=0
INTEGER, PARAMETER :: message_size=1000
INTEGER, parameter :: part_array_size=message_size*nb_proc

TYPE :: PART
  integer :: p_type
  real(KIND=8), dimension(3) :: POS
  real(KIND=8), dimension(3) :: VEL
  real(KIND=8) :: u
  real(KIND=8) :: star_age
  real(KIND=8) :: mass
  real(KIND=8) :: frac_mass1
  real(KIND=8) :: h
  real(KIND=8) :: dens
END TYPE PART

TYPE(PART), dimension(part_array_size) :: part_array
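
(For reference, a back-of-the-envelope estimate: one PART is 1 default
integer plus 12 double-precision reals, i.e. roughly 4 + 12*8 = 100 bytes
before any compiler padding, so with nb_sent = message_size = 1000 each
process receives on the order of 100 KB.)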

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Declaration of the MPI type for PART !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

call MPI_TYPE_EXTENT(MPI_INTEGER,mpi_integer_length,mpi_err)
array_of_block_length(1:2) = (/1,12/)
array_of_types(1:2) = (/MPI_INTEGER,MPI_DOUBLE_PRECISION/)
array_of_displacement(1) = 0
array_of_displacement(2) = MPI_integer_length
call MPI_TYPE_CREATE_STRUCT(2,array_of_block_length,array_of_displacement &
                    ,array_of_types,MPI_part,mpi_err)
call MPI_TYPE_COMMIT(MPI_part,mpi_err)

call MPI_TYPE_EXTENT(MPI_PART,mpi_part_length,mpi_err)
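
For comparison, the same datatype could also be built with MPI_GET_ADDRESS
and address-kind displacements. This is only a sketch (the names "sample",
"base_addr", "disp", "blocklen" and "types" are made up for illustration,
and it assumes MPI_ADDRESS_KIND is available from the mpi module or
mpif.h):

! Sketch: same PART datatype, with the second displacement measured
! from the compiler's actual layout of PART (including any padding
! after p_type) instead of from the extent of MPI_INTEGER.
TYPE(PART) :: sample
INTEGER(KIND=MPI_ADDRESS_KIND) :: base_addr, disp(2)
INTEGER :: blocklen(2), types(2)

call MPI_GET_ADDRESS(sample%p_type, base_addr, mpi_err)
call MPI_GET_ADDRESS(sample%POS(1), disp(2), mpi_err)
disp(1) = 0
disp(2) = disp(2) - base_addr
blocklen(1:2) = (/1,12/)
types(1:2)    = (/MPI_INTEGER,MPI_DOUBLE_PRECISION/)
call MPI_TYPE_CREATE_STRUCT(2,blocklen,disp,types,MPI_part,mpi_err)
call MPI_TYPE_COMMIT(MPI_part,mpi_err)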

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! The communication call...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

< snip
 
Here, some code fills part_array with data.

snip >

call MPI_SCATTER(part_array,nb_sent,MPI_PART,MPI_IN_PLACE,nb_sent, &
                 MPI_PART,root,MPI_COMM_WORLD,mpi_err)

(I ensure nb_sent <= message_size.)
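
For completeness, a sketch of how the call pairs up across ranks
(my_rank and recv_array are illustrative names, not from my code);
MPI_IN_PLACE is only valid as the receive buffer on the root, and the
send buffer arguments are ignored on the other ranks:

if (my_rank == root) then
   ! Root keeps its own block in place inside part_array.
   call MPI_SCATTER(part_array,nb_sent,MPI_PART,MPI_IN_PLACE,nb_sent, &
                    MPI_PART,root,MPI_COMM_WORLD,mpi_err)
else
   ! Non-root ranks need a real receive buffer of >= nb_sent elements.
   call MPI_SCATTER(part_array,nb_sent,MPI_PART,recv_array,nb_sent, &
                    MPI_PART,root,MPI_COMM_WORLD,mpi_err)
end if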

>Are any core files generated? Do you know which processes die?

Yes, it generates one core file in this case (message_size=1000), and
with 4 processes, 3 die:
"3 processes killed (possibly by Open MPI)"