
Subject: Re: [OMPI users] users Digest, Vol 2574, Issue 1
From: Andrea Negri (negri.andre_at_[hidden])
Date: 2013-05-14 12:11:01


I'm not an MPI expert, but I strongly encourage you to use

use mpi
implicit none

This can save a LOT of time when debugging.
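
For example, here is a minimal sketch of the program quoted below, rewritten
with "use mpi" and "implicit none" (this is my own rewrite, not your exact
code; I dropped the unused ista/iend variables):

program reduce_example
   use mpi             ! MPI Fortran 90 module, instead of include 'mpif.h'
   implicit none       ! every variable must now be declared explicitly

   integer, parameter :: nmax = 12
   integer :: n(nmax)
   integer :: ierr, isize, irank, i, isum, itmp

   call mpi_init(ierr)
   call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
   call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)

   ! fill the array and compute the local sum
   isum = 0
   do i = 1, nmax
      n(i) = i
      isum = isum + n(i)
   end do

   ! combine the per-rank sums onto rank 0
   call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM, 0, &
                   MPI_COMM_WORLD, ierr)

   if (irank == 0) then
      isum = itmp
      write (*, *) isum
   end if

   call mpi_finalize(ierr)
end program reduce_example

With "implicit none", any undeclared or misspelled variable becomes a
compile-time error, and the mpi module lets the compiler check the argument
lists of many MPI calls. Compile with the Open MPI wrapper compiler (mpif90)
and run with mpirun as before.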

On 14 May 2013 18:00, <users-request_at_[hidden]> wrote:
>
> Today's Topics:
>
> 1. MPI_SUM is not defined on the MPI_INTEGER datatype (Hayato KUNIIE)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 15 May 2013 00:39:06 +0900
> From: Hayato KUNIIE <kuni255_at_[hidden]>
> Subject: [OMPI users] MPI_SUM is not defined on the MPI_INTEGER datatype
> To: users_at_[hidden]
> Message-ID: <51925A9A.50401_at_[hidden]>
> Content-Type: text/plain; charset=ISO-2022-JP
>
> Hello, I'm kuni255.
>
> I built a Beowulf-type PC cluster (CentOS release 6.4) and I am studying
> MPI (Open MPI ver. 1.6.4). I tried the following sample, which uses
> MPI_REDUCE.
>
> Then an error occurred.
>
> The cluster consists of one head node and two slave nodes. The home
> directory on the head node is shared via NFS, and Open MPI is installed
> on each node.
>
> When I run this program on only the head node, it runs correctly and
> outputs the result. But when I run it on only a slave node, the same
> error occurs.
>
> Please tell me a good idea : )
>
> Error message
> [bwslv01:30793] *** An error occurred in MPI_Reduce: the reduction
> operation MPI_SUM is not defined on the MPI_INTEGER datatype
> [bwslv01:30793] *** on communicator MPI_COMM_WORLD
> [bwslv01:30793] *** MPI_ERR_OP: invalid reduce operation
> [bwslv01:30793] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 1 with PID 30793 on
> node bwslv01 exiting improperly. There are two reasons this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [bwhead.clnet:02147] 1 more process has sent help message
> help-mpi-errors.txt / mpi_errors_are_fatal
> [bwhead.clnet:02147] Set MCA parameter "orte_base_help_aggregate" to 0
> to see all help / error messages
>
>
>
>
> Fortran 90 source code:
>
>       include 'mpif.h'
>       parameter(nmax=12)
>       integer n(nmax)
>
>       call mpi_init(ierr)
>       call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
>       call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
>       ista=irank*(nmax/isize) + 1
>       iend=ista+(nmax/isize-1)
>       isum=0
>       do i=1,nmax
>         n(i) = i
>         isum = isum + n(i)
>       end do
>       call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM,
>      &                0, MPI_COMM_WORLD, ierr)
>
>       if (irank == 0) then
>         isum=itmp
>         WRITE(*,*) isum
>       endif
>       call mpi_finalize(ierr)
>       end
>
>
> ------------------------------
>
> End of users Digest, Vol 2574, Issue 1
> **************************************