
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Why? MPI_Scatter problem
From: Gus Correa (gus_at_[hidden])
Date: 2010-12-13 12:01:49

Kechagias Apostolos wrote:
> I have the code that is in the attachment.
> Can anybody explain how to use the scatter function?
> It seems that the way I'm using it doesn't do the job.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
        int rank, size, N, i, N1, start, end;
        float W, pi=0, sum=0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
                printf("You must have 2 or more ranks to complete this action\n");
                MPI_Finalize();
                return 1;
        }
        if (argc < 2) {
                printf("Not enough arguments given\n");
                MPI_Finalize();
                return 1;
        }

        N = atoi(argv[1]);   /* number of intervals, from the command line */
        W = 1.0 / N;         /* width of each interval (midpoint rule) */

        int n[N], data[N];

        N1 = N/size;
        //printf("N1:%d W:%f\n", N1, W);

        if (rank == 0) { for (i = 0; i < N; i++) n[i] = i; }
        MPI_Scatter(n, N1, MPI_INT, data, N1, MPI_INT, 0, MPI_COMM_WORLD);

        pi = 0;
        start = rank*N1;
        end = (rank+1)*N1;

        for (i = data[start]; i < data[end]; i++) pi += 4*W/(1+(i+0.5)*(i+0.5)*W*W);
        //printf("rank:%d tmppi:%f\n", rank, pi);
        printf("data[start]:%d data[end]:%d ", data[start], data[end]);
        printf("rankN1:%d rank+1N1:%d\n", start, end);

        MPI_Reduce(&pi, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("Pi is:%f size:%d\n", sum, size);

        MPI_Finalize();
        return 0;
}

Hi Kechagias

If you use MPI_Scatter, each receive buffer starts receiving
at offset zero (i.e. at data[0]), not at data[start].
Also, your receive buffers need only have size N1, not N.
The MPI_Scatter call itself is right;
it is the subsequent code that needs to change.
The loop should run over the local indices 0 to N1-1,
i.e. over data[0] through data[N1-1].
(However, be careful with edge cases where the number
of processes doesn't divide N evenly.)

Alternatively you could use MPI_Alltoallw to scatter the way
your code suggests you want to, but that would be overkill.