Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Why? MPI_Scatter problem
From: Kechagias Apostolos (pasxal.antix_at_[hidden])
Date: 2010-12-13 13:00:19


Sure, it helps. I had no idea about this source.
I hope it is up to date.

2010/12/13 Gus Correa <gus_at_[hidden]>

> Hi Kechagias
>
> The figures in Chapter 4 of
> "MPI: The Complete Reference, Vol 1, 2nd Ed.",
> by Snir et al. are good reminders.
>
> Here are a few:
> http://www.dartmouth.edu/~rc/classes/intro_mpi/mpi_comm_modes2.html#top
>
> I hope this helps,
> Gus Correa
>
> Kechagias Apostolos wrote:
>
>> I thought that every process would receive the data as-is.
>> Thanks, that solved my problem.
>>
>> 2010/12/13 Gus Correa <gus_at_[hidden] <mailto:
>> gus_at_[hidden]>>
>>
>>
>> Kechagias Apostolos wrote:
>>
>> I have the code that is in the attachment.
>> Can anybody explain how to use scatter function?
>> It seems that the way I'm using it doesn't do the job.
>>
>>
>>
>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <string.h>
>> #include <mpi.h>
>>
>> int main(int argc, char *argv[])
>> {
>>     int error_code, err, rank, size, N, i, N1, start, end;
>>     float W, pi=0, sum=0;
>>
>>     MPI_Init(&argc, &argv);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>>
>>     N = atoi(argv[1]);
>>
>>     int n[N], data[N];
>>
>>     N1 = N/size;
>>     W = 1.0/N;
>>     //printf("N1:%d W:%f\n",N1,W);
>>
>>     if (size < 2) {
>>         printf("You must have 2 or more ranks to complete this action\n");
>>         MPI_Abort(MPI_COMM_WORLD, err);
>>     }
>>     if (argc < 2) {
>>         printf("Not enough arguments given\n");
>>         MPI_Abort(MPI_COMM_WORLD, err);
>>     }
>>
>>     if (rank == 0) { for (i = 0; i < N; i++) n[i] = i; }
>>     MPI_Scatter(n, N1, MPI_INT, data, N1, MPI_INT, 0, MPI_COMM_WORLD);
>>
>>     pi = 0;
>>     start = rank*N1;
>>     end = (rank+1)*N1;
>>
>>     for (i = data[start]; i < data[end]; i++)
>>         pi += 4*W/(1+(i+0.5)*(i+0.5)*W*W);
>>     // printf("rank:%d tmppi:%f\n",rank,pi);
>>     printf("data[start]:%d data[end]:%d ", data[start], data[end]);
>>     printf("rankN1:%d rank+1N1:%d\n", start, end);
>>
>>     MPI_Reduce(&pi, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
>>
>>     if (rank == 0) printf("Pi is:%f size:%d\n", sum, size);
>>     MPI_Finalize();
>> }
>>
>>
>> #########
>> Hi Kechagias
>>
>> If you use MPI_Scatter, the receive buffers start receiving
>> at the zero offset (i.e. at data[0]), not at data[start].
>> Also, your receive buffer only needs to be of size N1, not N.
>> I guess the MPI_Scatter call is right.
>> The subsequent code needs to change.
>> The loop should go from data[0] to data[N1-1].
>> (However, be careful with edge cases where the number
>> of processes doesn't divide N evenly.)
>>
>> Alternatively, you could use MPI_Alltoallw to scatter the way
>> your code suggests you want to, but that would be overkill.
>>
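>> A minimal, untested sketch of what that change looks like, using the
>> same variable names as the attached code and assuming size divides N
>> evenly (otherwise MPI_Scatterv is the tool to reach for):
>>
>>     /* each rank receives its N1 values into data[0..N1-1] */
>>     MPI_Scatter(n, N1, MPI_INT, data, N1, MPI_INT, 0, MPI_COMM_WORLD);
>>
>>     pi = 0;
>>     for (i = 0; i < N1; i++) {
>>         int k = data[i];   /* global index held by this rank */
>>         pi += 4*W/(1+(k+0.5)*(k+0.5)*W*W);
>>     }
>>
>>     MPI_Reduce(&pi, &sum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);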