Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Problems Broadcasting/Scattering Data
From: Dino Rossegger (dino.rossegger_at_[hidden])
Date: 2008-01-08 12:05:04


George Bosilca wrote:
>
> On Jan 8, 2008, at 11:14 AM, Dino Rossegger wrote:
>
>>> If so, then the problem is that Scatter actually gets an array of
>>> pointers and sends those pointers, interpreting them as doubles.
>>> You either have to use several Scatter calls or "fold" your
>>> 2D array into a one-dimensional array.
>> So neither MPI_Broadcast nor Scatter can handle two-dimensional arrays?
>> But even if that is the case, is it normal that there are only 0s in the
>> array? To me that sounds more as if the data isn't transmitted at all,
>> not that it isn't split correctly. But I'll try the folding; maybe
>> that will help.
>
> The array that gets scattered is not initialized, so it is normal that
> everyone gets a lot of zeros ... Moreover, the only operation you do on
> the data (multiplication) will only generate zeros out of zeros. Try
> setting some meaningful data in the stat array before the MPI_Scatter
> operation.
>
> george.

In fact it is initialized; as I stated in my first mail, I only left out
the code that initializes it, since it reads the data from a file and
that part works (I have tested it). A sketch of the "folding" Jody
suggested is included below for reference.
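
The sketch below shows the "folding" idea only in outline: keep the 2D data
in one contiguous 1D buffer so that MPI_Scatter sends plain doubles rather
than row pointers. The sizes and the placeholder values are assumptions for
illustration (they are not taken from my actual program), and the buffer is
filled before the Scatter, as George suggests:

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int ROWS = 150;               // assumed divisible by nprocs
    const int COLS = 2;
    const int ROWS_PER_RANK = ROWS / nprocs;

    // One contiguous buffer instead of an array of row pointers;
    // element (r, c) lives at flat[r * COLS + c].
    double* flat = new double[ROWS * COLS];
    if (rank == 0) {
        for (int r = 0; r < ROWS; ++r)
            for (int c = 0; c < COLS; ++c)
                flat[r * COLS + c] = r + 0.1 * c;   // placeholder data
    }

    // Every rank receives its block of rows as a flat run of doubles.
    double* myrows = new double[ROWS_PER_RANK * COLS];
    MPI_Scatter(flat, ROWS_PER_RANK * COLS, MPI_DOUBLE,
                myrows, ROWS_PER_RANK * COLS, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    std::cout << "rank " << rank << " first row: "
              << myrows[0] << " " << myrows[1] << std::endl;

    delete[] flat;
    delete[] myrows;
    MPI_Finalize();
    return 0;
}

Note that a statically declared array such as double stat[150][2] is already
contiguous in memory, so MPI_Scatter can also be called on it directly with a
count given in doubles; the folding only matters when the rows are allocated
separately, i.e. when the 2D array really is an array of pointers.
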
>> Thanks
>>
>>> Hope this helps
>>> Jody
>>>
>>> On Jan 8, 2008 3:54 PM, Dino Rossegger <dino.rossegger_at_[hidden]> wrote:
>>>> Hi,
>>>> I have a problem distributing a two-dimensional array across 3 processes.
>>>>
>>>> I tried different methods of distributing the data (Broadcast,
>>>> Send/Recv, Scatter), but none of them worked for me. The output of
>>>> the root process (0 in my case) is always okay; the output of the
>>>> others is simply 0.
>>>>
>>>> The array stat is filled with entries from a file (I left out the
>>>> generation of the array data since it is a lot of code and it works;
>>>> I tested the whole thing in "single" mode).
>>>>
>>>> Here are the important parts of the source code:
>>>>
>>>> const int ARRAYSIZE = 150;
>>>> int main(int argc, char* argv[])
>>>> {
>>>>     MPI_Init(&argc,&argv);
>>>>     int rank, anzprocs,recvcount,sendcnt;
>>>>     MPI_Comm_size(MPI_COMM_WORLD,&anzprocs);
>>>>     MPI_Comm_rank(MPI_COMM_WORLD,&rank);
>>>>
>>>>     const int WORKING = ARRAYSIZE/anzprocs;
>>>>
>>>>     double stat[ARRAYSIZE][2];
>>>>     double stathlp[WORKING][2];
>>>>     double stat2[WORKING][5];
>>>>     double stat3[anzprocs][ARRAYSIZE][5];
>>>>     if(rank==0)sendcnt=WORKING*2;
>>>>
>>>>     MPI::COMM_WORLD.Scatter(stat,sendcnt,MPI::DOUBLE,stathlp,WORKING*2,MPI::DOUBLE,0);
>>>>
>>>>     for(int i=0;i<WORKING;i++){
>>>>         stat2[i][0]=stathlp[i][0];
>>>>         stat2[i][1]=stathlp[i][1];
>>>>         stat2[i][2]=(stat2[i][0]*stat2[i][1]);
>>>>         stat2[i][3]=(stat2[i][0]*stat2[i][0]);
>>>>         stat2[i][4]=(stat2[i][1]*stat2[i][1]);
>>>>     }
>>>>     if (rank==0) recvcount=WORKING*5;
>>>>     MPI_Gather(&stat2, WORKING*5, MPI_DOUBLE, &stat3, recvcount,
>>>>                MPI_DOUBLE, 0, MPI_COMM_WORLD);
>>>>     if (rank==0){
>>>>         cout << stat3[0][0][0] << endl;
>>>>         cout << stat3[1][0][0] << endl;
>>>>         cout << stat3[2][0][0] << endl;
>>>>     }
>>>> }
>>>>
>>>> I don't know how to proceed, since my experience with Open MPI is not
>>>> very extensive either. Is there anything specific I have to know about
>>>> distributing two-dimensional arrays? I don't think the error is in the
>>>> MPI_Gather, since I did a cout of the data on all nodes and the output
>>>> was the same.
>>>>
>>>> Thanks in advance, and sorry for my bad English.
>>>> Dino
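
For comparison, here is a minimal sketch of the same scatter/compute/gather
pattern, using only the C API and sizing the gather buffer so that each
rank's block is WORKING rows rather than ARRAYSIZE rows. It is only an
illustration under those assumptions, not a confirmed fix for the program
above; the variable names mirror the original, the input values are
placeholders, and ARRAYSIZE is assumed to be divisible by the number of
processes:

#include <mpi.h>
#include <iostream>
#include <vector>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank, anzprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &anzprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int ARRAYSIZE = 150;                  // assumed divisible by anzprocs
    const int WORKING   = ARRAYSIZE / anzprocs;

    // Flat, contiguous buffers; element (i, j) of an N-column table is at [i*N + j].
    std::vector<double> stat(ARRAYSIZE * 2);            // full input (used on root)
    std::vector<double> stathlp(WORKING * 2);           // this rank's rows
    std::vector<double> stat2(WORKING * 5);             // this rank's results
    std::vector<double> stat3(anzprocs * WORKING * 5);  // gathered results (used on root)

    if (rank == 0) {
        for (int i = 0; i < ARRAYSIZE; ++i) {   // placeholder input data
            stat[i * 2 + 0] = i;
            stat[i * 2 + 1] = 2.0 * i;
        }
    }

    // Every rank passes the same counts; the send buffer is only read on the root.
    MPI_Scatter(&stat[0], WORKING * 2, MPI_DOUBLE,
                &stathlp[0], WORKING * 2, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < WORKING; ++i) {
        stat2[i * 5 + 0] = stathlp[i * 2 + 0];
        stat2[i * 5 + 1] = stathlp[i * 2 + 1];
        stat2[i * 5 + 2] = stat2[i * 5 + 0] * stat2[i * 5 + 1];
        stat2[i * 5 + 3] = stat2[i * 5 + 0] * stat2[i * 5 + 0];
        stat2[i * 5 + 4] = stat2[i * 5 + 1] * stat2[i * 5 + 1];
    }

    // Rank r's WORKING*5 doubles land at offset r*WORKING*5 in stat3 on the root.
    MPI_Gather(&stat2[0], WORKING * 5, MPI_DOUBLE,
               &stat3[0], WORKING * 5, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int r = 0; r < anzprocs; ++r)
            std::cout << "first value from rank " << r << ": "
                      << stat3[r * WORKING * 5] << std::endl;

    MPI_Finalize();
    return 0;
}

The point of the sketch is simply that the receive side of the Gather uses
the same per-rank block size (WORKING*5) that each rank sends, so the
indices printed on the root line up with where the gathered data actually
lands.
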