Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] issue with column type in language C
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-08-20 13:45:43


It looks like you also posted this to Stack Overflow:

http://stackoverflow.com/questions/12031330/mpi-issue-with-column-type-in-language-c

It looks like it was answered, too. :-)
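
For the archives, the gist of the answer: MPI_Type_vector(ycell, 1,
size_tot_x, MPI_DOUBLE, ...) describes doubles that sit size_tot_x elements
apart in ONE contiguous buffer.  The code below allocates x0 row by row with
separate malloc() calls, so consecutive rows are not adjacent in memory; the
strided type then reads and writes past the end of a single row's allocation
(undefined behavior), which is why only the first element of the column
arrives intact.  Below is a minimal sketch of the usual fix (the helper names
are mine, just for illustration): keep the row pointers for x0[i][j]
indexing, but back them with one contiguous block.

#include <stdlib.h>

/* Allocate an ny-by-nx matrix as one contiguous block of doubles, plus an
   array of row pointers so x[i][j] indexing still works.  With this layout,
   a column type built with stride nx is valid across rows. */
double **alloc_contiguous_2d(int ny, int nx)
{
    double  *data = malloc((size_t)ny * nx * sizeof(double));
    double **rows = malloc((size_t)ny * sizeof(double *));
    if (data == NULL || rows == NULL) {
        free(data);
        free(rows);
        return NULL;
    }
    for (int i = 0; i < ny; i++)
        rows[i] = data + (size_t)i * nx;  /* row i starts nx doubles after row i-1 */
    return rows;
}

void free_contiguous_2d(double **x)
{
    free(x[0]);  /* the single data block */
    free(x);     /* the row pointers */
}

With x0 = alloc_contiguous_2d(size_tot_y, size_tot_x), the existing
MPI_Type_vector(ycell, 1, size_tot_x, MPI_DOUBLE, &column_type) and the
&(x0[ys[me]][xs[me]]) send/receive buffers behave as intended, because
x0[i] + size_tot_x really is x0[i+1].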

On Aug 19, 2012, at 8:38 PM, Christian Perrier wrote:

> Hi,
>
> Indeed, I am trying to write the equivalent of this Fortran program in C. The Fortran version works fine, but I am having problems in C.
>
> I can't manage to exchange a single column between two processes.
>
> Could you please try to compile and run the following test code, which simply sends a column from rank 2 to be received by rank 0 (you need to run it with nproc=4):
>
> --------------------------------------------------------------------------------------------------
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <math.h>
> #include "mpi.h"
>
> int main(int argc, char *argv[])
> {
> /* size of the discretization */
>
> double** x;
> double** x0;
> int bonk1, bonk2;
> int i,j,k,l;
> int nproc;
> int ndims;
> int S=0, E=1, N=2, W=3;
> int NeighBor[4];
> int xcell, ycell, size_tot_x, size_tot_y;
> int *xs,*ys,*xe,*ye;
> int size_x = 4;
> int size_y = 4;
> int me;
> int x_domains=2;
> int y_domains=2;
> int flag = 1;
> MPI_Comm comm, comm2d;
> int dims[2];
> int periods[2];
> int reorganisation = 0;
> int row;
> MPI_Datatype column_type;
> MPI_Status status;
>
>
> size_tot_x=size_x+2*x_domains+2;
> size_tot_y=size_y+2*y_domains+2;
>
> xcell=(size_x/x_domains);
> ycell=(size_y/y_domains);
>
> MPI_Init(&argc, &argv);
> comm = MPI_COMM_WORLD;
> MPI_Comm_size(comm,&nproc);
> MPI_Comm_rank(comm,&me);
>
> x = malloc(size_tot_y*sizeof(double*));
> x0 = malloc(size_tot_y*sizeof(double*));
>
>
> for(j=0;j<=size_tot_y-1;j++) {
> x[j] = malloc(size_tot_x*sizeof(double));
> x0[j] = malloc(size_tot_x*sizeof(double));
> }
>
> xs = malloc(nproc*sizeof(int));
> xe = malloc(nproc*sizeof(int));
> ys = malloc(nproc*sizeof(int));
> ye = malloc(nproc*sizeof(int));
>
> /* Create 2D cartesian grid */
> periods[0] = 0;
> periods[1] = 0;
>
> ndims = 2;
> dims[0]=x_domains;
> dims[1]=y_domains;
>
> MPI_Cart_create(comm, ndims, dims, periods, reorganisation, &comm2d);
>
> /* Identify neighbors */
> NeighBor[0] = MPI_PROC_NULL;
> NeighBor[1] = MPI_PROC_NULL;
> NeighBor[2] = MPI_PROC_NULL;
> NeighBor[3] = MPI_PROC_NULL;
>
> /* Left/West and right/East neighbors */
> MPI_Cart_shift(comm2d,0,1,&NeighBor[W],&NeighBor[E]);
> /* Bottom/South and upper/North neighbors */
> MPI_Cart_shift(comm2d,1,1,&NeighBor[S],&NeighBor[N]);
>
> /* coordinates of the current cell for rank me */
>
> xcell=(size_x/x_domains);
> ycell=(size_y/y_domains);
>
> ys[me]=(y_domains-me%(y_domains)-1)*(ycell+2)+2;
> ye[me]=ys[me]+ycell-1;
>
> for(i=0;i<=y_domains-1;i++)
> {xs[i]=2;}
>
> for(i=0;i<=y_domains-1;i++)
> {xe[i]=xs[i]+xcell-1;}
>
> for(i=1;i<=(x_domains-1);i++)
> { for(j=0;j<=(y_domains-1);j++)
> {
> xs[i*y_domains+j]=xs[(i-1)*y_domains+j]+xcell+2;
> xe[i*y_domains+j]=xs[i*y_domains+j]+xcell-1;
> }
> }
>
> for(i=0;i<=size_tot_y-1;i++)
> { for(j=0;j<=size_tot_x-1;j++)
> { x0[i][j]= i+j;
> }
> }
>
> /* Create column datatype: ycell doubles, one per row, stride size_tot_x */
>
>
>
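> /* NB: the stride of size_tot_x assumes all rows of x0 lie in one contiguous block */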
> MPI_Type_vector( ycell, 1, size_tot_x, MPI_DOUBLE, &column_type);
> MPI_Type_commit(&column_type);
>
> if(me==2) {
> printf("Before Send - Process 2 subarray\n");
> for(i=ys[me]-1;i<=ye[me]+1;i++)
> { for(j=xs[me]-1;j<=xe[me]+1;j++)
> { printf("%f ",x0[i][j]);
> }
> printf("\n");
> }
> printf("\n");
>
>
>
> MPI_Send(&(x0[ys[2]][xs[2]]), 1, column_type, 0, flag, comm2d );
> }
>
> if(me==0) {
>
> MPI_Recv(&(x0[ys[0]][xe[0]]), 1, column_type, 2, flag, comm2d , &status);
> printf("After Receive - Process 0 subarray\n");
> for(i=ys[me]-1;i<=ye[me]+1;i++)
> { for(j=xs[me]-1;j<=xe[me]+1;j++)
> { printf("%f ",x0[i][j]);
> }
> printf("\n");
> }
> printf("\n");
>
> MPI_Get_count(&status,column_type,&bonk1);
> MPI_Get_elements(&status,MPI_DOUBLE,&bonk2);
> printf("got %d elements of type column_type\n",bonk1);
> printf("which contained %d elements of type MPI_DOUBLE\n",bonk2);
> printf("\n");
>
> }
>
> for(i=0;i<=size_tot_y-1;i++)
> {
> free(x[i]);
> free(x0[i]);
> }
>
> free(x);
> free(x0);
>
> free(xs);
> free(xe);
> free(ys);
> free(ye);
>
> MPI_Finalize();
>
> return 0;
> }
>
> --------------------------------------------------------------------------------------------------
>
> xs[me] and xe[me] correspond respectively to x_start[me] and x_end[me] of rank = me. This is the same for ys[me] and ye[me].
>
> As I said in the previous post, only the first value of the column is
> received by the process of rank 0. Here's the output of this program:
>
> Before Send - Process 2 subarray
> 10.000000 11.000000 12.000000 13.000000
> 11.000000 12.000000 13.000000 14.000000
> 12.000000 13.000000 14.000000 15.000000
> 13.000000 14.000000 15.000000 16.000000
>
> After Receive - Process 0 subarray
> 6.000000 7.000000 8.000000 9.000000
> 7.000000 8.000000 12.000000 10.000000
> 8.000000 9.000000 10.000000 11.000000
> 9.000000 10.000000 11.000000 12.000000
>
> got 1 elements of type column_type
> which contained 2 elements of type MPI_DOUBLE
>
> ------------------------------------------------------------------------
>
> I get "12.00000" for the first element but for the second element, I have "10.00000" instead of "13.00000".
>
> Any help would be really appreciated.
>
>
> On Sun, Aug 19, 2012 at 6:25 PM, Rayson Ho <raysonlogin_at_[hidden]> wrote:
> Hi Christian,
>
> The code you posted is very similar to another school assignment sent
> to this list 2 years ago:
>
> http://www.open-mpi.org/community/lists/users/2010/10/14619.php
>
> At that time, the code was written in Fortran, and now it is written
> in C - however, the variable names, logic, etc. are quite similar! :-D
>
> Remember, debugging and bug fixing are part of the (home)work!
>
> Rayson
>
> ==================================================
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
>
>
> On Sun, Aug 19, 2012 at 12:59 AM, Christian Perrier
> <christian01.perrier_at_[hidden]> wrote:
> > Hi,
> >
> > I have a problem with an MPI_Sendrecv communication in which I exchange
> > edge columns between processes. For debugging, I show below a basic example
> > where I initialize a 10x10 matrix (the "x0" array) with size_x = size_y = 4
> > and x_domains = y_domains = 2. For the test, I simply initialize the 2D
> > array values with x0[i][j] = i+j. Then, in "updateBound.c", I use
> > MPI_Sendrecv for the North-South and East-West exchanges.
> >
> > Here's the main program "example.c":
> >
> > -------------------------------------------------------------------------------------------
> >
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <math.h>
> > #include "mpi.h"
> >
> > int main(int argc, char *argv[])
> > {
> > /* size of the discretization */
> >
> > double** x;
> > double** x0;
> >
> > int i,j,k,l;
> > int nproc;
> > int ndims;
> > int S=0, E=1, N=2, W=3;
> > int NeighBor[4];
> > int xcell, ycell, size_tot_x, size_tot_y;
> > int *xs,*ys,*xe,*ye;
> > int size_x = 4;
> > int size_y = 4;
> > int me;
> > int x_domains=2;
> > int y_domains=2;
> >
> > MPI_Comm comm, comm2d;
> > int dims[2];
> > int periods[2];
> > int reorganisation = 0;
> > int row;
> > MPI_Datatype column_type;
> >
> >
> >
> > size_tot_x=size_x+2*x_domains+2;
> > size_tot_y=size_y+2*y_domains+2;
> >
> > xcell=(size_x/x_domains);
> > ycell=(size_y/y_domains);
> >
> > MPI_Init(&argc, &argv);
> > comm = MPI_COMM_WORLD;
> > MPI_Comm_size(comm,&nproc);
> > MPI_Comm_rank(comm,&me);
> >
> > x = malloc(size_tot_y*sizeof(double*));
> > x0 = malloc(size_tot_y*sizeof(double*));
> >
> >
> > for(j=0;j<=size_tot_y-1;j++) {
> > x[j] = malloc(size_tot_x*sizeof(double));
> > x0[j] = malloc(size_tot_x*sizeof(double));
> > }
> >
> > xs = malloc(nproc*sizeof(int));
> > xe = malloc(nproc*sizeof(int));
> > ys = malloc(nproc*sizeof(int));
> > ye = malloc(nproc*sizeof(int));
> >
> > /* Create 2D cartesian grid */
> > periods[0] = 0;
> > periods[1] = 0;
> >
> > ndims = 2;
> > dims[0]=x_domains;
> > dims[1]=y_domains;
> >
> > MPI_Cart_create(comm, ndims, dims, periods, reorganisation, &comm2d);
> >
> > /* Identify neighbors */
> >
> > NeighBor[0] = MPI_PROC_NULL;
> > NeighBor[1] = MPI_PROC_NULL;
> > NeighBor[2] = MPI_PROC_NULL;
> > NeighBor[3] = MPI_PROC_NULL;
> >
> > /* Left/West and right/East neighbors */
> >
> > MPI_Cart_shift(comm2d,0,1,&NeighBor[W],&NeighBor[E]);
> >
> > /* Bottom/South and upper/North neighbors */
> >
> > MPI_Cart_shift(comm2d,1,1,&NeighBor[S],&NeighBor[N]);
> >
> > /* coordinates of the current cell for rank me */
> >
> > xcell=(size_x/x_domains);
> > ycell=(size_y/y_domains);
> >
> > ys[me]=(y_domains-me%(y_domains)-1)*(ycell+2)+2;
> > ye[me]=ys[me]+ycell-1;
> >
> > for(i=0;i<=y_domains-1;i++)
> > {xs[i]=2;}
> >
> > for(i=0;i<=y_domains-1;i++)
> > {xe[i]=xs[i]+xcell-1;}
> >
> > for(i=1;i<=(x_domains-1);i++)
> > { for(j=0;j<=(y_domains-1);j++)
> > {
> > xs[i*y_domains+j]=xs[(i-1)*y_domains+j]+xcell+2;
> > xe[i*y_domains+j]=xs[i*y_domains+j]+xcell-1;
> > }
> > }
> >
> > for(i=0;i<=size_tot_y-1;i++)
> > { for(j=0;j<=size_tot_x-1;j++)
> > { x0[i][j]= i+j;
> > // printf("%f\n",x0[i][j]);
> > }
> > }
> >
> > /* Create column datatype to exchange columns with the East/West neighbors */
> >
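> > /* NB: the stride of size_tot_x assumes all rows of x0 lie in one contiguous block */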
> > MPI_Type_vector( ycell, 1, size_tot_x, MPI_DOUBLE, &column_type);
> > MPI_Type_commit(&column_type);
> >
> > updateBound(x0, NeighBor, comm2d, column_type, me, xs, ys, xe, ye,
> > xcell);
> >
> >
> > for(i=0;i<=size_tot_y-1;i++)
> > {
> > free(x[i]);
> > free(x0[i]);
> > }
> >
> > free(x);
> > free(x0);
> >
> > free(xs);
> > free(xe);
> > free(ys);
> > free(ye);
> >
> > MPI_Finalize();
> >
> > return 0;
> > }
> > -------------------------------------------------------------------------------------------
> >
> > and the second file, "updateBound.c", which exchanges the rows and columns:
> >
> >
> > -------------------------------------------------------------------------------------------
> >
> >
> > #include "mpi.h"
> > #include <stdio.h>
> >
> > /*******************************************************************/
> > /* Update the boundaries of the subdomain owned by rank me */
> > /*******************************************************************/
> >
> > void updateBound(double** x,int NeighBor[], MPI_Comm comm2d, MPI_Datatype
> > column_type , int me, int* xs, int* ys, int* xe, int* ye, int xcell)
> > {
> >
> > int S=0, E=1, N=2, W=3;
> > int flag;
> > MPI_Status status;
> >
> > int i,j;
> >
> > if(me==0) {printf("verif_update_before\n");
> > for(i=ys[me]-1;i<=ye[me]+1;i++)
> > { for(j=xs[me]-1;j<=xe[me]+1;j++)
> > { printf("%f ",x[i][j]);
> > }
> > printf("\n");
> > }
> > printf("\n");
> > }
> >
> > /********* North/South communication **********************************/
> > flag = 1;
> > /*Send my boundary to North and receive from South*/
> > MPI_Sendrecv(&x[ys[me]][xs[me]], xcell, MPI_DOUBLE, NeighBor[N], flag,
> > &x[ye[me]+1][xs[me]], xcell, MPI_DOUBLE, NeighBor[S], flag, comm2d,
> > &status);
> >
> > /*Send my boundary to South and receive from North*/
> > MPI_Sendrecv(&x[ye[me]][xs[me]], xcell, MPI_DOUBLE, NeighBor[S], flag,
> > &x[ys[me]-1][xs[me]], xcell, MPI_DOUBLE, NeighBor[N], flag, comm2d,
> > &status);
> >
> > /********* East/West communication ***********************************/
> > flag = 2;
> > /*Send my boundary to East and receive from West*/
> > MPI_Sendrecv(&x[ys[me]][xe[me]], 1, column_type, NeighBor[E], flag,
> > &x[ys[me]][xs[me]-1], 1, column_type, NeighBor[W], flag, comm2d, &status);
> >
> > /*Send my boundary to West and receive from East*/
> > MPI_Sendrecv(&x[ys[me]][xs[me]], 1, column_type, NeighBor[W], flag,
> > &x[ys[me]][xe[me]+1], 1, column_type, NeighBor[E], flag, comm2d, &status);
> >
> > if(me==0) {printf("verif_update_after\n");
> > for(i=ys[me]-1;i<=ye[me]+1;i++)
> > { for(j=xs[me]-1;j<=xe[me]+1;j++)
> > { printf("%f ",x[i][j]);
> > }
> > printf("\n");
> > }
> > printf("\n");
> > }
> > }
> >
> > ------------------------------------------------------------------------------
> >
> > Running with nproc=4, I print the values of the subarray of rank 0 (at the
> > bottom left of the grid) before and after the bounds update:
> >
> > verif_update_before
> > 6.000000 7.000000 8.000000 9.000000
> > 7.000000 8.000000 9.000000 10.000000
> > 8.000000 9.000000 10.000000 11.000000
> > 9.000000 10.000000 11.000000 12.000000
> >
> > verif_update_after
> > 6.000000 5.000000 6.000000 9.000000
> > 7.000000 8.000000 9.000000 12.000000
> > 8.000000 9.000000 10.000000 11.000000
> > 9.000000 10.000000 11.000000 12.000000
> >
> > As you can see, after the update I don't have the correct value (the
> > highlighted 11.0) at the second element of the column coming from the East:
> > I expected 13.0 instead of 11.0.
> >
> > So there seems to be a problem with the column datatype, which only sends
> > the first element of the column.
> >
> > In "example.c", I define the column as following :
> >
> > MPI_Type_vector( ycell, 1, size_tot_x, MPI_DOUBLE, &column_type);
> > MPI_Type_commit(&column_type);
> >
> > However, it looks OK to me, and the computation of the begin and end
> > coordinates as a function of the rank "me" also seems correct.
> >
> > Note that there is no problem with the exchange of rows between North and
> > South; the problem is only with the columns.
> >
> > I don't know what else to try; any help would be appreciated.
> >
> > Regards
> >
> >

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/