
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Type_struct for structs with dynamic arrays
From: George Bosilca (bosilca_at_[hidden])
Date: 2008-08-17 17:30:54


Jitendra,

There is a problem with the addresses you provide to MPI_Type_struct.
For each array, instead of passing the address of the array data, you
are passing the address of the pointer member inside the struct.

Try the following
MPI_Get_address(&parentPop[0].indiv[0].intvar[0], &disp[0]);
instead of
MPI_Get_address(&parentPop[0].indiv[0].intvar, &disp[0]);

Please note the [0] after the array name. Do the same for all the
arrays, and I think MPI_Type_struct will do the rest.

By the way, you don't have to subtract disp[0] from all the addresses.
Instead, you can use MPI_BOTTOM and keep all of the addresses absolute.
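As a rough sketch of the MPI_BOTTOM variant (untested here; the blockcounts and types arrays are assumed to be filled as in the original post, and the function name build_individual_type is hypothetical): take absolute addresses of the array data, skip the base-subtraction loop, and send relative to MPI_BOTTOM.

```c
#include <mpi.h>

MPI_Datatype build_individual_type(individual *ind,
                                   int *blockcounts, MPI_Datatype *types)
{
    MPI_Aint disp[10];
    MPI_Datatype newtype;

    /* Note the [0]: address of the array data, not of the pointer member */
    MPI_Get_address(&ind->intvar[0], &disp[0]);
    MPI_Get_address(&ind->realvar[0], &disp[1]);
    /* ... same pattern for the remaining blocks ... */

    /* No "disp[i] -= base" loop: the displacements stay absolute */
    MPI_Type_create_struct(10, blockcounts, disp, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}

/* Later, send relative to MPI_BOTTOM:
   MPI_Send(MPI_BOTTOM, 1, newtype, dest, tag, MPI_COMM_WORLD); */
```

One caveat with this approach: because the displacements are absolute, the resulting datatype describes that one particular struct instance and its current allocations, so it must be rebuilt if the arrays are reallocated or another instance is sent.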

   george.

On Aug 11, 2008, at 1:07 AM, Jitendra Kumar wrote:

> Hi,
> I am trying to use MPI derived datatype doutines for sending a struct
> which contains dynamically allocated arrays. I tried implementing it
> using MPI_Type_struct. It doesn't throws any error but messages being
> received (of the declared datatype) aren't correct. Some memory
> corruption seems to be going on as the value of 'rank' at receive end
> are changed to 0 right after the receive . Below are the snippets of
> my
> struct and implementation of the derived datatype.
> I am not sure where things are going wrong. I would highly appreciate
> any pointers or suggestions. Is there any better alternative way
> instead
> of MPI_Type_struct considering that frequent communication of these
> structs are needed?
>
> Thanks,
> Jitendra
>
> The struct looks like this:
> typedef struct
> {
>     int *intvar;
>     double *realvar;
>     double *binvar;
>     int **gene;
>
>     double *obj;
>     double *constr;
>     double constr_violation;
>     double crowd_dist;
>     int rank;
>     double *strategyParameter;
> } individual;
>
> Implementation of the derived datatype:
>
> blockcounts[0] = parentPop[0].numInteger;
> blockcounts[1] = parentPop[0].numReal;
> blockcounts[2] = parentPop[0].numBinary;
> sum = 0;
> for (i = 0; i < parentPop[0].numBinary; i++)
> {
>     sum = sum + parentPop[0].nbits[i];
> }
>
> blockcounts[3] = sum;
> blockcounts[4] = parentPop[0].nobj;
> blockcounts[5] = parentPop[0].ncon;
> blockcounts[6] = 1;
> blockcounts[7] = 1;
> blockcounts[8] = 1;
> blockcounts[9] = parentPop[0].numInteger + parentPop[0].numReal;
>
> types[0] = MPI_INT;
> types[1] = MPI_DOUBLE;
> types[2] = MPI_DOUBLE;
> types[3] = MPI_INT;
> types[4] = MPI_DOUBLE;
> types[5] = MPI_DOUBLE;
> types[6] = MPI_DOUBLE;
> types[7] = MPI_DOUBLE;
> types[8] = MPI_INT;
> types[9] = MPI_DOUBLE;
>
> MPI_Get_address(&parentPop[0].indiv[0].intvar, &disp[0]);
> printf("parentpop.indiv0 %ld disp %ld (%ld)\n",
>        &parentPop[0].indiv[0], disp, disp[0]);
> MPI_Get_address(&parentPop[0].indiv[0].realvar, &disp[1]);
> printf("disp 1 %ld\n", disp[1]);
> MPI_Get_address(&parentPop[0].indiv[0].binvar, &disp[2]);
> printf("disp 2 %ld\n", disp[2]);
> MPI_Get_address(&parentPop[0].indiv[0].gene, &disp[3]);
> MPI_Get_address(&parentPop[0].indiv[0].obj, &disp[4]);
> MPI_Get_address(&parentPop[0].indiv[0].constr, &disp[5]);
> MPI_Get_address(&parentPop[0].indiv[0].constr_violation, &disp[6]);
> MPI_Get_address(&parentPop[0].indiv[0].crowd_dist, &disp[7]);
> MPI_Get_address(&parentPop[0].indiv[0].rank, &disp[8]);
> MPI_Get_address(&parentPop[0].indiv[0].strategyParameter, &disp[9]);
> base = disp[0];
> for (i = 0; i < 10; i++)
> {
>     disp[i] -= base;
> }
>
> MPI_Type_create_struct(10, blockcounts, disp, types, &Individual);
>
> /* Check that the datatype has correct extent */
> MPI_Type_extent(Individual, &extent);
> if (extent != sizeof(individual))
> {
>     MPI_Datatype indold = Individual;
>     MPI_Type_create_resized(indold, 0, sizeof(individual), &Individual);
>     MPI_Type_free(&indold);
> }
> MPI_Type_commit(&Individual);
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users


