Open MPI User's Mailing List Archives

Subject: [OMPI users] Can we avoid derived datatypes?: Update!
From: devendra rai (rai.devendra_at_[hidden])
Date: 2012-01-05 05:35:14


Dear All,

I read a little more about MPI derived data types, and to answer my own question: in general, we cannot assume that sending a C/C++ struct as a stream of bytes is portable. All machines involved in the transmission must use the same data representation and layout. There was also a +1 for derived data types in terms of clarity of code.

But now that I have decided to use them, I run into another problem. I have a function, Add_New_MPITypes(), that creates and commits the new datatype; it is called just after MPI_Init(...). A few function calls later, I do the MPI send/receive in another function, which looks like:

    void sendMessagetoSlave(void* Payload, int MESSAGETYPE)
    {
      switch (MESSAGETYPE)
      {
      case MSGINSTALLP:
        {
          //Add_MPI_msgInstallP_Type(); /* Was already done in Add_New_MPITypes() */
          msgInstallP InstallPMessage;
          InstallPMessage = *(msgInstallP*)Payload;
          MPI_Ssend(
                    (void*)Payload,               /* payload */
                    sizeof(msgInstallP),          /* size of the payload */
                    MPI_MSGINSTALLP,              /* MPI datatype */
                    InstallPMessage.location,     /* rank to which the message is sent */
                    MASTERSLAVECONTROLMESSAGE,    /* tag */
                    MPI_COMM_WORLD                /* communicator */
                    );
        }
        break;
      default:
        break;
      }
    }

The compiler complains that it does not know the MPI_MSGINSTALLP derived datatype; the exact message is: "'MPI_MSGINSTALLP' was not declared in this scope". I am using mpic++ (1.4.2) to compile and g++ (4.5.3) to link.

Can anyone help?

Best,
Devendra

________________________________
From: devendra rai <rai.devendra_at_[hidden]>
To: Open MPI Users <users_at_[hidden]>
Sent: Wednesday, 4 January 2012, 17:31
Subject: [OMPI users] Can we avoid derived datatypes?

Hello All,

I need to send a struct datatype over MPI. Currently, I send the structure as a series of MPI_BYTEs, and on the other end I dereference it as though it were the struct type. Something like this:

    MPI_Ssend((void*)&MasterSlavePayload, sizeof(MasterSlavePayload), MPI_BYTE,
              destNode, MASTERSLAVECONTROLMESSAGE, MPI_COMM_WORLD);

where MasterSlavePayload is a structure variable. This currently seems to work, since we have a homogeneous environment: same hardware configuration and same operating system. The question is: is this approach portable? Is it safe? Will it work on a system of nodes with mixed processor types?

I read in MPI tutorials: "...Primitive data types are contiguous. Derived data types allow you to specify non-contiguous data in a convenient manner and to treat it as though it was contiguous." So, since I am using a primitive data type, does this mean that the packing of elements is preserved across the MPI_Send/MPI_Recv process? If so, the approach I use should work.

Any ideas?

Thanks a lot, best,
Devendra

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users
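For reference, below is a minimal sketch of one way a derived datatype such as MPI_MSGINSTALLP can be created once and still be visible in every function that sends or receives with it: declare the MPI_Datatype handle at file scope (or as extern in a header shared by all translation units) instead of as a local variable inside the function that commits it. The msgInstallP field layout, the tag value, and the helper name sendInstallP are placeholders, since the real definitions are not shown in this thread. Note also that with a committed struct datatype the count argument describes whole structs, so it is 1 rather than sizeof(msgInstallP).

    #include <mpi.h>
    #include <stddef.h>                      /* offsetof */

    /* Placeholder layout: substitute the real msgInstallP fields. */
    typedef struct {
        int    location;                     /* destination rank */
        double value;                        /* some payload field */
    } msgInstallP;

    #define MASTERSLAVECONTROLMESSAGE 1      /* placeholder tag value */

    /* File-scope handle (or 'extern MPI_Datatype MPI_MSGINSTALLP;' in a shared
       header) so the name is in scope wherever it is used. */
    MPI_Datatype MPI_MSGINSTALLP = MPI_DATATYPE_NULL;

    void Add_New_MPITypes(void)
    {
        int          blocklens[2] = { 1, 1 };
        MPI_Aint     displs[2]    = { offsetof(msgInstallP, location),
                                      offsetof(msgInstallP, value) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

        /* Describe the struct layout, then commit the type for use in sends. */
        MPI_Type_create_struct(2, blocklens, displs, types, &MPI_MSGINSTALLP);
        MPI_Type_commit(&MPI_MSGINSTALLP);
    }

    void sendInstallP(const msgInstallP* msg)
    {
        /* Count is 1: one msgInstallP described by the derived datatype. */
        MPI_Ssend((void*)msg, 1, MPI_MSGINSTALLP,
                  msg->location, MASTERSLAVECONTROLMESSAGE, MPI_COMM_WORLD);
    }

The matching receive passes the same datatype and a count of 1, and the type can be released with MPI_Type_free(&MPI_MSGINSTALLP) before MPI_Finalize().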