Dear All,

I read a little more about MPI derived data types, and to answer my own question:

In general, sending a C/C++ struct as a raw stream of bytes is not portable: it only works if every machine involved in the transmission uses the same data representation and the same struct layout.
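For example, with a hypothetical struct like the one below, the compiler is free to insert padding, and padding, alignment, and byte order all vary across compilers and ABIs:

struct Payload {
    char   kind;    /* 1 byte */
    double value;   /* the compiler may insert 7 padding bytes before this */
};

/* sizeof(struct Payload) is 16 on many common ABIs, but nothing
   guarantees the same layout (or the same byte order within 'value')
   on a different machine or compiler, so the raw byte image of the
   struct is not a portable wire format. */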

Also, there was a +1 for using derived datatypes for the sake of code clarity.

But now that I have decided to use them, I have run into another problem.

I have a function that commits the new datatype: Add_New_MPITypes(). It is called just after MPI_Init(...).
A few function calls later, I am doing an MPI_Ssend/Receive in another function, which looks like:

void sendMessagetoSlave(void* Payload, int MESSAGETYPE)
{
    switch (MESSAGETYPE)
    {
    case INSTALLP: /* placeholder for the actual message-type constant */
    {
        //Add_MPI_msgInstallP_Type(); /* was already done in Add_New_MPITypes() */
        msgInstallP InstallPMessage = *(msgInstallP*)Payload;
        MPI_Ssend((void*)Payload,             /* payload */
                  sizeof(msgInstallP),        /* size of the payload */
                  MPI_MSGINSTALLP,            /* MPI datatype */
                  InstallPMessage.location,   /* rank the message is sent to */
                  MASTERSLAVECONTROLMESSAGE,  /* tag */
                  MPI_COMM_WORLD);            /* communicator */
        break;
    }
    }
}
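For context, Add_New_MPITypes() does roughly the following; the members of msgInstallP shown here are simplified placeholders, not the real layout:

#include <stddef.h>   /* offsetof */
#include <mpi.h>

MPI_Datatype MPI_MSGINSTALLP;   /* the committed derived datatype */

typedef struct {
    int    location;   /* rank the message is addressed to */
    double data;       /* placeholder for the remaining members */
} msgInstallP;

void Add_New_MPITypes(void)
{
    int          blocklens[2] = { 1, 1 };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
    MPI_Aint     displs[2]    = { offsetof(msgInstallP, location),
                                  offsetof(msgInstallP, data) };

    MPI_Type_create_struct(2, blocklens, displs, types, &MPI_MSGINSTALLP);
    MPI_Type_commit(&MPI_MSGINSTALLP);
}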


The compiler complains that it does not know the derived datatype MPI_MSGINSTALLP. Specifically, the error message is:

"‘MPI_MSGINSTALLP’ was not declared in this scope".

I am using mpic++ (1.4.2) to compile and g++ (4.5.3) to link.

Can anyone help?



From: devendra rai <>
To: Open MPI Users <>
Sent: Wednesday, 4 January 2012, 17:31
Subject: [OMPI users] Can we avoid derived datatypes?

Hello All,

I need to send a struct datatype over MPI. Currently, I send the structure as a series of MPI_BYTEs and, on the other end, dereference it as though it were the struct type.

Something like this:

MPI_Ssend((void*)&MasterSlavePayload, sizeof(MasterSlavePayload), MPI_BYTE, destNode, MASTERSLAVECONTROLMESSAGE, MPI_COMM_WORLD);

where MasterSlavePayload is a structure variable.

This currently seems to work, as we have a homogeneous environment: same hardware configuration and same operating system.

The question is: is this approach portable? Is it safe? And will it work on a system of nodes with mixed processor types?

I read the following in an MPI tutorial:

"...Primitive data types are contiguous. Derived data types allow you to specify non-contiguous data in a convenient manner and to treat it as though it was contiguous. "

So, since I am using a primitive data type, does this mean the packing of elements is preserved across the MPI_Send/MPI_Recv process? If so, the approach I use should work.
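For comparison, my understanding is that a typed send lets MPI convert the data representation between heterogeneous nodes, while MPI_BYTE ships the raw bytes unchanged. A minimal sketch:

int n = 42;

/* MPI knows this is an int, so it can convert the representation
   if sender and receiver differ (e.g. in byte order). */
MPI_Ssend(&n, 1, MPI_INT, destNode, MASTERSLAVECONTROLMESSAGE, MPI_COMM_WORLD);

/* Here MPI only sees raw bytes; no conversion is possible. */
MPI_Ssend(&n, sizeof(n), MPI_BYTE, destNode, MASTERSLAVECONTROLMESSAGE, MPI_COMM_WORLD);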

Any ideas?

Thanks a lot,


