Thanks for the quick response. Yes, that is what I meant. I thought there was no way around what I am doing, but it is always good to ask an expert rather than assume!
Natarajan CS wrote: You mean with one standard MPI call? I don't think so.
First, my apologies for the duplicate post on the LAM/MPI list. I have the following simple MPI code. I was wondering whether there is a workaround for sending a dynamically allocated 2-D matrix. Currently I send the matrix row by row; since the rows are not contiguous in memory, I cannot send the entire matrix at once. I realize one option is to change the malloc so the data sits in one contiguous block, but can I keep the matrix definition as below and still send the entire matrix in one call?
In MPI, there is a notion of derived datatypes, but I'm not convinced that is what you want. A derived datatype is essentially a static template of data and holes in memory: e.g., 3 bytes, then skip 7 bytes, then another 2 bytes, then skip 500 bytes, then 1 last byte. Something like that.

Your 2-D matrices differ from that model in two respects. One is that the pattern in memory is different for each matrix you allocate. The other is that your matrix definition includes pointer information that won't be the same in every process's address space. You could overcome the first problem by changing alloc_matrix() to lay the data out in some fixed pattern in memory for a given r and c, but you'd still have pointer information in there that you couldn't blindly copy from one process's address space to another.