I currently have an application that works perfectly fine on 2 cores, but
I'm moving it to an 8-core machine and I was wondering if OpenMPI has a
built-in solution, or if I would have to code this out manually.
Currently, I'm processing on core X and sending the data to core 0, which
collects it, puts it back together, and eventually writes the data out. I
want to avoid a bottleneck on core 0 by having each core transmit to its
even or odd pair, to alleviate the I/O cost on core 0. (i.e. core 6 would
send to core 4, 4 sends to core 2, 2 sends to core 0; for odd: core 7 sends
to core 5, ..., core 1 sends to core 0.)
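To make the scheme concrete, here is a minimal sketch of the routing I have
in mind, written as plain Python just to show the schedule (next_hop is a
hypothetical helper name, not an MPI call): each non-zero rank forwards to
the next lower rank of the same parity, and rank 1 hands off to rank 0.

```python
def next_hop(rank):
    """Destination rank under the even/odd chain scheme described above.

    Hypothetical helper for illustration only -- not part of any MPI API.
    """
    if rank == 0:
        return None      # core 0 is the final collector
    if rank == 1:
        return 0         # the odd chain terminates at core 0
    return rank - 2      # step down to the same-parity neighbour

# The full schedule for an 8-core machine:
schedule = {r: next_hop(r) for r in range(8)}
# even chain: 6 -> 4 -> 2 -> 0, odd chain: 7 -> 5 -> 3 -> 1 -> 0
```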
I'm told this is a fairly common practice to avoid bottlenecks in one node.
Does OpenMPI contain any call that would take care of this, or would I have
to code it myself?
Thank you again for any help.