
Open MPI User's Mailing List Archives


From: Allan, Mark (UK Filton) (Mark.Allan2_at_[hidden])
Date: 2007-06-04 10:00:47

I'm new to this list and wonder if anyone can help. I'm trying to
measure communication time between parallel processes using Open MPI.
As an example, I might be running on 4 dual-core processors (8 processes
in total). I was hoping that communication using shared memory (between
the two cores on the same chip) would be faster than communication over
the network. To measure communication time I send a block of data from
each process to each other process using a blocking send, and time how
long it takes. I repeat this 50 times (for example) and take the average
time. The code is something like:
 for (int i = 0; i < numProcs; i++) {
    for (int j = 0; j < numProcs; j++) {
       if (i == j) continue;  // a blocking send to self can deadlock
       // rank i sends to rank j; the other ranks just run the loop
       double time = 0.0;
       for (int kk = 0; kk < 50; kk++) {
          double start = MPI::Wtime();
          if (rank == i) MPI::COMM_WORLD.Send(buf, count, MPI::DOUBLE, j, 0);
          if (rank == j) MPI::COMM_WORLD.Recv(buf, count, MPI::DOUBLE, i, 0);
          double end = MPI::Wtime();
          time += end - start;
       }
       if (rank == i) out << i << " " << j << " " << time/50.0 << std::endl;
    }
 }
The problem I am having is that I'm not noticing any appreciable
difference in communication times between shared memory and network
protocols. I expected shared memory to be faster(!?!).
Does anyone have a better way of measuring communication times?
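One thing I wondered about: would forcing the transport explicitly with
Open MPI's MCA parameters be a fair way to compare the two paths? For
example (here `./bench` is just a placeholder for my benchmark
executable):

```shell
# restrict Open MPI to the shared-memory BTL (plus self) for on-node ranks
mpirun --mca btl self,sm -np 8 ./bench

# force all communication over TCP, even between ranks on the same node
mpirun --mca btl self,tcp -np 8 ./bench
```

That way the same code could be timed once per transport, rather than
relying on Open MPI's automatic selection.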
