Open MPI User's Mailing List Archives

From: Neeraj Chourasia (neeraj_ch1_at_[hidden])
Date: 2007-10-12 08:38:06


Yes, the buffer was being re-used. No, we didn't try to benchmark it with NetPIPE and the other tools, but the program was pretty simple. Do you think I need to test it with bigger chunks (>8 MB) for communication? We also tried manipulating eager_limit and min_rdma_size, but with no success.

Neeraj

On Fri, 12 Oct 2007 13:00:10 +0200 Open MPI Users wrote:

Hello,

> The code was pretty simple. I was trying to send 8 MB of data from one
> rank to another in a loop (say 1000 iterations). I was then taking the
> average of the time taken and calculating the bandwidth.
>
> The above logic I tried both with mpirun MCA parameters and without
> any parameters. And to my surprise, the performance degraded when I
> tried to manipulate them.

That sounds strange. So did you re-use the communication buffers? Did you
try to run some existing benchmarks like NetPIPE [1], IMB or Netgauge [2]?
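For concreteness, a minimal sketch of the kind of loop being described, assuming blocking point-to-point transfers between exactly two ranks (an illustration, not the original poster's code; the 8 MB message size and 1000 iterations follow the numbers quoted above):

/* bandwidth_test.c: rank 0 sends an 8 MB buffer to rank 1 in a loop;
 * the buffer is allocated once and re-used across iterations.
 * Run with at least two ranks, e.g. mpirun -np 2 ./bandwidth_test */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (8 * 1024 * 1024)   /* 8 MB per message */
#define ITERS    1000

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_SIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0)
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("avg %.6f s per message, %.2f MB/s\n",
               elapsed / ITERS, (double)MSG_SIZE * ITERS / elapsed / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Note that one-directional timing like this only approximates the transfer time; a ping-pong pattern (send, then receive a reply, halving the round-trip time) removes the ambiguity, which is what NetPIPE and IMB do.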
> Now I have another question in mind. Is it possible to have an IB hardware
> multicast implementation in Open MPI? I have gone through the
> issues/challenges for the same, but have also read about a couple of people
> who have successfully done it for Ethernet/Gigabit Ethernet and IPoIB, of
> course at an experimental stage. Actually, I want to contribute to it in
> Open MPI and need help with the same.

As far as I know, there are two groups/people working on this. Andy Friedley
implements a "traditional" ACK-based approach (like the one that the OSU
folks published about some time ago) and I implemented a new idea for
extreme scale (see "A practically constant-time MPI Broadcast Algorithm for
large-scale InfiniBand Clusters with Multicast" [3]). I know that my version
is still unstable and has some problems. But I'm working on this.

Best,
  Torsten

[1]: http://www.scl.ameslab.gov/netpipe/
[2]: http://www.unixer.de/research/netgauge/
[3]: https://www.unixer.de/publications/#hoefler-cac07
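On the eager_limit and min_rdma_size tuning mentioned at the top of this message: with the openib BTL those are MCA parameters, set on the mpirun command line roughly as below. The exact parameter names differ between Open MPI versions, so treat these as an assumption and check what your build supports with ompi_info:

  ompi_info --param btl openib
  mpirun -np 2 --mca btl_openib_eager_limit 65536 \
      --mca btl_openib_min_rdma_size 1048576 ./bandwidth_test

Messages below the eager limit are sent eagerly; larger ones switch to a rendezvous/RDMA protocol, so an 8 MB transfer is governed mainly by the RDMA-side settings.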
--
bash$ :(){ :|:&};: --------------------- http://www.unixer.de/
----- Computer scientists are the historians of computing. -- Gordon Bell

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users