Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Open MPI performance on Amazon Cloud
From: Joshua Bernstein (jbernstein_at_[hidden])
Date: 2010-03-19 23:17:36


Hi Hammad,

        Before we launched the Penguin Computing On-Demand service we
conducted several tests comparing the latencies of EC2 with a
traditional HPC-type setup (much like we have with our POD service). I
have a whole suite of tests that I'd be happy to share with you, but
to sum it up, the EC2 latencies were absolutely terrible. For starters,
the EC2 PingPong latency for a zero-byte message was around 150 usec,
compared to 32 usec on a completely untuned Gigabit Ethernet link. For
something actually useful, say a 4K packet, EC2 was roughly 265 usec,
whereas a standard GigE link was a more reasonable (but still high)
71 usec. One "real-world" application that was very sensitive to latency
took almost 30 times longer to run on EC2 than on a real cluster
configuration such as POD.
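
If you want to reproduce that kind of number without pulling in the
full IMB suite, a minimal ping-pong along these lines is enough. This
is only a rough sketch, not the IMB source; the message sizes (0 and
4096 bytes) and the iteration count are just illustrative assumptions.

/* Minimal MPI ping-pong latency sketch (illustrative, not IMB).
 * Rank 0 bounces a message off rank 1; half the averaged round-trip
 * time approximates the one-way latency quoted above. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const int iters = 1000;                  /* assumed iteration count */
    const int sizes[] = { 0, 4096 };         /* zero-byte and 4K messages */
    char buf[4096];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }
    memset(buf, 0, sizeof(buf));

    for (int s = 0; s < 2; s++) {
        int n = sizes[s];
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("%5d bytes: %.1f usec one-way\n",
                   n, (t1 - t0) / (2.0 * iters) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch two ranks across two instances (e.g.
"mpirun -np 2 ./pingpong") and you get roughly the measurement the
numbers above refer to.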

        I have benchmarks from several complete IMB runs, as well as other
types of benchmarks such as STREAM and some iobench results. If you are
interested in any particular type, please let me know, as I'd be happy
to share.
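
For context on the STREAM side: STREAM reports sustained memory
bandwidth from simple array kernels. The snippet below is just an
illustration of its "triad" kernel, not the actual STREAM source, and
the array size and scalar are arbitrary assumptions.

/* Illustrative STREAM-style "triad" kernel: a[i] = b[i] + scalar*c[i].
 * Reports bytes moved per second (2 reads + 1 write per element). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L   /* elements per array (arbitrary assumption) */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)          /* triad */
        a[i] = b[i] + 3.0 * c[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* print one element so the compiler cannot discard the loop */
    printf("check=%g  triad bandwidth: %.1f MB/s\n",
           a[N - 1], 3.0 * N * sizeof(double) / sec / 1e6);

    free(a); free(b); free(c);
    return 0;
}

That gives a network-independent baseline for comparing instance types.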

        <pitch>If you really need an on-demand type system where latency is
an issue, you should look towards our POD offering. We even offer
InfiniBand! On the compute side nothing is virtualized, so your
application runs on the hardware without the overhead of a VM.</pitch>

-Joshua Bernstein
Senior Software Engineer
Penguin Computing

On Mar 19, 2010, at 11:19 AM, Jeff Squyres wrote:

> Yes, it is -- sometimes we get so caught up in other issues that
> user emails slip through the cracks. Sorry about that!
>
> I actually have little experience with EC2 -- other than knowing
> that it works, I don't know much about the performance that you can
> extract from it. I have heard issues about non-uniform latency
> between MPI processes because you really don't know where the
> individual MPI processes may land (network- / VM-wise). It suggests
> to me that EC2 might be best suited for compute-bound jobs (vs.
> latency-bound jobs).
>
> Amusingly enough, the first time someone reported an issue with Open
> MPI on EC2, I tried to submit a help ticket to EC2 support saying,
> "I'm one of the Open MPI developers ... blah blah blah ... is there
> anything I can do to help?" The answer I got back was along the
> lines of, "You need to have a paid EC2 support account before we can
> help you." I think they missed the point, but oh well. :-)
>
>
>
> On Mar 12, 2010, at 12:10 AM, Hammad Siddiqi wrote:
>
>> Dear All,
>> Is this the correct forum for sending these kinds of emails? Please
>> let me know if there is some other mailing list.
>> Thanks.
>> Best Regards,
>> Hammad Siddiqi
>> System Administrator,
>> Centre for High Performance Scientific Computing,
>> School of Electrical Engineering and Computer Science,
>> National University of Sciences and Technology,
>> H-12, Islamabad.
>> Office : +92 (51) 90852207
>> Web: http://hpc.seecs.nust.edu.pk/~hammad/
>>
>>
>> On Sat, Feb 27, 2010 at 10:07 PM, Hammad Siddiqi <hammad.siddiqi_at_[hidden]> wrote:
>> Dear All,
>>
>> I am seeing very weird results with Open MPI 1.4.1 on Amazon EC2. I have
>> used a Small instance and a High-CPU Medium instance for benchmarking
>> latency and bandwidth. Open MPI was configured with the default
>> options. When the code is run in cluster mode, the latency and
>> bandwidth of the Amazon EC2 Small instance are much lower than those of
>> the Amazon EC2 High-CPU Medium instance. To my understanding, the
>> difference should not be that large. The following are links to the
>> graphs and their data:
>>
>> Data: http://hpc.seecs.nust.edu.pk/~hammad/OpenMPI,Latency-BandwidthData.jpg
>> Graphs: http://hpc.seecs.nust.edu.pk/~hammad/OpenMPI,Latency-Bandwidth.jpg
>>
>>
>> Please have a look at them.
>>
>> Is anyone else facing the same problem? Any guidance in this regard
>> will be highly appreciated.
>>
>> Thank you.
>>
>>
>> --
>> Best Regards,
>> Hammad Siddiqi
>> System Administrator,
>> Centre for High Performance Scientific Computing,
>> School of Electrical Engineering and Computer Science,
>> National University of Sciences and Technology,
>> H-12, Islamabad.
>> Office : +92 (51) 90852207
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users