
Open MPI User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-04-27 19:49:04


This is actually expected behavior. We assume that MPI processes
should achieve the lowest possible latency, and therefore use active
polling for most message passing.
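
To illustrate what "active polling" means (a toy sketch, not actual
Open MPI source; poll_devices() is a hypothetical stand-in for our
internal progress engine):

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in for the progress engine, which checks every
   open network device for new events. */
static bool poll_devices(void)
{
    static long calls = 0;
    return ++calls > 100000000L;  /* pretend a message eventually arrives */
}

int main(void)
{
    /* Active polling: spin at full speed rather than sleeping in the
       kernel. This is why a process waiting in MPI_Comm_accept shows
       100% CPU even when nothing is happening. */
    while (!poll_devices())
        ;                         /* no sleep or yield between polls */
    printf("event arrived\n");
    return 0;
}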

Additionally, connections may arrive across multiple devices, so we
need to poll all of them to check for progress and new connections.
We've talked internally about getting better at recognizing
single-device scenarios (and therefore allowing blocking), but
haven't really done much about it. Our internal interfaces were
designed around non-blocking polling for maximum performance (i.e.,
lowest latency / highest bandwidth).
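
In the meantime, one partial mitigation (a suggestion, not a real
fix) is to tell Open MPI to yield the processor while it polls, e.g.:

  mpirun --mca mpi_yield_when_idle 1 --universe foo -np 1 ./server

This does not make the wait truly blocking: the process still spins,
but yields between polling iterations so that other work on the node
can get the CPU. Expect some added latency when messages do arrive.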

On Apr 26, 2007, at 3:48 PM, Nuno Sucena Almeida wrote:

> Hello,
>
> I'm having a weird problem while using MPI_Comm_accept (C) or
> MPI::Comm::Accept (the C++ bindings).
> My "server" runs until the call to this function, but if no client
> connects, it sits there eating all CPU (100%). If a client
> connects, the loop works fine, but when the client disconnects we
> are back to the same high CPU usage.
> I tried Open MPI versions 1.1.2 and 1.2. The machine architectures
> are AMD Opteron and Intel Itanium2 respectively; the former was
> compiled with gcc 4.1.1 and the latter with gcc 3.2.3.
>
> The C++ code is here:
>
> http://compel.bu.edu/~nuno/openmpi/
>
> along with the logs for orted and the 'server' output.
>
> I started orted with:
>
> orted --persistent --seed --scope public --universe foo
>
> and the 'server' with
>
> mpirun --universe foo -np 1 ./server
>
> The code is a C++ conversion of the basic C example posted on the
> MPI Forum website:
>
> http://www.mpi-forum.org/docs/mpi-20-html/node106.htm#Node109
>
> Is there an easy fix for this? I also tried the C version, with the
> same problem...
>
> Regards,
> Nuno
> --
> http://aeminium.org/slug/
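
For anyone following along, here's a minimal sketch of the
accept-loop pattern from the MPI-2 example that Nuno cites (adapted
slightly; the tag conventions are illustrative and error handling is
omitted):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    MPI_Status status;
    int buf, done = 0;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port_name);   /* system-chosen port */
    printf("server available at port: %s\n", port_name);

    while (!done) {
        /* This call blocks until a client connects; per the above,
           Open MPI actively polls while waiting, hence the 100% CPU. */
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                        &client);
        for (;;) {
            MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     client, &status);
            if (status.MPI_TAG == 1) {          /* client is finished */
                MPI_Comm_disconnect(&client);
                break;
            } else if (status.MPI_TAG == 2) {   /* shut the server down */
                MPI_Comm_disconnect(&client);
                done = 1;
                break;
            }
            /* tag 0: service the request in buf */
        }
    }
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}

Note that MPI-2 provides no non-blocking variant of MPI_Comm_accept,
so there is no user-level way to poll for connections with your own
sleep: the spinning happens inside the library, not in user code.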

-- 
Jeff Squyres
Cisco Systems