Actually, OMPI is distributed with a daemon that does pretty much what you want. Check out "man ompi-server". I originally wrote that code to support cross-application MPI publish/subscribe operations, but we can use it here too. Blame me for not making it more widely known.

The attached patch upgrades ompi-server and modifies the singleton startup to provide the support you want. The solution works as follows:

1. launch "ompi-server -report-uri <filename>". This starts a persistent daemon called "ompi-server" that acts as a rendezvous point for independently started applications. The problem with starting separate applications and wanting them to MPI connect/accept is that the applications need a way to find each other; if they can't discover contact info for the other app, they can't wire up their interconnects. The "ompi-server" tool provides that rendezvous point. (I don't like that comm_accept segfaulted for you - it should have just errored out.)

2. set OMPI_MCA_orte_server=file:<filename> in the environment where you will start your processes. This lets your singleton processes find the ompi-server. The patch also automatically sets the envar that connects the MPI publish/subscribe system for you.

3. run your processes. Since each one thinks it is a singleton, it will detect the presence of the above envar and automatically connect itself to the "ompi-server" daemon. This gives each process the ability to perform any MPI-2 operation. (A worked example follows this list.)
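To make that concrete, here's a rough sketch of the whole flow. The file path and the service name "my-service" are just placeholders I picked for illustration, and the two little programs are generic MPI-2 publish/lookup plus accept/connect examples - not code from the patch itself:

    # step 1: start the rendezvous daemon somewhere every process can reach
    ompi-server -report-uri /tmp/ompi-server.uri

    # step 2: point the singletons at it
    export OMPI_MCA_orte_server=file:/tmp/ompi-server.uri

    /* accept_side.c - opens a port, publishes it, waits for the peer */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm client;
        int data = 42;

        MPI_Init(&argc, &argv);

        /* open a port and publish it through the ompi-server */
        MPI_Open_port(MPI_INFO_NULL, port);
        MPI_Publish_name("my-service", MPI_INFO_NULL, port);

        /* block until the other application connects */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
        MPI_Send(&data, 1, MPI_INT, 0, 0, client);

        MPI_Unpublish_name("my-service", MPI_INFO_NULL, port);
        MPI_Close_port(port);
        MPI_Comm_disconnect(&client);
        MPI_Finalize();
        return 0;
    }

    /* connect_side.c - looks up the published port and connects */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm server;
        int data;

        MPI_Init(&argc, &argv);

        /* ask the ompi-server for the port the peer published */
        MPI_Lookup_name("my-service", MPI_INFO_NULL, port);
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
        MPI_Recv(&data, 1, MPI_INT, 0, 0, server, MPI_STATUS_IGNORE);
        printf("got %d from the other app\n", data);

        MPI_Comm_disconnect(&server);
        MPI_Finalize();
        return 0;
    }

Start the accept side first (or just retry the lookup on the connect side until it succeeds) - both run as plain singletons, no mpirun required.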

I tested this on my machines and it worked, so hopefully it will meet your needs. You only need to run one "ompi-server", period, so long as you locate it where all of the processes can find the contact file and can open a TCP socket to the daemon. There is a way to knit multiple ompi-servers into a broader network (e.g., to connect processes that cannot directly reach a single server due to network segmentation), but it's a tad tricky - let me know if you require it and I'll try to help.

If you have trouble wiring them all into a single communicator, you might ask separately about that and see if one of our MPI experts can provide advice (I'm just the RTE grunt).
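That said, for what it's worth, the usual recipe I've seen is MPI_Intercomm_merge on the intercommunicator that accept/connect hands back - take this as a hedged sketch rather than expert advice:

    /* continuing the accept_side sketch above: collapse the
     * intercommunicator into a single intracommunicator. Use
     * high=0 on one side and high=1 on the other to control
     * rank ordering in the merged communicator. */
    MPI_Comm merged;
    MPI_Intercomm_merge(client, 0, &merged);
    /* on the connect side: MPI_Intercomm_merge(server, 1, &merged); */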

HTH - let me know how this works for you and I'll incorporate it into future OMPI releases.
Ralph