Open MPI User's Mailing List Archives

From: Brock Palen (brockp_at_[hidden])
Date: 2006-11-22 00:09:07


Feel free to correct me if I'm wrong.

OMPI assumes you have a fast network and checks for one at startup. If
none is found, it falls back to tcp.
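
If you want to rule out the tcp fallback entirely, I believe you can
exclude tcp with the "^" syntax (going from memory here, so treat it
as a sketch rather than gospel):

   mpirun --mca btl ^tcp -np 2 pi3f90.x

If that run refuses to start, the earlier run was probably falling
back to tcp.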

So if you leave out the --mca option entirely, it should use MX if it
is available. I'm not sure how MX responds if one of the hosts does
not have a working card (not activated); the MPI job will still run,
just without using MX to that host. All the other hosts will use MX.
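
If you want to see which btl is actually used to each host, I think
you can turn up the btl verbosity (parameter name from memory, so
check it with ompi_info first):

   mpirun --mca btl mx,sm,self --mca btl_base_verbose 30 -np 2 pi3f90.x

That should print which components get opened and selected on each
node.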

If Open MPI sees that a node has more than one cpu (SMP), it will use
the sm (shared memory) method instead of MX for communication within
that node, and if a proc sends to itself, the self method is used. So
it's like a priority order.

I know there is a way (it's in the archives) to see the priority by
which OMPI chooses what method to use; it takes the highest-priority
method that will allow the communication to complete.
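
For example, something like this should dump the selection values for
every btl (I think the parameter is called exclusivity or priority,
going from memory):

   ompi_info --param btl all

self should have the highest value, then sm, then the network btls.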

I know there is also some magic being worked on/implemented that will
stripe large messages over multiple networks when more bandwidth is
needed. I don't know whether OMPI will have this ability or not.
Someone else can chime in on that.

Brock Palen
Center for Advanced Computing
brockp_at_[hidden]
(734)936-1985

On Nov 21, 2006, at 11:28 PM, Iannetti, Anthony C. (GRC-RTB0) wrote:

> Dear OpenMPI List:
>
>
>
> From looking at a recent thread, I see an mpirun command with
> shared memory and mx:
>
>
>
> mpirun --mca btl mx,sm,self -np 2 pi3f90.x
>
>
>
> This works. I may have forgotten to mention it, but I am using
> 1.1.2. I see there is an --mca mtl option in version 1.2b1. I do not
> think this exists in 1.1.2.
>
> Still, I would like to know which --mca settings are given automatically.
>
>
>
> Thanks,
>
> Tony
>
>
>
>
>
>
>
> Anthony C. Iannetti, P.E.
>
> NASA Glenn Research Center
>
> Propulsion Systems Division, Combustion Branch
>
> 21000 Brookpark Road, MS 5-10
>
> Cleveland, OH 44135
>
> phone: (216)433-5586
>
> email: Anthony.C.Iannetti_at_[hidden]
>
>
>
> Please note: All opinions expressed in this message are my own and
> NOT of NASA. Only the NASA Administrator can speak on behalf of NASA.
>
>
>
> From: Iannetti, Anthony C. (GRC-RTB0)
> Sent: Tuesday, November 21, 2006 8:39 PM
> To: 'users_at_[hidden]'
> Subject: MX performance problem on two processor nodes
>
>
>
> Dear OpenMPI List:
>
>
>
> I am running the Myrinet MX btl with OpenMPI on Mac OS X
> 10.4, and I am running into a problem. When I run on one processor
> per node, OpenMPI runs just fine. When I run on two processors
> per node (slots=2), it seems to take forever (something is hanging).
>
>
>
> Here is the command:
>
> mpirun --mca btl mx,self -np 2 pi3f90.x
>
>
>
> However, if I give the command:
>
> mpirun -np 2 pi3f90.x
>
>
>
> The process runs normally. But I do not know if it is using the
> Myrinet network. Is there a way to diagnose this problem? mpirun -v
> and -d do not seem to indicate which mca is actually being used.
>
>
>
> Thanks,
>
> Tony
>
>
>
> Anthony C. Iannetti, P.E.
>
> NASA Glenn Research Center
>
> Propulsion Systems Division, Combustion Branch
>
> 21000 Brookpark Road, MS 5-10
>
> Cleveland, OH 44135
>
> phone: (216)433-5586
>
> email: Anthony.C.Iannetti_at_[hidden]
>
>
>
> Please note: All opinions expressed in this message are my own and
> NOT of NASA. Only the NASA Administrator can speak on behalf of NASA.
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users