Are you wanting to run the solvers on different nodes within the
allocation? Or on different cores across all nodes?
For different nodes, you can just use -host to specify which nodes you want
that specific mpirun to use, or a hostfile should also be fine. The FAQ's
comment was aimed at people who were giving us the PBS_NODEFILE as the
hostfile - which could confuse older versions of OMPI into using the rsh
launcher instead of Torque. Remember that we have the relative node syntax
so you don't actually have to name the nodes - this helps if you want to
run batch scripts and won't know the node names in advance.
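To make that concrete, here is a sketch of the "different nodes" case using the relative node syntax, assuming a 4-node Torque allocation; the solver binary names and process counts are hypothetical and just for illustration:

```shell
#!/bin/sh
# Sketch: run each solver on a different subset of the Torque allocation.
# +n0 means the first node in the allocation, +n1 the second, etc., so the
# batch script never needs to know the actual hostnames.
# "solver_a" and "solver_b" are placeholder binary names; adjust -np as needed.

mpirun -np 8 -host +n0,+n1 ./solver_a   # first solver on nodes 0-1
mpirun -np 8 -host +n2,+n3 ./solver_b   # second solver on nodes 2-3
```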
For different cores across all nodes, you would need to use some binding
trickery that may not be in the 1.4 series, so you might need to update to
the 1.6 series. You have two options: (a) have Torque bind your mpirun to
specific cores (I believe it can do that), or (b) use --slot-list to
specify which cores that particular mpirun is to use. You can then separate
the two solvers but still run on all the nodes, if that is of concern.
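Option (b) might look something like the following sketch, assuming 8 cores per node; again the binary names and counts are hypothetical, and the exact slot-list syntax should be checked against your installed version's mpirun man page:

```shell
#!/bin/sh
# Sketch: split the cores on every node between the two solvers.
# Solver A is restricted to cores 0-3 and solver B to cores 4-7,
# while both still span all nodes of the allocation.

mpirun -np 16 --slot-list 0-3 ./solver_a
mpirun -np 16 --slot-list 4-7 ./solver_b
```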
On Wed, Nov 27, 2013 at 6:10 AM, <Ola.Widlund_at_[hidden]> wrote:
> We have an in-house application where we run two solvers in a loosely
> coupled manner: The first solver runs a timestep, then the second solver
> does work on the same timestep, etc. As the two solvers never execute at
> the same time, we would like to run the two solvers in the same allocation
> (launching mpirun once for each of them). RAM is not an issue, so there
> should not be any risk of excessive swapping degrading performance.
> We use openmpi-1.4.5 compiled with torque integration. The torque
> integration means we do not give a hostfile to mpirun, it will itself query
> torque for the allocation info.
> Can we force one of the solvers to run in a *subset* of the full
> allocation? How do we do that? I read in the FAQ that providing a hostfile
> to mpirun in this case (when it's not needed due to torque integration)
> would cause a lot of problems...
> Thanks in advance,