MPI_Comm_split_type - Creates new communicators based on split types and keys.
#include <mpi.h>
int MPI_Comm_split_type(MPI_Comm comm, int split_type, int key,
MPI_Info info, MPI_Comm *newcomm)
INCLUDE 'mpif.h'
MPI_COMM_SPLIT_TYPE(COMM, SPLIT_TYPE, KEY, INFO, NEWCOMM, IERROR)
INTEGER COMM, SPLIT_TYPE, KEY, INFO, NEWCOMM, IERROR
- comm: Communicator (handle).
- split_type: Type of processes to be grouped together (integer).
- key: Control of rank assignment (integer).
- info: Info argument (handle).
- newcomm: New communicator (handle).
- IERROR: Fortran only: Error status (integer).
This function partitions the group associated with comm into disjoint subgroups, based
on the type specified by split_type. Each subgroup contains all processes
of the same type. Within each subgroup, the processes are ranked in the
order defined by the value of the argument key, with ties broken according
to their rank in the old group. A new communicator is created for each subgroup
and returned in newcomm. This is a collective call; all processes must provide
the same split_type, but each process is permitted to provide a different
value for key. An exception to this rule is that a process may supply the
split_type value MPI_UNDEFINED, in which case MPI_COMM_NULL is returned in newcomm.
- MPI_COMM_TYPE_SHARED: This type splits the communicator into
subcommunicators, each of which can create a shared memory region.
This is a powerful mechanism for dividing a single communicating
group of processes into k subgroups, with k chosen implicitly by the user
(by the number of distinct types asserted over all the processes). The resulting
communicators are non-overlapping. Such a division could be useful for
defining a hierarchy of computations, such as for multigrid or linear algebra.
Multiple calls to MPI_Comm_split_type can be used to overcome the requirement
that each single call produce non-overlapping communicators (each process
belongs to only one subgroup per call). In this way, multiple overlapping communication
structures can be created. Creative use of the split_type and key in such splitting
operations is encouraged.
Note that keys need not be unique. It is MPI_Comm_split_type's
responsibility to sort processes in ascending order according to this key,
and to break ties in a consistent way. If all the keys are specified in
the same way, then all the processes of a given split_type will have the same
relative rank order as they did in their parent group. (In general, they will have
different ranks.)
Essentially, making the key value zero for all processes
of a given split_type means that one needn't really pay attention to the
rank order of the processes in the new communicator.
Almost all MPI
routines return an error value; C routines as the value of the function
and Fortran routines in the last argument.
Before the error value is returned,
the current MPI error handler is called. By default, this error handler
aborts the MPI job, except for I/O function errors. The error handler may
be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
may be used to cause error values to be returned. Note that MPI does not
guarantee that an MPI program can continue past an error.
MPI_Comm_create
MPI_Intercomm_create
MPI_Comm_dup
MPI_Comm_free
MPI_Comm_split