
Open MPI User's Mailing List Archives



From: Mike Houston (mhouston_at_[hidden])
Date: 2007-03-24 19:33:27


Also make sure that /tmp is user-writable on every node. By default, that is
where Open MPI likes to put its session files.

-Mike
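A quick way to check that advice on each node is a sketch like the one below. (The probe itself is a safe way to test writability; note that Open MPI can also be pointed at a different session directory, e.g. via the TMPDIR environment variable, though the exact lookup order depends on the Open MPI version.)

```python
import os
import tempfile

def tmp_is_writable(path="/tmp"):
    """Return True only if we can actually create and remove a file in `path`."""
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(name)
        return True
    except OSError:
        return False

print("/tmp writable:", tmp_is_writable())
```

Running this on each node (rather than just the head node) matters, since the daemon on the remote node is the one that needs the scratch space.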

David Burns wrote:
> Could also be a firewall problem. Make sure all nodes in the cluster
> accept tcp packets from all others.
>
> Dave
>
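The firewall check above can be scripted from any node. This is a hedged sketch: the node names come from the log below, and since the TCP ports Open MPI 1.1 picks are dynamic, probing a fixed port such as ssh's 22 only confirms basic reachability, not that the firewall passes the ports Open MPI will actually use.

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Attempt a TCP connect; True if the host accepts the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. confirm each node can at least reach the others on the ssh port:
# for node in ("node01", "node03"):
#     print(node, can_connect(node, 22))
```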
> Walker, David T. wrote:
>
>> I am presently trying to get OpenMPI up and running on a small cluster
>> of MacPros (dual dual-core Xeons) using TCP. Open MPI was compiled using
>> the Intel Fortran Compiler (9.1) and gcc. When I try to launch a job on
>> a remote node, orted starts on the remote node but then times out. I am
>> guessing that the problem is SSH related. Any thoughts?
>>
>> Thanks,
>>
>> Dave
>>
>> Details:
>>
>> I am using SSH, set up as outlined in the FAQ, using ssh-agent to allow
>> passwordless logins. The paths for all the libraries appear to be OK.
>>
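One way to verify that setup non-interactively is the sketch below (the node name is taken from the log that follows; `BatchMode=yes` makes ssh fail immediately instead of prompting for a password, which is exactly the behavior a non-interactive mpirun launch needs):

```python
import subprocess

def passwordless_ssh_ok(host, timeout=15):
    """True only if `ssh host true` succeeds with no password prompt."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
             host, "true"],
            capture_output=True, timeout=timeout)
        return result.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False

# print(passwordless_ssh_ok("node03"))
```

If this returns False for a node, orted launched over rsh/ssh on that node will fail the same way.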
>> A simple MPI code (Hello_World_Fortran) launched on node01 will run OK
>> for up to four processors (all on node01). The output is shown here.
>>
>> node01 1247% mpirun --debug-daemons -hostfile machinefile -np 4
>> Hello_World_Fortran
>> Calling MPI_INIT
>> Calling MPI_INIT
>> Calling MPI_INIT
>> Calling MPI_INIT
>> Fortran version of Hello World, rank 2
>> Rank 0 is present in Fortran version of Hello World.
>> Fortran version of Hello World, rank 3
>> Fortran version of Hello World, rank 1
>>
>> For five processors mpirun tries to start an additional process on
>> node03. Everything launches the same on node01 (four instances of
>> Hello_World_Fortran are launched). On node03, orted starts, but times
>> out after 10 seconds and the output below is generated.
>>
>> node01 1246% mpirun --debug-daemons -hostfile machinefile -np 5
>> Hello_World_Fortran
>> Calling MPI_INIT
>> Calling MPI_INIT
>> Calling MPI_INIT
>> Calling MPI_INIT
>> [node03:02422] [0,0,1]-[0,0,0] mca_oob_tcp_peer_send_blocking: send()
>> failed with errno=57
>> [node01.local:21427] ERROR: A daemon on node node03 failed to start as
>> expected.
>> [node01.local:21427] ERROR: There may be more information available from
>> [node01.local:21427] ERROR: the remote shell (see above).
>> [node01.local:21427] ERROR: The daemon exited unexpectedly with status
>> 255.
>> forrtl: error (78): process killed (SIGTERM)
>> forrtl: error (78): process killed (SIGTERM)
>>
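For reference, the `errno=57` in the failure above is platform-specific; on Mac OS X (which uses BSD-derived errno values) 57 is ENOTCONN, "Socket is not connected", consistent with a daemon that could not establish its TCP connection back to mpirun. A quick way to decode an errno value on whatever node produced it:

```python
import errno
import os

def describe_errno(num):
    """Map an errno number to the local platform's symbolic name and message."""
    name = errno.errorcode.get(num, "?")
    return "%d (%s): %s" % (num, name, os.strerror(num))

print(describe_errno(57))  # on the Mac OS X nodes this reports ENOTCONN
```

Note the mapping differs by OS, so the decoding should be run on the same platform that logged the error.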
>> Here is the ompi_info output:
>>
>>
>> node01 1248% ompi_info --all
>> Open MPI: 1.1.2
>> Open MPI SVN revision: r12073
>> Open RTE: 1.1.2
>> Open RTE SVN revision: r12073
>> OPAL: 1.1.2
>> OPAL SVN revision: r12073
>> MCA memory: darwin (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA maffinity: first_use (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA timer: darwin (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
>> MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>> MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA coll: hierarch (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA coll: self (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA io: romio (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA btl: self (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
>> MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
>> MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
>> MCA ras: dash_host (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA ras: hostfile (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA ras: localhost (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA ras: xgrid (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA rds: hostfile (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA rmaps: round_robin (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA pls: xgrid (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA sds: env (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.2)
>> MCA sds: singleton (MCA v1.0, API v1.0, Component
>> v1.1.2)
>> Prefix: /usr/local
>> Bindir: /usr/local/bin
>> Libdir: /usr/local/lib
>> Incdir: /usr/local/include
>> Pkglibdir: /usr/local/lib/openmpi
>> Sysconfdir: /usr/local/etc
>> Configured architecture: i386-apple-darwin8.7.3
>> Configured by: root
>> Configured on: Wed Jan 24 10:46:02 EST 2007
>> Configure host: node01
>> Built by: root
>> Built on: Wed Jan 24 11:02:16 EST 2007
>> Built host: node01
>> C bindings: yes
>> C++ bindings: yes
>> Fortran77 bindings: yes (single underscore)
>> Fortran90 bindings: yes
>> Fortran90 bindings size: small
>> C compiler: gcc
>> C compiler absolute: /usr/bin/gcc
>> C char size: 1
>> C bool size: 1
>> C short size: 2
>> C int size: 4
>> C long size: 4
>> C float size: 4
>> C double size: 8
>> C pointer size: 4
>> C char align: 1
>> C bool align: 1
>> C int align: 4
>> C float align: 4
>> C double align: 4
>> C++ compiler: g++
>> C++ compiler absolute: /usr/bin/g++
>> Fortran77 compiler: ifort
>> Fortran77 compiler abs: /usr/bin/ifort
>> Fortran90 compiler: ifort
>> Fortran90 compiler abs: /usr/bin/ifort
>> Fort integer size: 4
>> Fort logical size: 4
>> Fort logical value true: -1
>> Fort have integer1: yes
>> Fort have integer2: yes
>> Fort have integer4: yes
>> Fort have integer8: yes
>> Fort have integer16: no
>> Fort have real4: yes
>> Fort have real8: yes
>> Fort have real16: yes
>> Fort have complex8: yes
>> Fort have complex16: yes
>> Fort have complex32: yes
>> Fort integer1 size: 1
>> Fort integer2 size: 2
>> Fort integer4 size: 4
>> Fort integer8 size: 8
>> Fort integer16 size: -1
>> Fort real size: 4
>> Fort real4 size: 4
>> Fort real8 size: 8
>> Fort real16 size: 16
>> Fort dbl prec size: 4
>> Fort cplx size: 4
>> Fort dbl cplx size: 4
>> Fort cplx8 size: 8
>> Fort cplx16 size: 16
>> Fort cplx32 size: 32
>> Fort integer align: 1
>> Fort integer1 align: 1
>> Fort integer2 align: 1
>> Fort integer4 align: 1
>> Fort integer8 align: 1
>> Fort integer16 align: -1
>> Fort real align: 1
>> Fort real4 align: 1
>> Fort real8 align: 1
>> Fort real16 align: 1
>> Fort dbl prec align: 1
>> Fort cplx align: 1
>> Fort dbl cplx align: 1
>> Fort cplx8 align: 1
>> Fort cplx16 align: 1
>> Fort cplx32 align: 1
>> C profiling: yes
>> C++ profiling: yes
>> Fortran77 profiling: yes
>> Fortran90 profiling: yes
>> C++ exceptions: no
>> Thread support: posix (mpi: no, progress: no)
>> Build CFLAGS: -O3 -DNDEBUG -fno-strict-aliasing
>> Build CXXFLAGS: -O3 -DNDEBUG -finline-functions
>> Build FFLAGS:
>> Build FCFLAGS:
>> Build LDFLAGS: -export-dynamic -Wl,-u,_munmap
>> -Wl,-multiply_defined,suppress
>> Build LIBS:
>> Wrapper extra CFLAGS:
>> Wrapper extra CXXFLAGS:
>> Wrapper extra FFLAGS:
>> Wrapper extra FCFLAGS:
>> Wrapper extra LDFLAGS: -Wl,-u,_munmap
>> -Wl,-multiply_defined,suppress
>> Wrapper extra LIBS: -ldl
>> Internal debug support: no
>> MPI parameter check: runtime
>> Memory profiling support: no
>> Memory debugging support: no
>> libltdl support: yes
>> MCA mca: parameter "mca_param_files" (current value:
>>
>> "/Users/dwalker/.openmpi/mca-params.conf:/usr/local/etc/openmpi-mca-para
>> ms.conf")
>> Path for MCA configuration files containing
>> default parameter values
>> MCA mca: parameter "mca_component_path" (current value:
>> "/usr/local/lib/openmpi:/Users/dwalker/.openmpi/components")
>> Path where to look for Open MPI and ORTE
>> components
>> MCA mca: parameter "mca_verbose" (current value:
>> <none>)
>> Top-level verbosity parameter
>> MCA mca: parameter "mca_component_show_load_errors"
>> (current value: "1")
>> Whether to show errors for components that
>> failed to load or not
>> MCA mca: parameter "mca_component_disable_dlopen"
>> (current value: "0")
>> Whether to attempt to disable opening dynamic
>> components or not
>> MCA mpi: parameter "mpi_param_check" (current value:
>> "1")
>> Whether you want MPI API parameters checked at
>> run-time or not. Possible values are 0 (no checking) and 1
>> (perform checking at run-time)
>> MCA mpi: parameter "mpi_yield_when_idle" (current
>> value: "0")
>> Yield the processor when waiting for MPI
>> communication (for MPI processes, will default to 1 when oversubscribing
>> nodes)
>> MCA mpi: parameter "mpi_event_tick_rate" (current
>> value: "-1")
>> How often to progress TCP communications (0 =
>> never, otherwise specified in microseconds)
>> MCA mpi: parameter "mpi_show_handle_leaks" (current
>> value: "0")
>> Whether MPI_FINALIZE shows all MPI handles
>> that were not freed or not
>> MCA mpi: parameter "mpi_no_free_handles" (current
>> value: "0")
>> Whether to actually free MPI objects when
>> their handles are freed
>> MCA mpi: parameter "mpi_show_mca_params" (current
>> value: "0")
>> Whether to show all MCA parameter value during
>> MPI_INIT or not (good for reproducability of MPI jobs)
>> MCA mpi: parameter "mpi_show_mca_params_file" (current
>> value: <none>)
>> If mpi_show_mca_params is true, setting this
>> string to a valid filename tells Open MPI to dump all the MCA
>> parameter values into a file suitable for
>> reading via the mca_param_files parameter (good for reproducability of
>> MPI jobs)
>> MCA mpi: parameter "mpi_paffinity_alone" (current
>> value: "0")
>> If nonzero, assume that this job is the only
>> (set of) process(es) running on each node and bind processes to
>> processors, starting with processor ID 0
>> MCA mpi: parameter "mpi_keep_peer_hostnames" (current
>> value: "1")
>> If nonzero, save the string hostnames of all
>> MPI peer processes (mostly for error / debugging output messages).
>> This can add quite a bit of memory usage to
>> each MPI process.
>> MCA mpi: parameter "mpi_abort_delay" (current value:
>> "0")
>> If nonzero, print out an identifying message
>> when MPI_ABORT is invoked (hostname, PID of the process that called
>> MPI_ABORT) and delay for that many seconds
>> before exiting (a negative delay value means to never abort). This
>> allows attaching of a debugger before quitting
>> the job.
>> MCA mpi: information "mpi_abort_print_stack" (value:
>> "0")
>> If nonzero, print out a stack trace when
>> MPI_ABORT is invoked
>> MCA mpi: parameter "mpi_leave_pinned" (current value:
>> "0")
>> leave_pinned
>> MCA mpi: parameter "mpi_leave_pinned_pipeline" (current
>> value: "0")
>> leave_pinned_pipeline
>> MCA orte: parameter "orte_base_user_debugger" (current
>> value: "totalview @mpirun@ -a @mpirun_args@ : fxp @mpirun@ -a
>> @mpirun_args@")
>> Sequence of user-level debuggers to search for
>> in orterun
>> MCA orte: parameter "orte_debug" (current value: "0")
>> Whether or not to enable debugging output for
>> all ORTE components (0 or 1)
>> MCA opal: parameter "opal_signal" (current value:
>> "6,10,8,11")
>> If a signal is received, display the stack
>> trace frame
>> MCA memory: parameter "memory" (current value: <none>)
>> Default selection set of components for the
>> memory framework (<none> means "use all components that can be
>> found")
>> MCA memory: parameter "memory_base_verbose" (current
>> value: "0")
>> Verbosity level for the memory framework (0 =
>> no verbosity)
>> MCA memory: parameter "memory_darwin_priority" (current
>> value: "0")
>> MCA paffinity: parameter "paffinity" (current value: <none>)
>> Default selection set of components for the
>> paffinity framework (<none> means "use all components that can be
>> found")
>> MCA maffinity: parameter "maffinity" (current value: <none>)
>> Default selection set of components for the
>> maffinity framework (<none> means "use all components that can be
>> found")
>> MCA maffinity: parameter "maffinity_first_use_priority"
>> (current value: "10")
>> Priority of the first_use maffinity component
>> MCA timer: parameter "timer" (current value: <none>)
>> Default selection set of components for the
>> timer framework (<none> means "use all components that can be found")
>> MCA timer: parameter "timer_base_verbose" (current value:
>> "0")
>> Verbosity level for the timer framework (0 =
>> no verbosity)
>> MCA timer: parameter "timer_darwin_priority" (current
>> value: "0")
>> MCA allocator: parameter "allocator" (current value: <none>)
>> Default selection set of components for the
>> allocator framework (<none> means "use all components that can be
>> found")
>> MCA allocator: parameter "allocator_base_verbose" (current
>> value: "0")
>> Verbosity level for the allocator framework (0
>> = no verbosity)
>> MCA allocator: parameter "allocator_basic_priority" (current
>> value: "0")
>> MCA allocator: parameter "allocator_bucket_num_buckets"
>> (current value: "30")
>> MCA allocator: parameter "allocator_bucket_priority" (current
>> value: "0")
>> MCA coll: parameter "coll" (current value: <none>)
>> Default selection set of components for the
>> coll framework (<none> means "use all components that can be found")
>> MCA coll: parameter "coll_base_verbose" (current value:
>> "0")
>> Verbosity level for the coll framework (0 = no
>> verbosity)
>> MCA coll: parameter "coll_basic_priority" (current
>> value: "10")
>> Priority of the basic coll component
>> MCA coll: parameter "coll_basic_crossover" (current
>> value: "4")
>> Minimum number of processes in a communicator
>> before using the logarithmic algorithms
>> MCA coll: parameter "coll_hierarch_priority" (current
>> value: "0")
>> Priority of the hierarchical coll component
>> MCA coll: parameter "coll_hierarch_verbose" (current
>> value: "0")
>> Turn verbose message of the hierarchical coll
>> component on/off
>> MCA coll: parameter "coll_hierarch_use_rdma" (current
>> value: "0")
>> Switch from the send btl list used to detect
>> hierarchies to the rdma btl list
>> MCA coll: parameter "coll_hierarch_ignore_sm" (current
>> value: "0")
>> Ignore sm protocol when detecting hierarchies.
>> Required to enable the usage of protocol specific collective
>> operations
>> MCA coll: parameter "coll_hierarch_symmetric" (current
>> value: "0")
>> Assume symmetric configuration
>> MCA coll: parameter "coll_self_priority" (current value:
>> "75")
>> MCA coll: parameter "coll_sm_priority" (current value:
>> "0")
>> Priority of the sm coll component
>> MCA coll: parameter "coll_sm_control_size" (current
>> value: "4096")
>> Length of the control data -- should usually
>> be either the length of a cache line on most SMPs, or the size of a
>> page on machines that support direct memory
>> affinity page placement (in bytes)
>> MCA coll: parameter "coll_sm_bootstrap_filename"
>> (current value: "shared_mem_sm_bootstrap")
>> Filename (in the Open MPI session directory)
>> of the coll sm component bootstrap rendezvous mmap file
>> MCA coll: parameter "coll_sm_bootstrap_num_segments"
>> (current value: "8")
>> Number of segments in the bootstrap file
>> MCA coll: parameter "coll_sm_fragment_size" (current
>> value: "8192")
>> Fragment size (in bytes) used for passing data
>> through shared memory (will be rounded up to the nearest
>> control_size size)
>> MCA coll: parameter "coll_sm_mpool" (current value:
>> "sm")
>> Name of the mpool component to use
>> MCA coll: parameter "coll_sm_comm_in_use_flags" (current
>> value: "2")
>> Number of "in use" flags, used to mark a
>> message passing area segment as currently being used or not (must be >=
>> 2
>> and <= comm_num_segments)
>> MCA coll: parameter "coll_sm_comm_num_segments" (current
>> value: "8")
>> Number of segments in each communicator's
>> shared memory message passing area (must be >= 2, and must be a multiple
>> of comm_in_use_flags)
>> MCA coll: parameter "coll_sm_tree_degree" (current
>> value: "4")
>> Degree of the tree for tree-based operations
>> (must be => 1 and <= min(control_size, 255))
>> MCA coll: information
>> "coll_sm_shared_mem_used_bootstrap" (value: "160")
>> Amount of shared memory used in the shared
>> memory bootstrap area (in bytes)
>> MCA coll: parameter "coll_sm_info_num_procs" (current
>> value: "4")
>> Number of processes to use for the calculation
>> of the shared_mem_size MCA information parameter (must be => 2)
>> MCA coll: information "coll_sm_shared_mem_used_data"
>> (value: "548864")
>> Amount of shared memory used in the shared
>> memory data area for info_num_procs processes (in bytes)
>> MCA coll: parameter "coll_tuned_priority" (current
>> value: "30")
>> Priority of the tuned coll component
>> MCA coll: parameter
>> "coll_tuned_pre_allocate_memory_comm_size_limit" (current value:
>> "32768")
>> Size of communicator were we stop
>> pre-allocating memory for the fixed internal buffer used for message
>> requests
>> etc that is hung off the communicator data
>> segment. I.e. if you have a 100'000 nodes you might not want to
>> pre-allocate 200'000 request handle slots per
>> communicator instance!
>> MCA coll: parameter "coll_tuned_use_dynamic_rules"
>> (current value: "0")
>> Switch used to decide if we use static (if
>> statements) or dynamic (built at runtime) decision function rules
>> MCA coll: parameter "coll_tuned_init_tree_fanout"
>> (current value: "4")
>> Inital fanout used in the tree topologies for
>> each communicator. This is only an initial guess, if a tuned
>> collective needs a different fanout for an
>> operation, it build it dynamically. This parameter is only for the
>> first guess and might save a little time
>> MCA coll: parameter "coll_tuned_init_chain_fanout"
>> (current value: "4")
>> Inital fanout used in the chain (fanout
>> followed by pipeline) topologies for each communicator. This is only an
>> initial guess, if a tuned collective needs a
>> different fanout for an operation, it build it dynamically. This
>> parameter is only for the first guess and
>> might save a little time
>> MCA coll: parameter "coll_tuned_allreduce_algorithm"
>> (current value: "0")
>> Which allreduce algorithm is used. Can be
>> locked down to choice of: 0 ignore, 1 basic linear, 2 nonoverlapping
>> (tuned reduce + tuned bcast)
>> MCA coll: parameter
>> "coll_tuned_allreduce_algorithm_segmentsize" (current value: "0")
>> Segment size in bytes used by default for
>> allreduce algorithms. Only has meaning if algorithm is forced and
>> supports segmenting. 0 bytes means no
>> segmentation.
>> MCA coll: parameter
>> "coll_tuned_allreduce_algorithm_tree_fanout" (current value: "4")
>> Fanout for n-tree used for allreduce
>> algorithms. Only has meaning if algorithm is forced and supports n-tree
>> topo
>> based operation.
>> MCA coll: parameter
>> "coll_tuned_allreduce_algorithm_chain_fanout" (current value: "4")
>> Fanout for chains used for allreduce
>> algorithms. Only has meaning if algorithm is forced and supports chain
>> topo
>> based operation.
>> MCA coll: parameter "coll_tuned_alltoall_algorithm"
>> (current value: "0")
>> Which alltoall algorithm is used. Can be
>> locked down to choice of: 0 ignore, 1 basic linear, 2 pairwise, 3:
>> modified bruck, 4: two proc only.
>> MCA coll: parameter
>> "coll_tuned_alltoall_algorithm_segmentsize" (current value: "0")
>> Segment size in bytes used by default for
>> alltoall algorithms. Only has meaning if algorithm is forced and
>> supports segmenting. 0 bytes means no
>> segmentation.
>> MCA coll: parameter
>> "coll_tuned_alltoall_algorithm_tree_fanout" (current value: "4")
>> Fanout for n-tree used for alltoall
>> algorithms. Only has meaning if algorithm is forced and supports n-tree
>> topo
>> based operation.
>> MCA coll: parameter
>> "coll_tuned_alltoall_algorithm_chain_fanout" (current value: "4")
>> Fanout for chains used for alltoall
>> algorithms. Only has meaning if algorithm is forced and supports chain
>> topo
>> based operation.
>> MCA coll: parameter "coll_tuned_barrier_algorithm"
>> (current value: "0")
>> Which barrier algorithm is used. Can be locked
>> down to choice of: 0 ignore, 1 linear, 2 double ring, 3: recursive
>> doubling 4: bruck, 5: two proc only, 6: step
>> based bmtree
>> MCA coll: parameter "coll_tuned_bcast_algorithm"
>> (current value: "0")
>> Which bcast algorithm is used. Can be locked
>> down to choice of: 0 ignore, 1 basic linear, 2 chain, 3: pipeline, 4:
>> split binary tree, 5: binary tree.
>> MCA coll: parameter
>> "coll_tuned_bcast_algorithm_segmentsize" (current value: "0")
>> Segment size in bytes used by default for
>> bcast algorithms. Only has meaning if algorithm is forced and supports
>> segmenting. 0 bytes means no segmentation.
>> MCA coll: parameter
>> "coll_tuned_bcast_algorithm_tree_fanout" (current value: "4")
>> Fanout for n-tree used for bcast algorithms.
>> Only has meaning if algorithm is forced and supports n-tree topo
>> based operation.
>> MCA coll: parameter
>> "coll_tuned_bcast_algorithm_chain_fanout" (current value: "4")
>> Fanout for chains used for bcast algorithms.
>> Only has meaning if algorithm is forced and supports chain topo based
>> operation.
>> MCA coll: parameter "coll_tuned_reduce_algorithm"
>> (current value: "0")
>> Which reduce algorithm is used. Can be locked
>> down to choice of: 0 ignore, 1 linear, 2 chain, 3 pipeline
>> MCA coll: parameter
>> "coll_tuned_reduce_algorithm_segmentsize" (current value: "0")
>> Segment size in bytes used by default for
>> reduce algorithms. Only has meaning if algorithm is forced and supports
>> segmenting. 0 bytes means no segmentation.
>> MCA coll: parameter
>> "coll_tuned_reduce_algorithm_tree_fanout" (current value: "4")
>> Fanout for n-tree used for reduce algorithms.
>> Only has meaning if algorithm is forced and supports n-tree topo
>> based operation.
>> MCA coll: parameter
>> "coll_tuned_reduce_algorithm_chain_fanout" (current value: "4")
>> Fanout for chains used for reduce algorithms.
>> Only has meaning if algorithm is forced and supports chain topo
>> based operation.
>> MCA io: parameter "io_base_freelist_initial_size"
>> (current value: "16")
>> Initial MPI-2 IO request freelist size
>> MCA io: parameter "io_base_freelist_max_size" (current
>> value: "64")
>> Max size of the MPI-2 IO request freelist
>> MCA io: parameter "io_base_freelist_increment"
>> (current value: "16")
>> Increment size of the MPI-2 IO request
>> freelist
>> MCA io: parameter "io" (current value: <none>)
>> Default selection set of components for the io
>> framework (<none> means "use all components that can be found")
>> MCA io: parameter "io_base_verbose" (current value:
>> "0")
>> Verbosity level for the io framework (0 = no
>> verbosity)
>> MCA io: parameter "io_romio_priority" (current value:
>> "10")
>> Priority of the io romio component
>> MCA io: parameter "io_romio_delete_priority" (current
>> value: "10")
>> Delete priority of the io romio component
>> MCA io: parameter
>> "io_romio_enable_parallel_optimizations" (current value: "0")
>> Enable set of Open MPI-added options to
>> improve collective file i/o performance
>> MCA mpool: parameter "mpool" (current value: <none>)
>> Default selection set of components for the
>> mpool framework (<none> means "use all components that can be found")
>> MCA mpool: parameter "mpool_base_verbose" (current value:
>> "0")
>> Verbosity level for the mpool framework (0 =
>> no verbosity)
>> MCA mpool: parameter "mpool_sm_size" (current value:
>> "536870912")
>> MCA mpool: parameter "mpool_sm_allocator" (current value:
>> "bucket")
>> MCA mpool: parameter "mpool_sm_priority" (current value:
>> "0")
>> MCA mpool: parameter "mpool_base_use_mem_hooks" (current
>> value: "0")
>> use memory hooks for deregistering freed
>> memory
>> MCA mpool: parameter "mpool_use_mem_hooks" (current
>> value: "0")
>> (deprecated, use mpool_base_use_mem_hooks)
>> MCA pml: parameter "pml" (current value: "ob1")
>> Default selection set of components for the
>> pml framework (<none> means "use all components that can be found")
>> MCA pml: parameter "pml_base_verbose" (current value:
>> "0")
>> Verbosity level for the pml framework (0 = no
>> verbosity)
>> MCA pml: parameter "pml_ob1_free_list_num" (current
>> value: "4")
>> MCA pml: parameter "pml_ob1_free_list_max" (current
>> value: "-1")
>> MCA pml: parameter "pml_ob1_free_list_inc" (current
>> value: "64")
>> MCA pml: parameter "pml_ob1_priority" (current value:
>> "1")
>> MCA pml: parameter "pml_ob1_eager_limit" (current
>> value: "131072")
>> MCA pml: parameter "pml_ob1_send_pipeline_depth"
>> (current value: "3")
>> MCA pml: parameter "pml_ob1_recv_pipeline_depth"
>> (current value: "4")
>> MCA bml: parameter "bml" (current value: <none>)
>> Default selection set of components for the
>> bml framework (<none> means "use all components that can be found")
>> MCA bml: parameter "bml_base_verbose" (current value:
>> "0")
>> Verbosity level for the bml framework (0 = no
>> verbosity)
>> MCA bml: parameter "bml_r2_priority" (current value:
>> "0")
>> MCA rcache: parameter "rcache" (current value: <none>)
>> Default selection set of components for the
>> rcache framework (<none> means "use all components that can be
>> found")
>> MCA rcache: parameter "rcache_base_verbose" (current
>> value: "0")
>> Verbosity level for the rcache framework (0 =
>> no verbosity)
>> MCA rcache: parameter "rcache_rb_priority" (current value:
>> "0")
>> MCA btl: parameter "btl_base_debug" (current value:
>> "0")
>> If btl_base_debug is 1 standard debug is
>> output, if > 1 verbose debug is output
>> MCA btl: parameter "btl" (current value: <none>)
>> Default selection set of components for the
>> btl framework (<none> means "use all components that can be found")
>> MCA btl: parameter "btl_base_verbose" (current value:
>> "0")
>> Verbosity level for the btl framework (0 = no
>> verbosity)
>> MCA btl: parameter "btl_self_free_list_num" (current
>> value: "0")
>> Number of fragments by default
>> MCA btl: parameter "btl_self_free_list_max" (current
>> value: "-1")
>> Maximum number of fragments
>> MCA btl: parameter "btl_self_free_list_inc" (current
>> value: "32")
>> Increment by this number of fragments
>> MCA btl: parameter "btl_self_eager_limit" (current
>> value: "131072")
>> Eager size fragmeng (before the rendez-vous
>> ptotocol)
>> MCA btl: parameter "btl_self_min_send_size" (current
>> value: "262144")
>> Minimum fragment size after the rendez-vous
>> MCA btl: parameter "btl_self_max_send_size" (current
>> value: "262144")
>> Maximum fragment size after the rendez-vous
>> MCA btl: parameter "btl_self_min_rdma_size" (current
>> value: "2147483647")
>> Maximum fragment size for the RDMA transfer
>> MCA btl: parameter "btl_self_max_rdma_size" (current
>> value: "2147483647")
>> Maximum fragment size for the RDMA transfer
>> MCA btl: parameter "btl_self_exclusivity" (current
>> value: "65536")
>> Device exclusivity
>> MCA btl: parameter "btl_self_flags" (current value:
>> "10")
>> Active behavior flags
>> MCA btl: parameter "btl_self_priority" (current value:
>> "0")
>> MCA btl: parameter "btl_sm_free_list_num" (current
>> value: "8")
>> MCA btl: parameter "btl_sm_free_list_max" (current
>> value: "-1")
>> MCA btl: parameter "btl_sm_free_list_inc" (current
>> value: "256")
>> MCA btl: parameter "btl_sm_max_procs" (current value:
>> "-1")
>> MCA btl: parameter "btl_sm_sm_extra_procs" (current
>> value: "2")
>> MCA btl: parameter "btl_sm_mpool" (current value: "sm")
>> MCA btl: parameter "btl_sm_eager_limit" (current value:
>> "1024")
>> MCA btl: parameter "btl_sm_max_frag_size" (current
>> value: "8192")
>> MCA btl: parameter "btl_sm_size_of_cb_queue" (current
>> value: "128")
>> MCA btl: parameter "btl_sm_cb_lazy_free_freq" (current
>> value: "120")
>> MCA btl: parameter "btl_sm_priority" (current value:
>> "0")
>> MCA btl: parameter "btl_tcp_if_include" (current value:
>> <none>)
>> MCA btl: parameter "btl_tcp_if_exclude" (current value:
>> "lo")
>> MCA btl: parameter "btl_tcp_free_list_num" (current
>> value: "8")
>> MCA btl: parameter "btl_tcp_free_list_max" (current
>> value: "-1")
>> MCA btl: parameter "btl_tcp_free_list_inc" (current
>> value: "32")
>> MCA btl: parameter "btl_tcp_sndbuf" (current value:
>> "131072")
>> MCA btl: parameter "btl_tcp_rcvbuf" (current value:
>> "131072")
>> MCA btl: parameter "btl_tcp_endpoint_cache" (current
>> value: "30720")
>> MCA btl: parameter "btl_tcp_exclusivity" (current
>> value: "0")
>> MCA btl: parameter "btl_tcp_eager_limit" (current
>> value: "65536")
>> MCA btl: parameter "btl_tcp_min_send_size" (current
>> value: "65536")
>> MCA btl: parameter "btl_tcp_max_send_size" (current
>> value: "131072")
>> MCA btl: parameter "btl_tcp_min_rdma_size" (current
>> value: "131072")
>> MCA btl: parameter "btl_tcp_max_rdma_size" (current
>> value: "2147483647")
>> MCA btl: parameter "btl_tcp_flags" (current value:
>> "10")
>> MCA btl: parameter "btl_tcp_priority" (current value:
>> "0")
>> MCA btl: parameter "btl_base_include" (current value:
>> <none>)
>> MCA btl: parameter "btl_base_exclude" (current value:
>> <none>)
>> MCA topo: parameter "topo" (current value: <none>)
>> Default selection set of components for the
>> topo framework (<none> means "use all components that can be found")
>> MCA topo: parameter "topo_base_verbose" (current value:
>> "0")
>> Verbosity level for the topo framework (0 = no
>> verbosity)
>> MCA osc: parameter "osc" (current value: <none>)
>> Default selection set of components for the
>> osc framework (<none> means "use all components that can be found")
>> MCA osc: parameter "osc_base_verbose" (current value:
>> "0")
>> Verbosity level for the osc framework (0 = no
>> verbosity)
>> MCA osc: parameter "osc_pt2pt_fence_sync_method"
>> (current value: "reduce_scatter")
>> How to synchronize fence: reduce_scatter,
>> allreduce, alltoall
>> MCA osc: parameter "osc_pt2pt_eager_send" (current
>> value: "0")
>> Attempt to start data movement during
>> communication call, instead of at synchrnoization time. Info key of
>> same
>> name overrides this value, if info key given.
>> MCA osc: parameter "osc_pt2pt_no_locks" (current value:
>> "0")
>> Enable optimizations available only if
>> MPI_LOCK is not used.
>> MCA osc: parameter "osc_pt2pt_priority" (current value:
>> "0")
>> MCA errmgr: parameter "errmgr" (current value: <none>)
>> Default selection set of components for the
>> errmgr framework (<none> means "use all components that can be
>> found")
>> MCA gpr: parameter "gpr_base_maxsize" (current value:
>> "2147483647")
>> MCA gpr: parameter "gpr_base_blocksize" (current value:
>> "512")
>> MCA gpr: parameter "gpr" (current value: <none>)
>> Default selection set of components for the
>> gpr framework (<none> means "use all components that can be found")
>> MCA gpr: parameter "gpr_null_priority" (current value:
>> "0")
>> MCA gpr: parameter "gpr_proxy_debug" (current value:
>> "0")
>> MCA gpr: parameter "gpr_proxy_priority" (current value:
>> "0")
>> MCA gpr: parameter "gpr_replica_debug" (current value:
>> "0")
>> MCA gpr: parameter "gpr_replica_isolate" (current
>> value: "0")
>> MCA gpr: parameter "gpr_replica_priority" (current
>> value: "0")
>> MCA iof: parameter "iof_base_window_size" (current
>> value: "4096")
>> MCA iof: parameter "iof_base_service" (current value:
>> "0.0.0")
>> MCA iof: parameter "iof" (current value: <none>)
>> Default selection set of components for the
>> iof framework (<none> means "use all components that can be found")
>> MCA iof: parameter "iof_proxy_debug" (current value:
>> "1")
>> MCA iof: parameter "iof_proxy_priority" (current value:
>> "0")
>> MCA iof: parameter "iof_svc_debug" (current value: "1")
>> MCA iof: parameter "iof_svc_priority" (current value:
>> "0")
>> MCA ns: parameter "ns" (current value: <none>)
>> Default selection set of components for the ns
>> framework (<none> means "use all components that can be found")
>> MCA ns: parameter "ns_proxy_debug" (current value:
>> "0")
>> MCA ns: parameter "ns_proxy_maxsize" (current value:
>> "2147483647")
>> MCA ns: parameter "ns_proxy_blocksize" (current value:
>> "512")
>> MCA ns: parameter "ns_proxy_priority" (current value:
>> "0")
>> MCA ns: parameter "ns_replica_debug" (current value:
>> "0")
>> MCA ns: parameter "ns_replica_isolate" (current value:
>> "0")
>> MCA ns: parameter "ns_replica_maxsize" (current value:
>> "2147483647")
>> MCA ns: parameter "ns_replica_blocksize" (current
>> value: "512")
>> MCA ns: parameter "ns_replica_priority" (current
>> value: "0")
>> MCA oob: parameter "oob" (current value: <none>)
>> Default selection set of components for the
>> oob framework (<none> means "use all components that can be found")
>> MCA oob: parameter "oob_base_verbose" (current value:
>> "0")
>> Verbosity level for the oob framework (0 = no
>> verbosity)
>> MCA oob: parameter "oob_tcp_peer_limit" (current value:
>> "-1")
>> MCA oob: parameter "oob_tcp_peer_retries" (current
>> value: "60")
>> MCA oob: parameter "oob_tcp_debug" (current value: "0")
>> MCA oob: parameter "oob_tcp_include" (current value:
>> <none>)
>> MCA oob: parameter "oob_tcp_exclude" (current value:
>> <none>)
>> MCA oob: parameter "oob_tcp_sndbuf" (current value:
>> "131072")
>> MCA oob: parameter "oob_tcp_rcvbuf" (current value:
>> "131072")
>> MCA oob: parameter "oob_tcp_connect_timeout" (current
>> value: "10")
>> connect() timeout in seconds, before trying
>> next interface
>> MCA oob: parameter "oob_tcp_priority" (current value:
>> "0")
>> MCA ras: parameter "ras" (current value: <none>)
>> Default selection set of components for the
>> ras framework (<none> means "use all components that can be found")
>> MCA ras: parameter "ras_dash_host_priority" (current
>> value: "5")
>> Selection priority for the dash_host RAS
>> component
>> MCA ras: parameter "ras_hostfile_priority" (current
>> value: "10")
>> Selection priority for the hostfile RAS
>> component
>> MCA ras: parameter "ras_localhost_priority" (current
>> value: "0")
>> Selection priority for the localhost RAS
>> component
>> MCA ras: parameter "ras_xgrid_priority" (current value:
>> "100")
>> MCA rds: parameter "rds" (current value: <none>)
>> Default selection set of components for the
>> rds framework (<none> means "use all components that can be found")
>> MCA rds: parameter "rds_hostfile_debug" (current value:
>> "0")
>> Toggle debug output for hostfile RDS component
>> MCA rds: parameter "rds_hostfile_path" (current value:
>> "/usr/local/etc/openmpi-default-hostfile")
>> ORTE Host filename
>> MCA rds: parameter "rds_hostfile_priority" (current
>> value: "0")
>> MCA rds: parameter "rds_resfile_debug" (current value:
>> "0")
>> Toggle debug output for resfile RDS component
>> MCA rds: parameter "rds_resfile_name" (current value:
>> <none>)
>> ORTE Resource filename
>> MCA rds: parameter "rds_resfile_priority" (current
>> value: "0")
>> MCA rmaps: parameter "rmaps_base_verbose" (current value:
>> "0")
>> Verbosity level for the rmaps framework
>> MCA rmaps: parameter "rmaps_base_schedule_policy"
>> (current value: "slot")
>> Scheduling Policy for RMAPS. [slot | node]
>> MCA rmaps: parameter "rmaps_base_schedule_local" (current
>> value: "1")
>> If nonzero, allow scheduling MPI applications
>> on the same node as mpirun (default). If zero, do not schedule any
>> MPI applications on the same node as mpirun
>> MCA rmaps: parameter "rmaps_base_no_oversubscribe"
>> (current value: "0")
>> If nonzero, then do not allow oversubscription
>> of nodes - mpirun will return an error if there aren't enough nodes
>> to launch all processes without
>> oversubscribing
>> MCA rmaps: parameter "rmaps" (current value: <none>)
>> Default selection set of components for the
>> rmaps framework (<none> means "use all components that can be found")
>> MCA rmaps: parameter "rmaps_round_robin_debug" (current
>> value: "1")
>> Toggle debug output for Round Robin RMAPS
>> component
>> MCA rmaps: parameter "rmaps_round_robin_priority"
>> (current value: "1")
>> Selection priority for Round Robin RMAPS
>> component
>> MCA rmgr: parameter "rmgr" (current value: <none>)
>> Default selection set of components for the
>> rmgr framework (<none> means "use all components that can be found")
>> MCA rmgr: parameter "rmgr_proxy_priority" (current
>> value: "0")
>> MCA rmgr: parameter "rmgr_urm_priority" (current value:
>> "0")
>> MCA rml: parameter "rml" (current value: <none>)
>> Default selection set of components for the
>> rml framework (<none> means "use all components that can be found")
>> MCA rml: parameter "rml_base_verbose" (current value:
>> "0")
>> Verbosity level for the rml framework (0 = no
>> verbosity)
>> MCA rml: parameter "rml_oob_priority" (current value:
>> "0")
>> MCA pls: parameter "pls" (current value: <none>)
>> Default selection set of components for the
>> pls framework (<none> means "use all components that can be found")
>> MCA pls: parameter "pls_fork_reap" (current value: "1")
>> Whether to wait to reap all children before
>> finalizing or not
>> MCA pls: parameter "pls_fork_reap_timeout" (current
>> value: "0")
>> When killing children processes, first send a
>> SIGTERM, then wait at least this timeout (in seconds), then send a
>> SIGKILL
>> MCA pls: parameter "pls_fork_priority" (current value:
>> "1")
>> Priority of this component
>> MCA pls: parameter "pls_fork_debug" (current value:
>> "0")
>> Whether to enable debugging output or not
>> MCA pls: parameter "pls_rsh_debug" (current value: "0")
>> Whether or not to enable debugging output for
>> the rsh pls component (0 or 1)
>> MCA pls: parameter "pls_rsh_num_concurrent" (current
>> value: "128")
>> How many pls_rsh_agent instances to invoke
>> concurrently (must be > 0)
>> MCA pls: parameter "pls_rsh_orted" (current value:
>> "orted")
>> The command name that the rsh pls component
>> will invoke for the ORTE daemon
>> MCA pls: parameter "pls_rsh_priority" (current value:
>> "10")
>> Priority of the rsh pls component
>> MCA pls: parameter "pls_rsh_delay" (current value: "1")
>> Delay (in seconds) between invocations of the
>> remote agent, but only used when the "debug" MCA parameter is true,
>> or the top-level MCA debugging is enabled
>> (otherwise this value is ignored)
>> MCA pls: parameter "pls_rsh_reap" (current value: "1")
>> If set to 1, wait for all the processes to
>> complete before exiting. Otherwise, quit immediately -- without
>> waiting for confirmation that all other
>> processes in the job have completed.
>> MCA pls: parameter "pls_rsh_assume_same_shell" (current
>> value: "1")
>> If set to 1, assume that the shell on the
>> remote node is the same as the shell on the local node. Otherwise,
>> probe for what the remote shell is.
>> MCA pls: parameter "pls_rsh_agent" (current value: "ssh
>> : rsh")
>> The command used to launch executables on
>> remote nodes (typically either "ssh" or "rsh")
>> MCA pls: parameter "pls_xgrid_orted" (current value:
>> "orted")
>> MCA pls: parameter "pls_xgrid_priority" (current value:
>> "20")
>> MCA pls: parameter "pls_xgrid_delete_job" (current
>> value: "1")
>> MCA sds: parameter "sds" (current value: <none>)
>> Default selection set of components for the
>> sds framework (<none> means "use all components that can be found")
>> MCA sds: parameter "sds_base_verbose" (current value:
>> "0")
>> Verbosity level for the sds framework (0 = no
>> verbosity)
>> MCA sds: parameter "sds_env_priority" (current value:
>> "0")
>> MCA sds: parameter "sds_pipe_priority" (current value:
>> "0")
>> MCA sds: parameter "sds_seed_priority" (current value:
>> "0")
>> MCA sds: parameter "sds_singleton_priority" (current
>> value: "0")
>> MCA soh: parameter "soh" (current value: <none>)
>> Default selection set of components for the
>> soh framework (<none> means "use all components that can be found")
>>
>>
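[Editor's note: the MCA parameters listed above are all tunable at launch time, which is often the quickest way to experiment with the TCP/SSH settings discussed in this thread. A minimal sketch, assuming a standard Open MPI installation; the parameter names come from the listing above, but the values shown are illustrative only, not recommendations:]

```shell
# Three equivalent ways to override an MCA parameter such as
# oob_tcp_connect_timeout (shown in the dump above with default "10"):

# 1. On the mpirun command line:
#      mpirun --mca oob_tcp_connect_timeout 30 -np 5 Hello_World_Fortran

# 2. Via an environment variable with the OMPI_MCA_ prefix:
export OMPI_MCA_oob_tcp_connect_timeout=30

# 3. In a per-user parameter file (~/.openmpi/mca-params.conf):
#      oob_tcp_connect_timeout = 30

# Confirm the environment-variable form took effect:
echo "$OMPI_MCA_oob_tcp_connect_timeout"
```

When the same parameter is set in more than one place, the command line takes precedence over the environment, which takes precedence over the parameter file.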
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>>
>>