Open MPI User's Mailing List Archives


From: Tom Rosmond (rosmond_at_[hidden])
Date: 2006-09-04 14:01:13


Attached is some error output from my tests of one-sided message passing,
plus my ompi_info file. Below are two copies of a simple Fortran subroutine
that mimics mpi_allgatherv using mpi_get calls. The top version fails;
the bottom one runs OK. It seems clear from these examples, plus the
'self_send' frames in the error output, that there is a problem
internally when a process gets data from itself. I understand that your
mpi_get implementation is essentially a wrapper around send/recv calls,
so clearly this shouldn't happen. However, the problem does not occur in
all cases; I tried to duplicate it in a simple stand-alone program with
mpi_get calls and was unable to make it fail. Go figure.

Tom

*************** This version fails, producing the error output attached ***************

      subroutine allgatherv_get(xrma,ijsiz,ijstrt,xx,iwinget)
!
      use mpinog
      implicit none
!
      real xrma(*) , xx(*)
      integer ijsiz(nproc) , ijstrt(nproc)
!
      integer iwinget , ierr , msgsize , j , nn , i
!
! the displacement argument of mpi_get must be of kind MPI_ADDRESS_KIND
! ('nproc' and 'mpireal' come from the mpinog module)
!
      integer(kind=MPI_ADDRESS_KIND) itarget_disp
!
      itarget_disp = 0
!
! iwinget is 'handle' to array 'xrma'
!
      call mpi_win_fence(0,iwinget,ierr)
!
      do 200 j = 1,nproc
        nn = ijstrt(j) + 1
        msgsize = ijsiz(j)
!
! 'get' data from RMA window
!
          call mpi_get(xx(nn),msgsize,mpireal,j-1,itarget_disp,msgsize,
     & mpireal,iwinget,ierr)
!
  200 continue
!
      call mpi_win_fence(0,iwinget,ierr)
!
      return
      end
  

**************** This version runs *****************************

      subroutine allgatherv_get(xrma,ijsiz,ijstrt,xx,iwinget)
!
      use mpinog
      implicit none
!
      real xrma(*) , xx(*)
      integer ijsiz(nproc) , ijstrt(nproc)
!
      integer iwinget , ierr , msgsize , j , nn , i
      integer(kind=MPI_ADDRESS_KIND) itarget_disp
!
      itarget_disp = 0
!
! 'iwinget' is 'handle' to array 'xrma'
! 'ir' is rank+1 (from the mpinog module)
!
      call mpi_win_fence(0,iwinget,ierr)
!
      do 200 j = 1,nproc
        nn = ijstrt(j) + 1
        msgsize = ijsiz(j)
!
        if(ir.ne.j)then
!
! if 'off-processor' then 'get' data from RMA window
!
          call mpi_get(xx(nn),msgsize,mpireal,j-1,itarget_disp,msgsize,
     & mpireal,iwinget,ierr)
        else
!
! for the 'on-processor' case, bypass MPI and do direct memory-to-memory copies
!
          do 120 i = 1,msgsize
            xx(i+ijstrt(j)) = xrma(i)
  120 continue
!
        endif
!
  200 continue
!
      call mpi_win_fence(0,iwinget,ierr)
!
      return
      end
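The stand-alone test was essentially the following pattern (a free-form sketch, not the exact program; the buffer size and names are illustrative, and it assumes the standard 'mpi' module). Every rank does an mpi_get from every rank, including itself, inside a single fence epoch:

```fortran
! Minimal stand-alone sketch: each rank exposes win_buf in a window and
! gets n reals from every rank -- including itself -- between two fences.
program self_get_test
  use mpi
  implicit none
  integer, parameter :: n = 1024
  real :: win_buf(n)
  real, allocatable :: xx(:)
  integer :: win, ierr, rank, nproc, j, sizeofreal
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  call mpi_comm_size(MPI_COMM_WORLD, nproc, ierr)

  allocate(xx(n*nproc))
  win_buf = real(rank)                      ! each rank exposes its own data
  call mpi_type_size(MPI_REAL, sizeofreal, ierr)
  winsize = int(n, MPI_ADDRESS_KIND) * sizeofreal
  call mpi_win_create(win_buf, winsize, sizeofreal, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)

  disp = 0
  call mpi_win_fence(0, win, ierr)
  do j = 0, nproc-1
    ! j == rank is the 'get from self' case that crashes in the big code
    call mpi_get(xx(j*n+1), n, MPI_REAL, j, disp, n, MPI_REAL, win, ierr)
  end do
  call mpi_win_fence(0, win, ierr)

  call mpi_win_free(win, ierr)
  deallocate(xx)
  call mpi_finalize(ierr)
end program self_get_test
```

Run with something like 'mpirun -np 4 ./self_get_test'; as noted above, this simple version did not fail for me.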

Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)
Failing at addr:0x8
[0] func:/opt/mpi/pgi/lib/libopal.so.0 [0x2aaaaaef6b38]
[1] func:/lib64/libpthread.so.0 [0x2aaaab4fe530]
[2] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so [0x2aaaad400e8f]
[3] func:/opt/mpi/pgi/lib/openmpi/mca_btl_self.so(mca_btl_self_rdma+0xd2) [0x2aaaad6085c2]
[4] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_put+0x321) [0x2aaaad3fe6b1]
[5] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_frag_callback+0x35d) [0x2aaaad3fb70d]
[6] func:/opt/mpi/pgi/lib/openmpi/mca_btl_self.so(mca_btl_self_send+0x45) [0x2aaaad6084b5]
[7] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_request_schedule+0x28e) [0x2aaaad3fbd9e]
[8] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_request_progress+0x228) [0x2aaaad3fcdb8]
[9] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_frag_match+0x933) [0x2aaaad3fa883]
[10] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_frag_callback+0xb4) [0x2aaaad3fb464]
[11] func:/opt/mpi/pgi/lib/openmpi/mca_btl_self.so(mca_btl_self_send+0x45) [0x2aaaad6084b5]
[12] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_start_rdma+0xfa) [0x2aaaad3ff2ba]
[13] func:/opt/mpi/pgi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_isend+0x333) [0x2aaaad3f88e3]
[14] func:/opt/mpi/pgi/lib/openmpi/mca_osc_pt2pt.so [0x2aaaade46ac9]
[15] func:/opt/mpi/pgi/lib/openmpi/mca_btl_self.so(mca_btl_self_send+0x63) [0x2aaaad6084d3]
[16] func:/opt/mpi/pgi/lib/openmpi/mca_osc_pt2pt.so(ompi_osc_pt2pt_replyreq_send+0x135) [0x2aaaade457f5]
[17] func:/opt/mpi/pgi/lib/openmpi/mca_osc_pt2pt.so(ompi_osc_pt2pt_component_fragment_cb+0x203) [0x2aaaade44e33]
[18] func:/opt/mpi/pgi/lib/openmpi/mca_btl_self.so(mca_btl_self_send+0x45) [0x2aaaad6084b5]
[19] func:/opt/mpi/pgi/lib/openmpi/mca_osc_pt2pt.so(ompi_osc_pt2pt_sendreq_send+0x276) [0x2aaaade455b6]
[20] func:/opt/mpi/pgi/lib/openmpi/mca_osc_pt2pt.so(ompi_osc_pt2pt_module_fence+0x1c5) [0x2aaaade485c5]
[21] func:/opt/mpi/pgi/lib/libmpi.so.0(PMPI_Win_fence+0x92) [0x2aaaaac4cc82]
[22] func:/opt/mpi/pgi/lib/libmpi.so.0(mpi_win_fence+0x25) [0x2aaaaac58265]
[23] func:/data/rosmond/nogapsnew_tim/nogaps239l54(allgatherv_get_+0xf8) [0x523a28]
*** End of error message ***
[The remaining processes printed identical backtraces; duplicates omitted.]


                Open MPI: 1.1.1b1
   Open MPI SVN revision: r10805
                Open RTE: 1.1.1b1
   Open RTE SVN revision: r10805
                    OPAL: 1.1.1b1
       OPAL SVN revision: r10805
              MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
               MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
                MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
              MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
                 MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)
                  Prefix: /opt/mpi/pgi
                  Bindir: /opt/mpi/pgi/bin
                  Libdir: /opt/mpi/pgi/lib
                  Incdir: /opt/mpi/pgi/include
               Pkglibdir: /opt/mpi/pgi/lib/openmpi
              Sysconfdir: /opt/mpi/pgi/etc
 Configured architecture: x86_64-unknown-linux-gnu
           Configured by: root
           Configured on: Sun Jul 16 09:18:02 PDT 2006
          Configure host: fir.reachone.com
                Built by: root
                Built on: Sun Jul 16 09:26:58 PDT 2006
              Built host: fir.reachone.com
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
             C char size: 1
             C bool size: 1
            C short size: 2
              C int size: 4
             C long size: 8
            C float size: 4
           C double size: 8
          C pointer size: 8
            C char align: 1
            C bool align: 1
             C int align: 4
           C float align: 4
          C double align: 8
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: pgf90
  Fortran77 compiler abs: /usr/local/pgi/linux86-64/6.1/bin/pgf90
      Fortran90 compiler: pgf90
  Fortran90 compiler abs: /usr/local/pgi/linux86-64/6.1/bin/pgf90
       Fort integer size: 4
       Fort logical size: 4
 Fort logical value true: -1
      Fort have integer1: yes
      Fort have integer2: yes
      Fort have integer4: yes
      Fort have integer8: yes
     Fort have integer16: no
         Fort have real4: yes
         Fort have real8: yes
        Fort have real16: no
      Fort have complex8: yes
     Fort have complex16: yes
     Fort have complex32: no
      Fort integer1 size: 1
      Fort integer2 size: 2
      Fort integer4 size: 4
      Fort integer8 size: 8
     Fort integer16 size: -1
          Fort real size: 4
         Fort real4 size: 4
         Fort real8 size: 8
        Fort real16 size: -1
      Fort dbl prec size: 4
          Fort cplx size: 4
      Fort dbl cplx size: 4
         Fort cplx8 size: 8
        Fort cplx16 size: 16
        Fort cplx32 size: -1
      Fort integer align: 4
     Fort integer1 align: 1
     Fort integer2 align: 2
     Fort integer4 align: 4
     Fort integer8 align: 8
    Fort integer16 align: -1
         Fort real align: 4
        Fort real4 align: 4
        Fort real8 align: 8
       Fort real16 align: -1
     Fort dbl prec align: 4
         Fort cplx align: 4
     Fort dbl cplx align: 4
        Fort cplx8 align: 4
       Fort cplx16 align: 8
       Fort cplx32 align: -1
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
            Build CFLAGS: -O3 -DNDEBUG -fno-strict-aliasing -pthread
          Build CXXFLAGS: -O3 -DNDEBUG -finline-functions -pthread
            Build FFLAGS:
           Build FCFLAGS:
           Build LDFLAGS: -export-dynamic
              Build LIBS: -lnsl -lutil -lm -lpthread
    Wrapper extra CFLAGS: -pthread
  Wrapper extra CXXFLAGS: -pthread
    Wrapper extra FFLAGS:
   Wrapper extra FCFLAGS:
   Wrapper extra LDFLAGS:
      Wrapper extra LIBS: -ldl -Wl,--export-dynamic -lnsl -lutil -lm -lpthread -ldl
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
                 MCA mca: parameter "mca_param_files" (current value: "/home/rosmond/.openmpi/mca-params.conf:/opt/mpi/pgi/etc/openmpi-mca-params.conf")
                          Path for MCA configuration files containing default parameter values
                 MCA mca: parameter "mca_component_path" (current value: "/opt/mpi/pgi/lib/openmpi:/home/rosmond/.openmpi/components")
                          Path where to look for Open MPI and ORTE components
                 MCA mca: parameter "mca_verbose" (current value: <none>)
                          Top-level verbosity parameter
                 MCA mca: parameter "mca_component_show_load_errors" (current value: "1")
                          Whether to show errors for components that failed to load or not
                 MCA mca: parameter "mca_component_disable_dlopen" (current value: "0")
                          Whether to attempt to disable opening dynamic components or not
                 MCA mpi: parameter "mpi_param_check" (current value: "1")
                          Whether you want MPI API parameters checked at run-time or not. Possible values are 0 (no checking) and 1 (perform checking at run-time)
                 MCA mpi: parameter "mpi_yield_when_idle" (current value: "0")
                          Yield the processor when waiting for MPI communication (for MPI processes, will default to 1 when oversubscribing nodes)
                 MCA mpi: parameter "mpi_event_tick_rate" (current value: "-1")
                          How often to progress TCP communications (0 = never, otherwise specified in microseconds)
                 MCA mpi: parameter "mpi_show_handle_leaks" (current value: "0")
                          Whether MPI_FINALIZE shows all MPI handles that were not freed or not
                 MCA mpi: parameter "mpi_no_free_handles" (current value: "0")
                          Whether to actually free MPI objects when their handles are freed
                 MCA mpi: parameter "mpi_show_mca_params" (current value: "0")
                          Whether to show all MCA parameter value during MPI_INIT or not (good for reproducability of MPI jobs)
                 MCA mpi: parameter "mpi_show_mca_params_file" (current value: <none>)
                          If mpi_show_mca_params is true, setting this string to a valid filename tells Open MPI to dump all the MCA parameter values into a file suitable for reading via the mca_param_files parameter (good for reproducability of MPI jobs)
                 MCA mpi: parameter "mpi_paffinity_alone" (current value: "0")
                          If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0
                 MCA mpi: parameter "mpi_keep_peer_hostnames" (current value: "1")
                          If nonzero, save the string hostnames of all MPI peer processes (mostly for error / debugging output messages). This can add quite a bit of memory usage to each MPI process.
                 MCA mpi: parameter "mpi_abort_delay" (current value: "0")
                          If nonzero, print out an identifying message when MPI_ABORT is invoked (hostname, PID of the process that called MPI_ABORT) and delay for that many seconds before exiting (a negative delay value means to never abort). This allows attaching of a debugger before quitting the job.
                 MCA mpi: parameter "mpi_abort_print_stack" (current value: "0")
                          If nonzero, print out a stack trace when MPI_ABORT is invoked
                 MCA mpi: parameter "mpi_leave_pinned" (current value: "0")
                          leave_pinned
                 MCA mpi: parameter "mpi_leave_pinned_pipeline" (current value: "0")
                          leave_pinned_pipeline
                MCA orte: parameter "orte_base_user_debugger" (current value: "totalview @mpirun@ -a @mpirun_args@ : fxp @mpirun@ -a @mpirun_args@")
                          Sequence of user-level debuggers to search for in orterun
                MCA orte: parameter "orte_debug" (current value: "0")
                          Whether or not to enable debugging output for all ORTE components (0 or 1)
                MCA opal: parameter "opal_signal" (current value: "6,7,8,11")
                          If a signal is received, display the stack trace frame
              MCA memory: parameter "memory" (current value: <none>)
                          Default selection set of components for the memory framework (<none> means "use all components that can be found")
              MCA memory: parameter "memory_base_verbose" (current value: "0")
                          Verbosity level for the memory framework (0 = no verbosity)
              MCA memory: parameter "memory_ptmalloc2_priority" (current value: "0")
           MCA paffinity: parameter "paffinity" (current value: <none>)
                          Default selection set of components for the paffinity framework (<none> means "use all components that can be found")
           MCA paffinity: parameter "paffinity_linux_priority" (current value: "10")
                          Priority of the linux paffinity component
           MCA paffinity: information "paffinity_linux_have_cpu_set_t" (value: "1")
                          Whether this component was compiled on a system with the type cpu_set_t or not (1 = yes, 0 = no)
           MCA paffinity: information "paffinity_linux_CPU_ZERO_ok" (value: "1")
                          Whether this component was compiled on a system where CPU_ZERO() is functional or broken (1 = functional, 0 = broken/not available)
           MCA paffinity: information "paffinity_linux_sched_setaffinity_num_params" (value: "3")
                          The number of parameters that sched_set_affinity() takes on the machine where this component was compiled
           MCA maffinity: parameter "maffinity" (current value: <none>)
                          Default selection set of components for the maffinity framework (<none> means "use all components that can be found")
           MCA maffinity: parameter "maffinity_first_use_priority" (current value: "10")
                          Priority of the first_use maffinity component
               MCA timer: parameter "timer" (current value: <none>)
                          Default selection set of components for the timer framework (<none> means "use all components that can be found")
               MCA timer: parameter "timer_base_verbose" (current value: "0")
                          Verbosity level for the timer framework (0 = no verbosity)
               MCA timer: parameter "timer_linux_priority" (current value: "0")
           MCA allocator: parameter "allocator" (current value: <none>)
                          Default selection set of components for the allocator framework (<none> means "use all components that can be found")
           MCA allocator: parameter "allocator_base_verbose" (current value: "0")
                          Verbosity level for the allocator framework (0 = no verbosity)
           MCA allocator: parameter "allocator_basic_priority" (current value: "0")
           MCA allocator: parameter "allocator_bucket_num_buckets" (current value: "30")
           MCA allocator: parameter "allocator_bucket_priority" (current value: "0")
                MCA coll: parameter "coll" (current value: <none>)
                          Default selection set of components for the coll framework (<none> means "use all components that can be found")
                MCA coll: parameter "coll_base_verbose" (current value: "0")
                          Verbosity level for the coll framework (0 = no verbosity)
                MCA coll: parameter "coll_basic_priority" (current value: "10")
                          Priority of the basic coll component
                MCA coll: parameter "coll_basic_crossover" (current value: "4")
                          Minimum number of processes in a communicator before using the logarithmic algorithms
                MCA coll: parameter "coll_hierarch_priority" (current value: "0")
                          Priority of the hierarchical coll component
                MCA coll: parameter "coll_hierarch_verbose" (current value: "0")
                          Turn verbose message of the hierarchical coll component on/off
                MCA coll: parameter "coll_hierarch_use_rdma" (current value: "0")
                          Switch from the send btl list used to detect hierarchies to the rdma btl list
                MCA coll: parameter "coll_hierarch_ignore_sm" (current value: "0")
                          Ignore sm protocol when detecting hierarchies. Required to enable the usage of protocol specific collective operations
                MCA coll: parameter "coll_hierarch_symmetric" (current value: "0")
                          Assume symmetric configuration
                MCA coll: parameter "coll_self_priority" (current value: "75")
                MCA coll: parameter "coll_sm_priority" (current value: "0")
                          Priority of the sm coll component
                MCA coll: parameter "coll_sm_control_size" (current value: "4096")
                          Length of the control data -- should usually be either the length of a cache line on most SMPs, or the size of a page on machines that support direct memory affinity page placement (in bytes)
                MCA coll: parameter "coll_sm_bootstrap_filename" (current value: "coll-sm-bootstrap")
                          Filename (in the Open MPI session directory) of the coll sm component bootstrap rendezvous mmap file
                MCA coll: parameter "coll_sm_bootstrap_num_segments" (current value: "8")
                          Number of segments in the bootstrap file
                MCA coll: parameter "coll_sm_fragment_size" (current value: "8192")
                          Fragment size (in bytes) used for passing data through shared memory (will be rounded up to the nearest control_size size)
                MCA coll: parameter "coll_sm_mpool" (current value: "sm")
                          Name of the mpool component to use
                MCA coll: parameter "coll_sm_comm_in_use_flags" (current value: "2")
                          Number of "in use" flags, used to mark a message passing area segment as currently being used or not (must be >= 2 and <= comm_num_segments)
                MCA coll: parameter "coll_sm_comm_num_segments" (current value: "8")
                          Number of segments in each communicator's shared memory message passing area (must be >= 2, and must be a multiple of comm_in_use_flags)
                MCA coll: parameter "coll_sm_tree_degree" (current value: "4")
                          Degree of the tree for tree-based operations (must be => 1 and <= min(control_size, 255))
                MCA coll: information "coll_sm_shared_mem_used_bootstrap" (value: "216")
                          Amount of shared memory used in the shared memory bootstrap area (in bytes)
                MCA coll: parameter "coll_sm_info_num_procs" (current value: "4")
                          Number of processes to use for the calculation of the shared_mem_size MCA information parameter (must be => 2)
                MCA coll: information "coll_sm_shared_mem_used_data" (value: "548864")
                          Amount of shared memory used in the shared memory data area for info_num_procs processes (in bytes)
                MCA coll: parameter "coll_tuned_priority" (current value: "30")
                          Priority of the tuned coll component
                MCA coll: parameter "coll_tuned_pre_allocate_memory_comm_size_limit" (current value: "32768")
                          Size of communicator were we stop pre-allocating memory for the fixed internal buffer used for message requests etc that is hung off the communicator data segment. I.e. if you have a 100'000 nodes you might not want to pre-allocate 200'000 request handle slots per communicator instance!
                MCA coll: parameter "coll_tuned_use_dynamic_rules" (current value: "0")
                          Switch used to decide if we use static (if statements) or dynamic (built at runtime) decision function rules
                MCA coll: parameter "coll_tuned_init_tree_fanout" (current value: "4")
                          Inital fanout used in the tree topologies for each communicator. This is only an initial guess, if a tuned collective needs a different fanout for an operation, it build it dynamically. This parameter is only for the first guess and might save a little time
                MCA coll: parameter "coll_tuned_init_chain_fanout" (current value: "4")
                          Inital fanout used in the chain (fanout followed by pipeline) topologies for each communicator. This is only an initial guess, if a tuned collective needs a different fanout for an operation, it build it dynamically. This parameter is only for the first guess and might save a little time
                MCA coll: parameter "coll_tuned_allreduce_algorithm" (current value: "0")
                          Which allreduce algorithm is used. Can be locked down to choice of: 0 ignore, 1 basic linear, 2 nonoverlapping (tuned reduce + tuned bcast)
                MCA coll: parameter "coll_tuned_allreduce_algorithm_segmentsize" (current value: "0")
                          Segment size in bytes used by default for allreduce algorithms. Only has meaning if algorithm is forced and supports segmenting. 0 bytes means no segmentation.
                MCA coll: parameter "coll_tuned_allreduce_algorithm_tree_fanout" (current value: "4")
                          Fanout for n-tree used for allreduce algorithms. Only has meaning if algorithm is forced and supports n-tree topo based operation.
                MCA coll: parameter "coll_tuned_allreduce_algorithm_chain_fanout" (current value: "4")
                          Fanout for chains used for allreduce algorithms. Only has meaning if algorithm is forced and supports chain topo based operation.
                MCA coll: parameter "coll_tuned_alltoall_algorithm" (current value: "0")
                          Which alltoall algorithm is used. Can be locked down to choice of: 0 ignore, 1 basic linear, 2 pairwise, 3: modified bruck, 4: two proc only.
                MCA coll: parameter "coll_tuned_alltoall_algorithm_segmentsize" (current value: "0")
                          Segment size in bytes used by default for alltoall algorithms. Only has meaning if algorithm is forced and supports segmenting. 0 bytes means no segmentation.
                MCA coll: parameter "coll_tuned_alltoall_algorithm_tree_fanout" (current value: "4")
                          Fanout for n-tree used for alltoall algorithms. Only has meaning if algorithm is forced and supports n-tree topo based operation.
                MCA coll: parameter "coll_tuned_alltoall_algorithm_chain_fanout" (current value: "4")
                          Fanout for chains used for alltoall algorithms. Only has meaning if algorithm is forced and supports chain topo based operation.
                MCA coll: parameter "coll_tuned_barrier_algorithm" (current value: "0")
                          Which barrier algorithm is used. Can be locked down to choice of: 0 ignore, 1 linear, 2 double ring, 3 recursive doubling, 4 bruck, 5 two proc only, 6 step-based bmtree
                MCA coll: parameter "coll_tuned_bcast_algorithm" (current value: "0")
                          Which bcast algorithm is used. Can be locked down to choice of: 0 ignore, 1 basic linear, 2 chain, 3: pipeline, 4: split binary tree, 5: binary tree.
                MCA coll: parameter "coll_tuned_bcast_algorithm_segmentsize" (current value: "0")
                          Segment size in bytes used by default for bcast algorithms. Only has meaning if algorithm is forced and supports segmenting. 0 bytes means no segmentation.
                MCA coll: parameter "coll_tuned_bcast_algorithm_tree_fanout" (current value: "4")
                          Fanout for n-tree used for bcast algorithms. Only has meaning if algorithm is forced and supports n-tree topo based operation.
                MCA coll: parameter "coll_tuned_bcast_algorithm_chain_fanout" (current value: "4")
                          Fanout for chains used for bcast algorithms. Only has meaning if algorithm is forced and supports chain topo based operation.
                MCA coll: parameter "coll_tuned_reduce_algorithm" (current value: "0")
                          Which reduce algorithm is used. Can be locked down to choice of: 0 ignore, 1 linear, 2 chain, 3 pipeline
                MCA coll: parameter "coll_tuned_reduce_algorithm_segmentsize" (current value: "0")
                          Segment size in bytes used by default for reduce algorithms. Only has meaning if algorithm is forced and supports segmenting. 0 bytes means no segmentation.
                MCA coll: parameter "coll_tuned_reduce_algorithm_tree_fanout" (current value: "4")
                          Fanout for n-tree used for reduce algorithms. Only has meaning if algorithm is forced and supports n-tree topo based operation.
                MCA coll: parameter "coll_tuned_reduce_algorithm_chain_fanout" (current value: "4")
                          Fanout for chains used for reduce algorithms. Only has meaning if algorithm is forced and supports chain topo based operation.
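The tuned collective parameters above can be forced at run time instead of relying on the built-in decision functions. A minimal sketch of how that looks on the mpirun command line (the `--mca` flag and the parameter names come from the listing above; `./a.out` and the process count are placeholders):

```shell
# Enable the dynamic decision rules, then pin bcast to one algorithm
# (e.g. 3 = pipeline, per the coll_tuned_bcast_algorithm choices above).
# ./a.out stands in for whatever MPI application is being tested.
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_bcast_algorithm 3 \
       -np 4 ./a.out
```

The same parameters can also be set via `OMPI_MCA_`-prefixed environment variables or an mca-params.conf file, which is convenient when bisecting which algorithm choice triggers a problem.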
                  MCA io: parameter "io_base_freelist_initial_size" (current value: "16")
                          Initial MPI-2 IO request freelist size
                  MCA io: parameter "io_base_freelist_max_size" (current value: "64")
                          Max size of the MPI-2 IO request freelist
                  MCA io: parameter "io_base_freelist_increment" (current value: "16")
                          Increment size of the MPI-2 IO request freelist
                  MCA io: parameter "io" (current value: <none>)
                          Default selection set of components for the io framework (<none> means "use all components that can be found")
                  MCA io: parameter "io_base_verbose" (current value: "0")
                          Verbosity level for the io framework (0 = no verbosity)
                  MCA io: parameter "io_romio_priority" (current value: "10")
                          Priority of the io romio component
                  MCA io: parameter "io_romio_delete_priority" (current value: "10")
                          Delete priority of the io romio component
                  MCA io: parameter "io_romio_enable_parallel_optimizations" (current value: "0")
                          Enable set of Open MPI-added options to improve collective file i/o performance
               MCA mpool: parameter "mpool" (current value: <none>)
                          Default selection set of components for the mpool framework (<none> means "use all components that can be found")
               MCA mpool: parameter "mpool_base_verbose" (current value: "0")
                          Verbosity level for the mpool framework (0 = no verbosity)
               MCA mpool: parameter "mpool_sm_size" (current value: "536870912")
               MCA mpool: parameter "mpool_sm_allocator" (current value: "bucket")
               MCA mpool: parameter "mpool_sm_priority" (current value: "0")
               MCA mpool: parameter "mpool_base_use_mem_hooks" (current value: "0")
                          use memory hooks for deregistering freed memory
               MCA mpool: parameter "mpool_use_mem_hooks" (current value: "0")
                          (deprecated, use mpool_base_use_mem_hooks)
               MCA mpool: parameter "mpool_base_disable_sbrk" (current value: "0")
                          use mallopt to override calling sbrk (doesn't return memory to OS!)
               MCA mpool: parameter "mpool_disable_sbrk" (current value: "0")
                          (deprecated, use mpool_base_disable_sbrk)
                 MCA pml: parameter "pml" (current value: "ob1")
                          Default selection set of components for the pml framework (<none> means "use all components that can be found")
                 MCA pml: parameter "pml_base_verbose" (current value: "0")
                          Verbosity level for the pml framework (0 = no verbosity)
                 MCA pml: parameter "pml_ob1_free_list_num" (current value: "4")
                 MCA pml: parameter "pml_ob1_free_list_max" (current value: "-1")
                 MCA pml: parameter "pml_ob1_free_list_inc" (current value: "64")
                 MCA pml: parameter "pml_ob1_priority" (current value: "1")
                 MCA pml: parameter "pml_ob1_eager_limit" (current value: "131072")
                 MCA pml: parameter "pml_ob1_send_pipeline_depth" (current value: "3")
                 MCA pml: parameter "pml_ob1_recv_pipeline_depth" (current value: "4")
                 MCA bml: parameter "bml" (current value: <none>)
                          Default selection set of components for the bml framework (<none> means "use all components that can be found")
                 MCA bml: parameter "bml_base_verbose" (current value: "0")
                          Verbosity level for the bml framework (0 = no verbosity)
                 MCA bml: parameter "bml_r2_priority" (current value: "0")
              MCA rcache: parameter "rcache" (current value: <none>)
                          Default selection set of components for the rcache framework (<none> means "use all components that can be found")
              MCA rcache: parameter "rcache_base_verbose" (current value: "0")
                          Verbosity level for the rcache framework (0 = no verbosity)
              MCA rcache: parameter "rcache_rb_priority" (current value: "0")
                 MCA btl: parameter "btl_base_debug" (current value: "0")
                          If btl_base_debug is 1 standard debug is output, if > 1 verbose debug is output
                 MCA btl: parameter "btl" (current value: <none>)
                          Default selection set of components for the btl framework (<none> means "use all components that can be found")
                 MCA btl: parameter "btl_base_verbose" (current value: "0")
                          Verbosity level for the btl framework (0 = no verbosity)
                 MCA btl: parameter "btl_self_free_list_num" (current value: "0")
                 MCA btl: parameter "btl_self_free_list_max" (current value: "-1")
                 MCA btl: parameter "btl_self_free_list_inc" (current value: "32")
                 MCA btl: parameter "btl_self_eager_limit" (current value: "131072")
                 MCA btl: parameter "btl_self_max_send_size" (current value: "262144")
                 MCA btl: parameter "btl_self_max_rdma_size" (current value: "2147483647")
                 MCA btl: parameter "btl_self_exclusivity" (current value: "65536")
                 MCA btl: parameter "btl_self_flags" (current value: "2")
                 MCA btl: parameter "btl_self_priority" (current value: "0")
                 MCA btl: parameter "btl_sm_free_list_num" (current value: "8")
                 MCA btl: parameter "btl_sm_free_list_max" (current value: "-1")
                 MCA btl: parameter "btl_sm_free_list_inc" (current value: "256")
                 MCA btl: parameter "btl_sm_max_procs" (current value: "-1")
                 MCA btl: parameter "btl_sm_sm_extra_procs" (current value: "2")
                 MCA btl: parameter "btl_sm_mpool" (current value: "sm")
                 MCA btl: parameter "btl_sm_eager_limit" (current value: "1024")
                 MCA btl: parameter "btl_sm_max_frag_size" (current value: "8192")
                 MCA btl: parameter "btl_sm_size_of_cb_queue" (current value: "128")
                 MCA btl: parameter "btl_sm_cb_lazy_free_freq" (current value: "120")
                 MCA btl: parameter "btl_sm_priority" (current value: "0")
                 MCA btl: parameter "btl_tcp_if_include" (current value: <none>)
                 MCA btl: parameter "btl_tcp_if_exclude" (current value: "lo")
                 MCA btl: parameter "btl_tcp_free_list_num" (current value: "8")
                 MCA btl: parameter "btl_tcp_free_list_max" (current value: "-1")
                 MCA btl: parameter "btl_tcp_free_list_inc" (current value: "32")
                 MCA btl: parameter "btl_tcp_sndbuf" (current value: "131072")
                 MCA btl: parameter "btl_tcp_rcvbuf" (current value: "131072")
                 MCA btl: parameter "btl_tcp_endpoint_cache" (current value: "30720")
                 MCA btl: parameter "btl_tcp_exclusivity" (current value: "0")
                 MCA btl: parameter "btl_tcp_eager_limit" (current value: "65536")
                 MCA btl: parameter "btl_tcp_min_send_size" (current value: "65536")
                 MCA btl: parameter "btl_tcp_max_send_size" (current value: "131072")
                 MCA btl: parameter "btl_tcp_min_rdma_size" (current value: "131072")
                 MCA btl: parameter "btl_tcp_max_rdma_size" (current value: "2147483647")
                 MCA btl: parameter "btl_tcp_flags" (current value: "10")
                 MCA btl: parameter "btl_tcp_priority" (current value: "0")
                 MCA btl: parameter "btl_base_include" (current value: <none>)
                 MCA btl: parameter "btl_base_exclude" (current value: <none>)
                MCA topo: parameter "topo" (current value: <none>)
                          Default selection set of components for the topo framework (<none> means "use all components that can be found")
                MCA topo: parameter "topo_base_verbose" (current value: "0")
                          Verbosity level for the topo framework (0 = no verbosity)
                 MCA osc: parameter "osc" (current value: <none>)
                          Default selection set of components for the osc framework (<none> means "use all components that can be found")
                 MCA osc: parameter "osc_base_verbose" (current value: "0")
                          Verbosity level for the osc framework (0 = no verbosity)
                 MCA osc: parameter "osc_pt2pt_fence_sync_method" (current value: "reduce_scatter")
                          How to synchronize fence: reduce_scatter, allreduce, alltoall
                 MCA osc: parameter "osc_pt2pt_eager_send" (current value: "0")
                          Attempt to start data movement during the communication call, instead of at synchronization time. An info key of the same name overrides this value, if given.
                 MCA osc: parameter "osc_pt2pt_no_locks" (current value: "0")
                          Enable optimizations available only if MPI_LOCK is not used.
                 MCA osc: parameter "osc_pt2pt_priority" (current value: "0")
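Since the failure reported above occurs inside the pt2pt one-sided component, the osc_pt2pt parameters just listed are the natural knobs to experiment with. A hedged sketch of how one might vary them while reproducing the bug (the `--mca` syntax is standard; the binary name is a placeholder):

```shell
# Try each of the three fence synchronization methods listed above
# (reduce_scatter, allreduce, alltoall) to see whether the self-send
# failure depends on the fence implementation.
# ./allgatherv_get_test is a placeholder for a reproducer built from
# the subroutine in the message body.
mpirun --mca osc_pt2pt_fence_sync_method allreduce \
       -np 4 ./allgatherv_get_test

# Eager-send behavior during MPI_Get can also be toggled:
mpirun --mca osc_pt2pt_eager_send 1 -np 4 ./allgatherv_get_test
```

This only narrows down where the self-send path goes wrong; it is not a fix.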
              MCA errmgr: parameter "errmgr" (current value: <none>)
                          Default selection set of components for the errmgr framework (<none> means "use all components that can be found")
                 MCA gpr: parameter "gpr_base_maxsize" (current value: "2147483647")
                 MCA gpr: parameter "gpr_base_blocksize" (current value: "512")
                 MCA gpr: parameter "gpr" (current value: <none>)
                          Default selection set of components for the gpr framework (<none> means "use all components that can be found")
                 MCA gpr: parameter "gpr_null_priority" (current value: "0")
                 MCA gpr: parameter "gpr_proxy_debug" (current value: "0")
                 MCA gpr: parameter "gpr_proxy_priority" (current value: "0")
                 MCA gpr: parameter "gpr_replica_debug" (current value: "0")
                 MCA gpr: parameter "gpr_replica_isolate" (current value: "0")
                 MCA gpr: parameter "gpr_replica_priority" (current value: "0")
                 MCA iof: parameter "iof_base_window_size" (current value: "4096")
                 MCA iof: parameter "iof_base_service" (current value: "0.0.0")
                 MCA iof: parameter "iof" (current value: <none>)
                          Default selection set of components for the iof framework (<none> means "use all components that can be found")
                 MCA iof: parameter "iof_proxy_debug" (current value: "1")
                 MCA iof: parameter "iof_proxy_priority" (current value: "0")
                 MCA iof: parameter "iof_svc_debug" (current value: "1")
                 MCA iof: parameter "iof_svc_priority" (current value: "0")
                  MCA ns: parameter "ns" (current value: <none>)
                          Default selection set of components for the ns framework (<none> means "use all components that can be found")
                  MCA ns: parameter "ns_proxy_debug" (current value: "0")
                  MCA ns: parameter "ns_proxy_maxsize" (current value: "2147483647")
                  MCA ns: parameter "ns_proxy_blocksize" (current value: "512")
                  MCA ns: parameter "ns_proxy_priority" (current value: "0")
                  MCA ns: parameter "ns_replica_debug" (current value: "0")
                  MCA ns: parameter "ns_replica_isolate" (current value: "0")
                  MCA ns: parameter "ns_replica_maxsize" (current value: "2147483647")
                  MCA ns: parameter "ns_replica_blocksize" (current value: "512")
                  MCA ns: parameter "ns_replica_priority" (current value: "0")
                 MCA oob: parameter "oob" (current value: <none>)
                          Default selection set of components for the oob framework (<none> means "use all components that can be found")
                 MCA oob: parameter "oob_base_verbose" (current value: "0")
                          Verbosity level for the oob framework (0 = no verbosity)
                 MCA oob: parameter "oob_tcp_peer_limit" (current value: "-1")
                 MCA oob: parameter "oob_tcp_peer_retries" (current value: "60")
                 MCA oob: parameter "oob_tcp_debug" (current value: "0")
                 MCA oob: parameter "oob_tcp_include" (current value: <none>)
                 MCA oob: parameter "oob_tcp_exclude" (current value: <none>)
                 MCA oob: parameter "oob_tcp_sndbuf" (current value: "131072")
                 MCA oob: parameter "oob_tcp_rcvbuf" (current value: "131072")
                 MCA oob: parameter "oob_tcp_priority" (current value: "0")
                 MCA ras: parameter "ras" (current value: <none>)
                          Default selection set of components for the ras framework (<none> means "use all components that can be found")
                 MCA ras: parameter "ras_dash_host_priority" (current value: "5")
                          Selection priority for the dash_host RAS component
                 MCA ras: parameter "ras_hostfile_priority" (current value: "10")
                          Selection priority for the hostfile RAS component
                 MCA ras: parameter "ras_localhost_priority" (current value: "0")
                          Selection priority for the localhost RAS component
                 MCA ras: parameter "ras_slurm_priority" (current value: "75")
                          Priority of the slurm ras component
                 MCA rds: parameter "rds" (current value: <none>)
                          Default selection set of components for the rds framework (<none> means "use all components that can be found")
                 MCA rds: parameter "rds_hostfile_debug" (current value: "0")
                          Toggle debug output for hostfile RDS component
                 MCA rds: parameter "rds_hostfile_path" (current value: "/opt/mpi/pgi/etc/openmpi-default-hostfile")
                          ORTE Host filename
                 MCA rds: parameter "rds_hostfile_priority" (current value: "0")
                 MCA rds: parameter "rds_resfile_debug" (current value: "0")
                          Toggle debug output for resfile RDS component
                 MCA rds: parameter "rds_resfile_name" (current value: <none>)
                          ORTE Resource filename
                 MCA rds: parameter "rds_resfile_priority" (current value: "0")
               MCA rmaps: parameter "rmaps_base_verbose" (current value: "0")
                          Verbosity level for the rmaps framework
               MCA rmaps: parameter "rmaps_base_schedule_policy" (current value: "slot")
                          Scheduling Policy for RMAPS. [slot | node]
               MCA rmaps: parameter "rmaps_base_schedule_local" (current value: "1")
                          If nonzero, allow scheduling MPI applications on the same node as mpirun (default). If zero, do not schedule any MPI applications on the same node as mpirun
               MCA rmaps: parameter "rmaps_base_no_oversubscribe" (current value: "0")
                          If nonzero, then do not allow oversubscription of nodes - mpirun will return an error if there aren't enough nodes to launch all processes without oversubscribing
               MCA rmaps: parameter "rmaps" (current value: <none>)
                          Default selection set of components for the rmaps framework (<none> means "use all components that can be found")
               MCA rmaps: parameter "rmaps_round_robin_debug" (current value: "1")
                          Toggle debug output for Round Robin RMAPS component
               MCA rmaps: parameter "rmaps_round_robin_priority" (current value: "1")
                          Selection priority for Round Robin RMAPS component
                MCA rmgr: parameter "rmgr" (current value: <none>)
                          Default selection set of components for the rmgr framework (<none> means "use all components that can be found")
                MCA rmgr: parameter "rmgr_proxy_priority" (current value: "0")
                MCA rmgr: parameter "rmgr_urm_priority" (current value: "0")
                 MCA rml: parameter "rml" (current value: <none>)
                          Default selection set of components for the rml framework (<none> means "use all components that can be found")
                 MCA rml: parameter "rml_base_verbose" (current value: "0")
                          Verbosity level for the rml framework (0 = no verbosity)
                 MCA rml: parameter "rml_oob_priority" (current value: "0")
                 MCA pls: parameter "pls" (current value: <none>)
                          Default selection set of components for the pls framework (<none> means "use all components that can be found")
                 MCA pls: parameter "pls_fork_reap" (current value: "1")
                          Whether to wait to reap all children before finalizing or not
                 MCA pls: parameter "pls_fork_reap_timeout" (current value: "0")
                          When killing children processes, first send a SIGTERM, then wait at least this timeout (in seconds), then send a SIGKILL
                 MCA pls: parameter "pls_fork_priority" (current value: "1")
                          Priority of this component
                 MCA pls: parameter "pls_fork_debug" (current value: "0")
                          Whether to enable debugging output or not
                 MCA pls: parameter "pls_rsh_debug" (current value: "0")
                          Whether or not to enable debugging output for the rsh pls component (0 or 1)
                 MCA pls: parameter "pls_rsh_num_concurrent" (current value: "128")
                          How many pls_rsh_agent instances to invoke concurrently (must be > 0)
                 MCA pls: parameter "pls_rsh_orted" (current value: "orted")
                          The command name that the rsh pls component will invoke for the ORTE daemon
                 MCA pls: parameter "pls_rsh_priority" (current value: "10")
                          Priority of the rsh pls component
                 MCA pls: parameter "pls_rsh_delay" (current value: "1")
                          Delay (in seconds) between invocations of the remote agent, but only used when the "debug" MCA parameter is true, or the top-level MCA debugging is enabled (otherwise this value is ignored)
                 MCA pls: parameter "pls_rsh_reap" (current value: "1")
                          If set to 1, wait for all the processes to complete before exiting. Otherwise, quit immediately -- without waiting for confirmation that all other processes in the job have completed.
                 MCA pls: parameter "pls_rsh_assume_same_shell" (current value: "1")
                          If set to 1, assume that the shell on the remote node is the same as the shell on the local node. Otherwise, probe for what the remote shell is.
                 MCA pls: parameter "pls_rsh_agent" (current value: "ssh : rsh")
                          The command used to launch executables on remote nodes (typically either "ssh" or "rsh")
                 MCA pls: parameter "pls_slurm_debug" (current value: "0")
                          Enable debugging of slurm pls
                 MCA pls: parameter "pls_slurm_priority" (current value: "75")
                          Default selection priority
                 MCA pls: parameter "pls_slurm_orted" (current value: "orted")
                          Command to use to start proxy orted
                 MCA pls: parameter "pls_slurm_args" (current value: <none>)
                          Custom arguments to srun
                 MCA sds: parameter "sds" (current value: <none>)
                          Default selection set of components for the sds framework (<none> means "use all components that can be found")
                 MCA sds: parameter "sds_base_verbose" (current value: "0")
                          Verbosity level for the sds framework (0 = no verbosity)
                 MCA sds: parameter "sds_env_priority" (current value: "0")
                 MCA sds: parameter "sds_pipe_priority" (current value: "0")
                 MCA sds: parameter "sds_seed_priority" (current value: "0")
                 MCA sds: parameter "sds_singleton_priority" (current value: "0")
                 MCA sds: parameter "sds_slurm_priority" (current value: "0")
                 MCA soh: parameter "soh" (current value: <none>)
                          Default selection set of components for the soh framework (<none> means "use all components that can be found")