From: Mohamad Chaarawi (mschaara_at_[hidden])
Date: 2007-09-21 16:48:37


I have OFED 1.1 installed.
I looked in the lib64 dir; there is something there called libibumad.
In any case, I tried what you suggested and set the variables myself in
the ini file:

mvapich2_setenv = F77 gfortran
mvapich2_setenv = LIBS -L/usr/local/ofed/lib64 -libverbs -lpthread

and that worked fine :)

In their make.* file, however, when they set LIBS they have
-L${OPEN_IB_LIB} ${BLCR_LIB} ${RDMA_CM_LIBS} -libverbs -libumad -lpthread
I tried using those variables instead of hard-coding the path
(/usr/local/ofed/lib64), but they were never expanded:

mvapich2_setenv = LIBS -L${OPEN_IB_LIB} ${BLCR_LIB} ${RDMA_CM_LIBS} -libverbs -lpthread

Is there a way for setenv to expand variables the way their export does?
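
(A side note on why the hard-coded path links while the ${...} form does
not -- this is just my understanding of the difference, not verified
against the make.mvapich2.ofa script itself; the variable names below are
simply the ones from this thread:

# In their make.* shell script, the shell expands the variables when the
# assignment is evaluated, so the linker sees a real path:
OPEN_IB_LIB=/usr/local/ofed/lib64
export LIBS="-L${OPEN_IB_LIB} -libverbs -lpthread"
# LIBS is now "-L/usr/local/ofed/lib64 -libverbs -lpthread"

# An mvapich2_setenv line in the ini file appears to set the value as a
# literal string, so ${OPEN_IB_LIB} reaches the linker unexpanded unless
# the build script re-expands it later.

That would explain why only the fully expanded -L/usr/local/ofed/lib64
form worked for me.)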

Jeff Squyres wrote:
> On Sep 21, 2007, at 2:34 PM, Mohamad Chaarawi wrote:
>
>
>> Attached..
>>
>> something to do with libumad i guess..
>>
>
> configure:3032: gcc -D_X86_64_ -D_SMP_ -DUSE_HEADER_CACHING -DONE_SIDED
> -DMPID_USE_SEQUENCE_NUMBERS -D_SHMEM_COLL_ -I/usr/local/ofed/include -O2
> conftest.c -L/usr/local/ofed/lib64 -libverbs -libumad -lpthread >&5
> /usr/bin/ld: cannot find -libumad
>
> Weird. I see that both MVAPICH and MVAPICH2 hard-code LIBS to
> include -libumad.
>
> What version of OFED do you have installed? I don't remember if
> libumad was something new in OFED 1.2 or not (I see it installed via
> OFED 1.2 but I don't have any machines left that have OFED 1.1
> installed).
>

> You may want to run it by hand and run it through MTT and compare the
> config.logs -- *something* is different if it works by hand and
> doesn't work via MTT...
>
> See below.
>
>
>> Interestingly enough, if I run mtt directly from the head node
>> (before, I was submitting it as a batch job), the C test passes, but
>> the configure is picking up g77 instead of gfortran, which fails the
>> f77 tests since gcc-4.2.0 doesn't have g77.
>>
>
> Looking through their two make.* scripts, you can setenv F77 in the
> MPI install section to override this. This will override their hard-
> coded default of g77.
>
> Similarly, you can override their setting of -libumad by setenv'ing
> your own LIBS in the MPI install section. Look at their make.*
> scripts to see the values that they're setting and then set your own
> override without -libumad (if that's the proper solution).
>
> Make sense?
>
>
>
>> Jeff Squyres wrote:
>>
>>> Yoinks. :-(
>>>
>>> What does the corresponding config.log say? It should contain
>>> the exact error that occurred.
>>>
>>> This configure test is simply checking to see if the C compiler
>>> works. IIRC, it's trying to compile, link, and run a trivial C
>>> application (something akin to "hello world").
>>>
>>>
>>> On Sep 21, 2007, at 1:30 PM, Mohamad Chaarawi wrote:
>>>
>>>
>>>
>>>> Hey all,
>>>>
>>>> I'm trying to execute the collective bakeoff tests for OMPI, MPICH2,
>>>> and MVAPICH2. OMPI and MPICH2 are working out fine; however, when
>>>> MVAPICH2 is configuring, it gives an error with the C compiler,
>>>> pasted at the end.
>>>>
>>>> Note that I get the error when I'm running the mtt client. When I go
>>>> into the scratch directory to the MVAPICH2 sources and configure it
>>>> myself, with the same configure arguments shown in config.log, it
>>>> works out fine.
>>>>
>>>> I've been banging my head for a while now trying to figure this out,
>>>> but I've gotten nowhere. It's probably some environment setting
>>>> getting messed up somewhere, but I don't know.
>>>> If anyone has stumbled upon this before, let me know.
>>>> I attached my ini file.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> OUT:Configuring MVAPICH2...
>>>> OUT:Configuring MPICH2 version MVAPICH2-0.9.8 with
>>>> --prefix=/home/mschaara/mtt-testing/scratch-coll/installs/NhjQ/install
>>>> --with-device=osu_ch3:mrail --with-rdma=gen2 --with-pm=mpd
>>>> --disable-romio --without-mpe
>>>> OUT:sourcing
>>>> /home/mschaara/mtt-testing/scratch-coll/installs/NhjQ/src/mvapich2-0.9.8p3/src/pm/mpd/setup_pm
>>>> OUT:checking for gcc... OUT:gcc
>>>> OUT:checking for C compiler default output file name...
>>>> OUT:configure: error: C compiler cannot create executables
>>>> See `config.log' for more details.
>>>> OUT:Configuring MPICH2 version MVAPICH2-0.9.8 with
>>>> --prefix=/home/mschaara/mtt-testing/scratch-coll/installs/NhjQ/install
>>>> --with-device=osu_ch3:mrail --with-rdma=gen2 --with-pm=mpd
>>>> --disable-romio --without-mpe
>>>> sourcing
>>>> /home/mschaara/mtt-testing/scratch-coll/installs/NhjQ/src/mvapich2-0.9.8p3/src/pm/mpd/setup_pm
>>>> checking for gcc... gcc
>>>> checking for C compiler default output file name...
>>>> configure: error: C compiler cannot create executables
>>>> See `config.log' for more details.
>>>> OUT:Failure in configuration.
>>>> Please file an error report to mvapich-discuss_at_cse.ohio-state.edu
>>>> with all your log files.
>>>> Command complete, exit status: 1
>>>>
>>>>
>>>> --
>>>> Mohamad Chaarawi
>>>> Instructional Assistant http://www.cs.uh.edu/~mschaara
>>>> Department of Computer Science University of Houston
>>>> 4800 Calhoun, PGH Room 526 Houston, TX 77204, USA
>>>> #========================================================================
>>>> # Generic OMPI core performance testing template configuration
>>>> #========================================================================
>>>>
>>>> [MTT]
>>>> # Leave this string so that we can identify this data subset in the
>>>> # database
>>>> # OMPI Core: Use a "test" label until we're ready to run real results
>>>> description = [testbake]
>>>> #description = [2007 collective performance bakeoff]
>>>> # OMPI Core: Use the "trial" flag until we're ready to run real results
>>>> trial = 1
>>>>
>>>> # Put other values here as relevant to your environment.
>>>>
>>>> #----------------------------------------------------------------------
>>>>
>>>> [Lock]
>>>> # Put values here as relevant to your environment.
>>>>
>>>> #========================================================================
>>>> # MPI get phase
>>>> #========================================================================
>>>>
>>>> [MPI get: MVAPICH2]
>>>> mpi_details = MVAPICH2
>>>>
>>>> module = Download
>>>> download_url = http://mvapich.cse.ohio-state.edu/download/mvapich2/mvapich2-0.9.8p3.tar.gz
>>>>
>>>> #========================================================================
>>>> # Install MPI phase
>>>> #========================================================================
>>>>
>>>> [MPI install: MVAPICH2]
>>>> mpi_get = mvapich2
>>>> save_stdout_on_success = 1
>>>> merge_stdout_stderr = 0
>>>> # Adjust this for your site (this is what works at Cisco). Needed to
>>>> # launch in SLURM; adding this to LD_LIBRARY_PATH here propagates this
>>>> # all the way through the test run phases that use this MPI install,
>>>> # where the test executables will need to have this set.
>>>> prepend_path = LD_LIBRARY_PATH /opt/SLURM/lib
>>>>
>>>> module = MVAPICH2
>>>> # Adjust this to be where your OFED is installed
>>>> mvapich2_setenv = OPEN_IB_HOME /usr/local/ofed
>>>> mvapich2_build_script = make.mvapich2.ofa
>>>> mvapich2_compiler_name = gnu
>>>> mvapich2_compiler_version = &get_gcc_version()
>>>> # These are needed to launch through SLURM; adjust as appropriate.
>>>> mvapich2_additional_wrapper_ldflags = -L/opt/SLURM/lib
>>>> mvapich2_additional_wrapper_libs = -lpmi
>>>>
>>>> #========================================================================
>>>> # MPI run details
>>>> #========================================================================
>>>>
>>>> [MPI Details: MVAPICH2]
>>>>
>>>> # Launching through SLURM. If you use mpdboot instead, you need to
>>>> # ensure that multiple mpd's on the same node don't conflict (or never
>>>> # happen).
>>>> exec = srun @alloc@ -n &test_np() &test_executable() &test_argv()
>>>>
>>>> # If not using SLURM, you'll need something like this (not tested).
>>>> # You may need different hostfiles for running by slot/by node.
>>>> #exec = mpiexec -np &test_np() --host &hostlist() &test_executable()
>>>>
>>>> network = loopback,verbs,shmem
>>>>
>>>> # In this example, each node has 4 CPUs, so to launch "by node", just
>>>> # specify that each MPI process will use 4 CPUs.
>>>> alloc = &if(&eq(&test_alloc(), "node"), "-c 2", "")
>>>>
>>>> #========================================================================
>>>> # Test get phase
>>>> #========================================================================
>>>>
>>>> [Test get: skampi]
>>>> module = SVN
>>>> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/skampi-5.0.1
>>>>
>>>> #========================================================================
>>>> # Test build phase
>>>> #========================================================================
>>>>
>>>> [Test build: skampi]
>>>> test_get = skampi
>>>> save_stdout_on_success = 1
>>>> merge_stdout_stderr = 1
>>>> stderr_save_lines = 100
>>>>
>>>> module = Shell
>>>> # Set EVERYONE_CAN_MPI_IO for HP MPI
>>>> shell_build_command = <<EOT
>>>> make CFLAGS="-O2 -DPRODUCE_SPARSE_OUTPUT -DEVERYONE_CAN_MPI_IO"
>>>> EOT
>>>>
>>>> #========================================================================
>>>> # Test Run phase
>>>> #========================================================================
>>>>
>>>> [Test run: skampi]
>>>> test_build = skampi
>>>> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
>>>> # Timeout heuristic: 10 minutes
>>>> timeout = 10:00
>>>> save_stdout_on_pass = 1
>>>> # Ensure to leave this value as "-1", or performance results could
>>>> # be lost!
>>>> stdout_save_lines = -1
>>>> merge_stdout_stderr = 1
>>>> np = &env_max_procs()
>>>>
>>>> argv = -i &find("mtt_.+.ski", "input_files_bakeoff")
>>>>
>>>> specify_module = Simple
>>>> analyze_module = SKaMPI
>>>>
>>>> simple_pass:tests = skampi
>>>>
>>>> #========================================================================
>>>> # Reporter phase
>>>> #========================================================================
>>>>
>>>> [Reporter: IU database]
>>>> module = MTTDatabase
>>>>
>>>> mttdatabase_realm = OMPI
>>>> mttdatabase_url = https://www.open-mpi.org/mtt/submit/
>>>> # Change this to be the username and password for your submit user.
>>>> # Get this from the OMPI MTT administrator.
>>>> mttdatabase_username = uh
>>>> mttdatabase_password = &stringify(&cat("/home/mschaara/mtt-testing/mtt-db-password.txt"))
>>>> # Change this to be some short string identifying your cluster.
>>>> mttdatabase_platform = shark
>>>>
>>>> mttdatabase_debug_filename = mttdb_debug_file_perf
>>>> mttdatabase_keep_debug_files = 1
>>>>
>>>> #----------------------------------------------------------------------
>>>>
>>>> # This is a backup reporter; it also writes results to a local text
>>>> # file
>>>>
>>>> [Reporter: text file backup]
>>>> module = TextFile
>>>>
>>>> textfile_filename = $phase-$section-$mpi_name-$mpi_version.txt
>>>>
>>>> textfile_summary_header = <<EOT
>>>> Hostname: &shell("hostname")
>>>> uname: &shell("uname -a")
>>>> Username: &shell("who am i")
>>>> EOT
>>>>
>>>> textfile_summary_footer =
>>>> textfile_detail_header =
>>>> textfile_detail_footer =
>>>>
>>>> textfile_textwrap = 78
>>>>
>>>>
>>>
>>>
>> This file contains any messages produced by compilers while
>> running configure, to aid debugging if configure makes a mistake.
>>
>> It was created by configure, which was
>> generated by GNU Autoconf 2.59. Invocation command line was
>>
>> $ ./configure --prefix=/home/mschaara/mtt-testing/scratch-coll/
>> installs/UvUA/install --with-device=osu_ch3:mrail --with-rdma=gen2
>> --with-pm=mpd --disable-romio --without-mpe
>>
>> ## --------- ##
>> ## Platform. ##
>> ## --------- ##
>>
>> hostname = shark01
>> uname -m = x86_64
>> uname -r = 2.6.16.21-smp
>> uname -s = Linux
>> uname -v = #2 SMP Thu Mar 1 10:09:02 CST 2007
>>
>> /usr/bin/uname -p = unknown
>> /bin/uname -X = unknown
>>
>> /bin/arch = x86_64
>> /usr/bin/arch -k = unknown
>> /usr/convex/getsysinfo = unknown
>> hostinfo = unknown
>> /bin/machine = unknown
>> /usr/bin/oslevel = unknown
>> /bin/universe = unknown
>>
>> PATH: /opt/papi-3.5.0/bin
>> PATH: /home/mschaara/OpenMPI/bin
>> PATH: /opt/gcc-4.2.0/bin
>> PATH: /opt/SLURM/bin
>> PATH: /opt/papi-3.5.0/bin
>> PATH: /home/mschaara/OpenMPI/bin
>> PATH: /opt/gcc-4.2.0/bin
>> PATH: /opt/SLURM/bin
>> PATH: /opt/papi-3.5.0/bin
>> PATH: /home/mschaara/OpenMPI/bin
>> PATH: /opt/gcc-4.2.0/bin
>> PATH: /opt/SLURM/bin
>> PATH: /home/mschaara/bin
>> PATH: /usr/local/bin
>> PATH: /usr/bin
>> PATH: /usr/X11R6/bin
>> PATH: /bin
>> PATH: /usr/games
>> PATH: /opt/bin
>> PATH: /opt/gnome/bin
>> PATH: /opt/kde3/bin
>> PATH: /usr/lib64/jvm/jre/bin
>> PATH: /opt/c3-4/
>> PATH: /usr/local/ofed/bin
>> PATH: /usr/lib/mit/bin
>> PATH: /usr/lib/mit/sbin
>> PATH: /usr/local/ofed/sbin
>>
>>
>> ## ----------- ##
>> ## Core tests. ##
>> ## ----------- ##
>>
>> configure:2720: checking for gcc
>> configure:2746: result: gcc
>> configure:2990: checking for C compiler version
>> configure:2993: gcc --version </dev/null >&5
>> gcc (GCC) 4.2.0
>> Copyright (C) 2007 Free Software Foundation, Inc.
>> This is free software; see the source for copying conditions.
>> There is NO
>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
>> PURPOSE.
>>
>> configure:2996: $? = 0
>> configure:2998: gcc -v </dev/null >&5
>> Using built-in specs.
>> Target: x86_64-unknown-linux-gnu
>> Configured with: ../gcc-4.2.0/configure --prefix=/opt/gcc-4.2.0/ --
>> disable-multilib --enable-languages=c,c++,fortran,java
>> Thread model: posix
>> gcc version 4.2.0
>> configure:3001: $? = 0
>> configure:3003: gcc -V </dev/null >&5
>> gcc: '-V' option must have argument
>> configure:3006: $? = 1
>> configure:3029: checking for C compiler default output file name
>> configure:3032: gcc -D_X86_64_ -D_SMP_ -DUSE_HEADER_CACHING -DONE_SIDED
>> -DMPID_USE_SEQUENCE_NUMBERS -D_SHMEM_COLL_ -I/usr/local/ofed/include -O2
>> conftest.c -L/usr/local/ofed/lib64 -libverbs -libumad -lpthread >&5
>> /usr/bin/ld: cannot find -libumad
>> collect2: ld returned 1 exit status
>> configure:3035: $? = 1
>> configure: failed program was:
>> | /* confdefs.h. */
>> |
>> | #define PACKAGE_NAME ""
>> | #define PACKAGE_TARNAME ""
>> | #define PACKAGE_VERSION ""
>> | #define PACKAGE_STRING ""
>> | #define PACKAGE_BUGREPORT ""
>> | #define HAVE_ERROR_CHECKING MPID_ERROR_LEVEL_ALL
>> | #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
>> | #define USE_LOGGING MPID_LOGGING_NONE
>> | #define MPICH_SINGLE_THREADED 1
>> | #define MPICH_THREAD_LEVEL MPI_THREAD_FUNNELED
>> | #define USE_THREAD_IMPL MPICH_THREAD_IMPL_NONE
>> | /* end confdefs.h. */
>> |
>> | int
>> | main ()
>> | {
>> |
>> | ;
>> | return 0;
>> | }
>> configure:3074: error: C compiler cannot create executables
>> See `config.log' for more details.
>>
>> ## ---------------- ##
>> ## Cache variables. ##
>> ## ---------------- ##
>>
>> ac_cv_env_CC_set=set
>> ac_cv_env_CC_value=gcc
>> ac_cv_env_CFLAGS_set=set
>> ac_cv_env_CFLAGS_value='-D_X86_64_ -D_SMP_ -DUSE_HEADER_CACHING -
>> DONE_SIDED -DMPID_USE_SEQUENCE_NUMBERS -D_SHMEM_COLL_ -I/usr/
>> local/ofed/include -O2'
>> ac_cv_env_CPPFLAGS_set=
>> ac_cv_env_CPPFLAGS_value=
>> ac_cv_env_CPP_set=
>> ac_cv_env_CPP_value=
>> ac_cv_env_CXXFLAGS_set=
>> ac_cv_env_CXXFLAGS_value=
>> ac_cv_env_CXX_set=set
>> ac_cv_env_CXX_value=g++
>> ac_cv_env_F77_set=set
>> ac_cv_env_F77_value=g77
>> ac_cv_env_F90FLAGS_set=
>> ac_cv_env_F90FLAGS_value=
>> ac_cv_env_F90_set=set
>> ac_cv_env_F90_value=
>> ac_cv_env_FFLAGS_set=set
>> ac_cv_env_FFLAGS_value=-L/usr/local/ofed/lib64
>> ac_cv_env_LDFLAGS_set=
>> ac_cv_env_LDFLAGS_value=
>> ac_cv_env_build_alias_set=
>> ac_cv_env_build_alias_value=
>> ac_cv_env_host_alias_set=
>> ac_cv_env_host_alias_value=
>> ac_cv_env_target_alias_set=
>> ac_cv_env_target_alias_value=
>> ac_cv_prog_ac_ct_CC=gcc
>> pac_cv_my_conf_dir=/home/mschaara/mtt-testing/scratch-coll/installs/
>> UvUA/src/mvapich2-0.9.8p3
>>
>> ## ----------------- ##
>> ## Output variables. ##
>> ## ----------------- ##
>>
>> ADDRESS_KIND=''
>> ALLOCA=''
>> AR=''
>> BSEND_OVERHEAD=''
>> BUILD_TVDLL=''
>> CC='gcc'
>> CC_SHL=''
>> CC_SHL_DBG=''
>> CFLAGS='-D_X86_64_ -D_SMP_ -DUSE_HEADER_CACHING -DONE_SIDED -
>> DMPID_USE_SEQUENCE_NUMBERS -D_SHMEM_COLL_ -I/usr/local/ofed/
>> include -O2'
>> CONFIGURE_ARGUMENTS='--prefix=/home/mschaara/mtt-testing/scratch-
>> coll/installs/UvUA/install --with-device=osu_ch3:mrail --with-
>> rdma=gen2 --with-pm=mpd --disable-romio --without-mpe'
>> CPP=''
>> CPPFLAGS=''
>> CREATESHLIB=''
>> CXX='g++'
>> CXXFLAGS=''
>> CXX_LINKPATH_SHL=''
>> CXX_SHL=''
>> C_LINKPATH_SHL=''
>> C_LINK_SHL=''
>> C_LINK_SHL_DBG=''
>> DEFS=''
>> DEVICE='osu_ch3:mrail'
>> DLLIMPORT=''
>> DOCTEXT=''
>> DOCTEXTSTYLE=''
>> ECHO_C=''
>> ECHO_N='-n'
>> ECHO_T=''
>> EGREP=''
>> ENABLE_SHLIB=''
>> ETAGS=''
>> ETAGSADD=''
>> EXAMPLE_LIBS=''
>> EXEEXT=''
>> EXTERNAL_SRC_DIRS=''
>> EXTRA_STATUS_DECL=''
>> F77='g77'
>> F77CPP=''
>> F77_COMPLEX16=''
>> F77_COMPLEX32=''
>> F77_COMPLEX8=''
>> F77_INCDIR=''
>> F77_INTEGER16=''
>> F77_INTEGER1=''
>> F77_INTEGER2=''
>> F77_INTEGER4=''
>> F77_INTEGER8=''
>> F77_IN_C_LIBS=''
>> F77_LIBDIR_LEADER=''
>> F77_NAME_MANGLE=''
>> F77_REAL16=''
>> F77_REAL4=''
>> F77_REAL8=''
>> F90=''
>> F90CPP=''
>> F90EXT=''
>> F90FLAGS=''
>> F90INC=''
>> F90INCFLAG=''
>> F90MODEXT=''
>> F90MODINCFLAG=''
>> F90MODINCSPEC=''
>> F90_LINKPATH_SHL=''
>> F90_SHL=''
>> F90_WORK_FILES_ARG=''
>> FC=''
>> FC_LINKPATH_SHL=''
>> FC_SHL=''
>> FFLAGS='-L/usr/local/ofed/lib64'
>> FINCLUDES=''
>> FLIBS=''
>> FWRAPNAME='fmpich'
>> GCC=''
>> HAVE_CXX_EXCEPTIONS=''
>> HAVE_ROMIO=''
>> INCLUDE_MPICXX_H=''
>> INSTALL_DATA=''
>> INSTALL_PROGRAM=''
>> INSTALL_SCRIPT=''
>> INT16_T=''
>> INT32_T=''
>> INT64_T=''
>> LDFLAGS=''
>> LIBOBJS=''
>> LIBS='-L/usr/local/ofed/lib64 -libverbs -libumad -lpthread'
>> LIBTOOL=''
>> LTLIBOBJS=''
>> MAKE=''
>> MAKE_DEPEND_C=''
>> MANY_PM='no'
>> MKDIR_P=''
>> MPE_THREAD_LIB_NAME=''
>> MPICC=''
>> MPICH_TIMER_KIND=''
>> MPICVSHOME=''
>> MPICXX=''
>> MPICXXLIBNAME='mpichcxx'
>> MPID_TIMER_TYPE=''
>> MPIF77=''
>> MPIF90=''
>> MPIFLIBNAME='mpich'
>> MPIFPMPI=''
>> MPILIBNAME='mpich'
>> MPIMODNAME=''
>> MPIU_DLL_SPEC_DEF=''
>> MPI_2COMPLEX=''
>> MPI_2DOUBLE_COMPLEX=''
>> MPI_2DOUBLE_PRECISION=''
>> MPI_2INT=''
>> MPI_2INTEGER=''
>> MPI_2REAL=''
>> MPI_AINT=''
>> MPI_BYTE=''
>> MPI_CFLAGS=''
>> MPI_CHAR=''
>> MPI_CHARACTER=''
>> MPI_COMPLEX16=''
>> MPI_COMPLEX32=''
>> MPI_COMPLEX8=''
>> MPI_COMPLEX=''
>> MPI_CXXFLAGS=''
>> MPI_DOUBLE=''
>> MPI_DOUBLE_COMPLEX=''
>> MPI_DOUBLE_INT=''
>> MPI_DOUBLE_PRECISION=''
>> MPI_F77_BYTE=''
>> MPI_F77_LB=''
>> MPI_F77_PACKED=''
>> MPI_F77_UB=''
>> MPI_F90FLAGS='-O2'
>> MPI_FFLAGS=''
>> MPI_FINT=''
>> MPI_FLOAT=''
>> MPI_FLOAT_INT=''
>> MPI_INT=''
>> MPI_INTEGER16=''
>> MPI_INTEGER1=''
>> MPI_INTEGER2=''
>> MPI_INTEGER4=''
>> MPI_INTEGER8=''
>> MPI_INTEGER=''
>> MPI_LB=''
>> MPI_LDFLAGS=''
>> MPI_LOGICAL=''
>> MPI_LONG=''
>> MPI_LONG_DOUBLE=''
>> MPI_LONG_DOUBLE_INT=''
>> MPI_LONG_INT=''
>> MPI_LONG_LONG=''
>> MPI_MAX_PROCESSOR_NAME=''
>> MPI_OFFSET=''
>> MPI_OFFSET_TYPEDEF=''
>> MPI_PACKED=''
>> MPI_REAL16=''
>> MPI_REAL4=''
>> MPI_REAL8=''
>> MPI_REAL=''
>> MPI_SHORT=''
>> MPI_SHORT_INT=''
>> MPI_SIGNED_CHAR=''
>> MPI_STATUS_SIZE=''
>> MPI_UB=''
>> MPI_UNSIGNED_CHAR=''
>> MPI_UNSIGNED_INT=''
>> MPI_UNSIGNED_LONG=''
>> MPI_UNSIGNED_LONG_LONG=''
>> MPI_UNSIGNED_SHORT=''
>> MPI_WCHAR=''
>> NEEDSPLIB=''
>> NO_WEAK_SYM=''
>> NO_WEAK_SYM_TARGET=''
>> OBJEXT=''
>> OFFSET_KIND=''
>> PACKAGE_BUGREPORT='mpich2-maint_at_[hidden]'
>> PACKAGE_NAME='MPICH2'
>> PACKAGE_STRING=''
>> PACKAGE_TARNAME='mpich2-MVAPICH2-0.9.8'
>> PACKAGE_VERSION='MVAPICH2-0.9.8'
>> PATH_SEPARATOR=':'
>> PERL5=''
>> PERL=''
>> PMPIFLIBNAME='pmpich'
>> PMPILIBNAME='pmpich'
>> PROFILE_DEF_MPI=''
>> RANLIB=''
>> RANLIB_AFTER_INSTALL=''
>> SET_CFLAGS=''
>> SET_MAKE=''
>> SHELL='/bin/sh'
>> SHLIB_EXT=''
>> SHLIB_FROM_LO=''
>> SHLIB_INSTALL=''
>> SIZEOF_MPI_STATUS=''
>> TESTCPP=''
>> THR_CFLAGS=''
>> THR_CPPFLAGS=''
>> THR_DEFS=''
>> THR_LDFLAGS=''
>> THR_LIBS=''
>> VERSION='MVAPICH2-0.9.8'
>> VPATH=''
>> ac_ct_CC='gcc'
>> ac_ct_CXX=''
>> ac_ct_F77=''
>> ac_ct_F90=''
>> ac_ct_RANLIB=''
>> bindings=''
>> bindings_dirs=''
>> bindir='${exec_prefix}/bin'
>> build_alias=''
>> datadir='${prefix}/share'
>> debugger_dir=''
>> device_name='osu_ch3'
>> docdir='${prefix}/doc'
>> exec_prefix='NONE'
>> host_alias=''
>> htmldir='${prefix}/www'
>> includedir='${prefix}/include'
>> infodir='${prefix}/info'
>> libdir='${exec_prefix}/lib'
>> libexecdir='${exec_prefix}/libexec'
>> localstatedir='${prefix}/var'
>> logging_dir=''
>> logging_name='none'
>> logging_subdirs=''
>> mandir='${prefix}/man'
>> master_top_builddir='/home/mschaara/mtt-testing/scratch-coll/
>> installs/UvUA/src/mvapich2-0.9.8p3'
>> master_top_srcdir='/home/mschaara/mtt-testing/scratch-coll/installs/
>> UvUA/src/mvapich2-0.9.8p3'
>> modincdir=''
>> mpe_dir=''
>> nameserv_name=''
>> oldincludedir='/usr/include'
>> other_install_dirs=' src/pm/mpd'
>> other_pm_names=''
>> pac_prog=''
>> pm_name='mpd'
>> pmi_name='simple'
>> prefix='/home/mschaara/mtt-testing/scratch-coll/installs/UvUA/install'
>> program_transform_name='s,x,x,'
>> romio_dir=''
>> sbindir='${exec_prefix}/sbin'
>> sharedstatedir='${prefix}/com'
>> subdirs=''
>> subsystems=' src/pmi/simple src/pm/mpd'
>> sysconfdir='${prefix}/etc'
>> target_alias=''
>>
>> ## ------------- ##
>> ## Output files. ##
>> ## ------------- ##
>>
>> MPE_THREAD_FUNCS=''
>> MPE_THREAD_TYPEDEFS=''
>>
>> ## ----------- ##
>> ## confdefs.h. ##
>> ## ----------- ##
>>
>> #define HAVE_ERROR_CHECKING MPID_ERROR_LEVEL_ALL
>> #define MPICH_ERROR_MSG_LEVEL MPICH_ERROR_MSG_ALL
>> #define MPICH_SINGLE_THREADED 1
>> #define MPICH_THREAD_LEVEL MPI_THREAD_FUNNELED
>> #define PACKAGE_BUGREPORT ""
>> #define PACKAGE_NAME ""
>> #define PACKAGE_STRING ""
>> #define PACKAGE_TARNAME ""
>> #define PACKAGE_VERSION ""
>> #define USE_LOGGING MPID_LOGGING_NONE
>> #define USE_THREAD_IMPL MPICH_THREAD_IMPL_NONE
>>
>> configure: exit 77
>>