
Subject: Re: [OMPI users] Anybody built a working 1.4.1 on Solaris 8 (Sparc)?
From: Terry Dontje (Terry.Dontje_at_[hidden])
Date: 2010-02-05 12:24:13


We haven't tried Solaris 8 in quite some time. However, for your first
issue, did you include the --enable-heterogeneous option on your
configure command?
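
Something like this ought to do it (just a sketch based on the
configure line from your report below, so adjust paths as needed):

  ./configure --with-sge --enable-heterogeneous --prefix=/opt/ompi141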

Since you are mixing IA-32 and SPARC nodes, you'll want to include this
so the endian issue doesn't bite you.
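
To double-check that a build actually has it, ompi_info should report a
"Heterogeneous support" line (going from memory here, so the exact
wording may differ):

  ompi_info | grep -i hetero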

--td

> Message: 5
> Date: Thu, 04 Feb 2010 13:52:05 -0800
> From: "David Mathog" <mathog_at_[hidden]>
> Subject: [OMPI users] Anybody built a working 1.4.1 on Solaris 8
> (Sparc)?
> To: users_at_[hidden]
> Message-ID: <E1Nd9cL-00016Z-6d_at_[hidden]>
> Content-Type: text/plain; charset=iso-8859-1
>
> Has anybody built 1.4.1 on Solaris 8 (Sparc)? It isn't going very well
> here. If you succeeded at this, please tell me how you did it.
>
> Here is my tale of woe.
>
> First attempt with gcc (3.4.6 from SunFreeware) and
>
> ./configure --with-sge --prefix=/opt/ompi141
> gmake all install >build_log1 2>&1
>
> built OK (it needed the same source changes Brian Blank described for
> 1.3.1, or it wouldn't compile:
>
> http://www.open-mpi.org/community/lists/users/2009/02/7994.php
>
> ), and mpirun worked OK with hello_c as long as it only sent jobs to
> the same machine, like this:
>
> cat >justme <<EOD
> nameofsolaris8machine
> EOD
> mpirun --prefix /opt/ompi141 --mca plm_rsh_agent rsh \
> -np 1 --hostfile justme hello_c
>
> but try to send them to another machine (Linux IA32) and...
>
> cat >linuxboxen <<EOD
> linuxia32_1
> linuxia32_2
> linuxia32_3
> EOD
> mpirun --prefix /opt/ompi141 --mca plm_rsh_agent rsh \
> -np 1 --hostfile linuxboxen hello_c
>
> mca_oob_tcp_msg_send_handler: writev failed; Bad file descriptor
>
> Note that the IA32 machines can send jobs back and forth to each other
> without a problem using the same sort of test.
>
> Second attempt: I tried building with the Forte 7 tools instead of gcc:
>
> ./configure --with-sge --prefix=/opt/ompi141 \
> CFLAGS="-xarch=v8plusa" \
> CXXFLAGS="-xarch=v8plusa" \
> FFLAGS="-xarch=v8plusa" \
> FCFLAGS="-xarch=v8plusa" \
> CC=/opt/SUNWspro/bin/cc \
> CXX=/opt/SUNWspro/bin/CC \
> F77=/opt/SUNWspro/bin/f77 \
> FC=/opt/SUNWspro/bin/f95 \
> CCAS=/opt/SUNWspro/bin/cc \
> CCASFLAGS="-xarch=v8plusa" >saf_configure_4.log 2>&1 &
> gmake all install >build_log2 2>&1
>
> and it had problems with vt_tracefilter.cc, where it stopped at:
>
> "The compiler currently does not permit non-POD variables ("name") in
> OpenMP regions".
>
> This was because of -DVT_OMP on the compilation line. With that
> removed, the compiler no longer included omp.h (the ancient OpenMP
> header file that came with Forte 7). I compiled that one file manually,
> restarted the gmake, and hit:
>
> /opt/SUNWspro/bin/CC -DHAVE_CONFIG_H -I. -I../.. \
> -I../../extlib/otf/otflib -I../../extlib/otf/otflib \
> -I../../vtlib/ -xopenmp -DVT_OMP -xarch=v8plusa -c \
> -o vtunify-vt_unify.o \
> `test -f 'vt_unify.cc' || echo './'`vt_unify.cc
> CC: Warning: Optimizer level changed from 0 to 3 to support parallelized
> code
> "vt_unify_stats.h", line 70: Error: m_iFuncStatSortFlags is not
> accessible from Statistics::FuncStat_struct::operator<(const
> Statistics::FuncStat_struct&) const.
> 1 Error(s) detected.
>
> This error didn't go away when -DVT_OMP was removed, nor when -xopenmp
> was also taken away, so the final score is: working Open MPI 1.4.1
> builds on Solaris = 0 for 2 attempts.
>
> Thanks,
>
> David Mathog
> mathog_at_[hidden]
> Manager, Sequence Analysis Facility, Biology Division, Caltech