Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] mpi problems,
From: Nehemiah Dacres (dacresni_at_[hidden])
Date: 2011-04-07 12:20:22


Oh, thank you! That might work.
On Thu, Apr 7, 2011 at 5:31 AM, Terry Dontje <terry.dontje_at_[hidden]> wrote:

> Nehemiah,
> I took a look at an old version of an HPL Makefile I have. I think what you
> really want to do is not set the MP* variables to anything and, near the end
> of the Makefile, set CC and LINKER to mpicc. You may also need to change the
> CFLAGS and LINKERFLAGS variables to match the compiler/arch you are
> using.
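>
> A minimal sketch of what that could look like in HPL's Make.<arch> (a
> hypothetical fragment; MPdir/MPinc/MPlib, CC, and LINKER are the stock HPL
> makefile variable names):
>
>   # leave the MPI location variables empty; the wrapper supplies them
>   MPdir  =
>   MPinc  =
>   MPlib  =
>   # near the end of the file, point both compile and link at the wrapper
>   CC     = mpicc
>   LINKER = mpicc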
>
> --td
>
> On 04/07/2011 06:20 AM, Terry Dontje wrote:
>
> On 04/06/2011 03:38 PM, Nehemiah Dacres wrote:
>
> I am also trying to get netlib's HPL to run via Sun Cluster Tools, so I am
> trying to compile it and am having trouble. Which is the proper MPI library
> to give?
> Naturally, this isn't going to work:
>
> MPdir = /opt/SUNWhpc/HPC8.2.1c/sun/
> MPinc = -I$(MPdir)/include
> MPlib = $(MPdir)/lib/libmpi.a
>
> Is there a reason you are trying to link with a static libmpi? You really
> want to link with libmpi.so. It also seems like whatever Makefile you are
> using is not using mpicc; is that true? The reason that is important is that
> mpicc would pick up the right libs you need. Which brings me to Ralph's
> comment: if you really want to go around the mpicc way of compiling, use
> mpicc --showme, copy the compile line shown in that command's output, and
> insert your files accordingly.
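>
> For example, a sketch (the real compile line is whatever --showme prints on
> your system; the full output for this install appears further down in the
> thread, so the flags here are abbreviated):
>
>   $ mpicc --showme
>   cc -I/opt/SUNWhpc/HPC8.2.1c/sun/include/64 ... -lmpi -lopen-rte -lopen-pal ...
>   $ cc mpi_stress.c -o mpi_stress -I/opt/SUNWhpc/HPC8.2.1c/sun/include/64 ... -lmpi -lopen-rte -lopen-pal ...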
>
> --td
>
>
> Because that doesn't exist:
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libotf.a
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.fmpi.a
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.omp.a
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.a
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.mpi.a
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.ompi.a
>
> That is what I have from listing *.a in the lib directory. None of those are
> equivalent, because they are all linked with VampirTrace, if I am reading
> the names right. I've already tried putting
> /opt/SUNWhpc-O/HPC8.2.1c/sun/lib/libvt.mpi.a for this, and it didn't work,
> giving errors like
>
> On Wed, Apr 6, 2011 at 12:42 PM, Terry Dontje <terry.dontje_at_[hidden]> wrote:
>
>> Something looks fishy about your numbers. The first two sets of numbers
>> look the same, and the last set does look better for the most part. Your
>> mpirun command line looks weird to me with the "-mca
>> orte_base_help_aggregate btl,openib,self,"; did something get chopped off
>> in the text copy? You should have had "-mca btl openib,self". Can you
>> do a run with "-mca btl tcp,self"? It should be slower.
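>>
>> For example (illustrative command lines; "self" has to be listed along
>> with whichever transport you force):
>>
>>   # force the InfiniBand transport
>>   mpirun -mca btl openib,self -machinefile list ./mpi_stress
>>   # force TCP for comparison; this should be slower
>>   mpirun -mca btl tcp,self -machinefile list ./mpi_stress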
>>
>> I really wouldn't have expected another compiler over IB to perform that
>> dramatically worse.
>>
>> --td
>>
>>
>>
>> On 04/06/2011 12:40 PM, Nehemiah Dacres wrote:
>>
>> Also, I'm not sure if I'm reading the results right. According to the
>> last run, did using the Sun compilers (Update 1) result in higher
>> performance with Sun Cluster Tools?
>>
>> On Wed, Apr 6, 2011 at 11:38 AM, Nehemiah Dacres <dacresni_at_[hidden]> wrote:
>>
>>> Here are some tests I did. I hope this isn't an abuse of the list; please
>>> tell me if it is. Thanks to all those who helped me.
>>>
>>> This goes to show that the Sun MPI works with programs not compiled with
>>> Sun's compilers.
>>> This first test was run as a base case to see if MPI works; the second
>>> run is to see the speedup that using OpenIB provides.
>>> [jian_at_therock ~]$ mpirun -machinefile list
>>> /opt/iba/src/mpi_apps/mpi_stress/mpi_stress
>>> Start mpi_stress at Wed Apr 6 10:56:29 2011
>>>
>>> Size (bytes)   TxMessages   TxMillionBytes/s   TxMessages/s
>>>           32        10000               2.77       86485.67
>>>           64        10000               5.76       90049.42
>>>          128        10000              11.00       85923.85
>>>          256        10000              18.78       73344.43
>>>          512        10000              34.47       67331.98
>>>         1024        10000              34.81       33998.09
>>>         2048        10000              17.31        8454.27
>>>         4096        10000              18.34        4476.61
>>>         8192        10000              25.43        3104.28
>>>        16384        10000              15.56         949.50
>>>        32768        10000              13.95         425.74
>>>        65536        10000               9.88         150.79
>>>       131072         8192              11.05          84.31
>>>       262144         4096              13.12          50.04
>>>       524288         2048              16.54          31.55
>>>      1048576         1024              19.92          18.99
>>>      2097152          512              22.54          10.75
>>>      4194304          256              25.46           6.07
>>>
>>> Iteration 0 : errors = 0, total = 0 (495 secs, Wed Apr 6 11:04:44 2011)
>>> After 1 iteration(s), 8 mins and 15 secs, total errors = 0
>>>
>>> Here is the InfiniBand run:
>>>
>>> [jian_at_therock ~]$ mpirun -mca orte_base_help_aggregate btl,openib,self,
>>> -machinefile list /opt/iba/src/mpi_apps/mpi_stress/mpi_stress
>>> Start mpi_stress at Wed Apr 6 11:07:06 2011
>>>
>>> Size (bytes)   TxMessages   TxMillionBytes/s   TxMessages/s
>>>           32        10000               2.72       84907.69
>>>           64        10000               5.83       91097.94
>>>          128        10000              10.75       83959.63
>>>          256        10000              18.53       72384.48
>>>          512        10000              34.96       68285.00
>>>         1024        10000              11.40       11133.10
>>>         2048        10000              20.88       10196.34
>>>         4096        10000              10.13        2472.13
>>>         8192        10000              19.32        2358.25
>>>        16384        10000              14.58         890.10
>>>        32768        10000              15.85         483.61
>>>        65536        10000               9.04         137.95
>>>       131072         8192              10.90          83.12
>>>       262144         4096              13.57          51.76
>>>       524288         2048              16.82          32.08
>>>      1048576         1024              19.10          18.21
>>>      2097152          512              22.13          10.55
>>>      4194304          256              21.66           5.16
>>>
>>> Iteration 0 : errors = 0, total = 0 (511 secs, Wed Apr 6 11:15:37 2011)
>>> After 1 iteration(s), 8 mins and 31 secs, total errors = 0
>>> Compiled with the Sun compilers, I think:
>>> [jian_at_therock ~]$ mpirun -mca orte_base_help_aggregate btl,openib,self,
>>> -machinefile list sunMpiStress
>>> Start mpi_stress at Wed Apr 6 11:23:18 2011
>>>
>>> Size (bytes)   TxMessages   TxMillionBytes/s   TxMessages/s
>>>           32        10000               2.60       81159.60
>>>           64        10000               5.19       81016.95
>>>          128        10000              10.23       79953.34
>>>          256        10000              16.74       65406.52
>>>          512        10000              23.71       46304.92
>>>         1024        10000              54.62       53340.73
>>>         2048        10000              45.75       22340.58
>>>         4096        10000              29.32        7158.87
>>>         8192        10000              28.61        3492.77
>>>        16384        10000             184.03       11232.26
>>>        32768        10000             215.69        6582.21
>>>        65536        10000             229.88        3507.64
>>>       131072         8192             231.64        1767.25
>>>       262144         4096             220.73         842.00
>>>       524288         2048             121.61         231.95
>>>      1048576         1024              66.54          63.46
>>>      2097152          512              44.20          21.08
>>>      4194304          256              45.17          10.77
>>>
>>> Iteration 0 : errors = 0, total = 0 (93 secs, Wed Apr 6 11:24:52 2011)
>>> After 1 iteration(s), 1 mins and 33 secs, total errors = 0
>>>
>>> Sanity check: was sunMpiStress compiled using the Sun compilers or Oracle
>>> compilers?
>>> [jian_at_therock ~]$ which mpirun
>>> /opt/SUNWhpc/HPC8.2.1c/sun/bin/mpirun
>>> [jian_at_therock ~]$ ldd sunMpiStress
>>> libmpi.so.0 => /opt/SUNWhpc/HPC8.2.1c/sun/lib/lib64/libmpi.so.0 (0x00002b5d2c6c3000)
>>> libopen-rte.so.0 => /opt/SUNWhpc/HPC8.2.1c/sun/lib/lib64/libopen-rte.so.0 (0x00002b5d2c8c1000)
>>> libopen-pal.so.0 => /opt/SUNWhpc/HPC8.2.1c/sun/lib/lib64/libopen-pal.so.0 (0x00002b5d2ca19000)
>>> libnsl.so.1 => /lib64/libnsl.so.1 (0x0000003361400000)
>>> librt.so.1 => /lib64/librt.so.1 (0x000000335f400000)
>>> libm.so.6 => /lib64/libm.so.6 (0x000000335e400000)
>>> libdl.so.2 => /lib64/libdl.so.2 (0x000000335e800000)
>>> libutil.so.1 => /lib64/libutil.so.1 (0x000000336ba00000)
>>> libpthread.so.0 => /lib64/libpthread.so.0 (0x000000335ec00000)
>>> libc.so.6 => /lib64/libc.so.6 (0x000000335e000000)
>>> /lib64/ld-linux-x86-64.so.2 (0x000000335dc00000)
>>> [jian_at_therock ~]$ which mpicc
>>> /opt/SUNWhpc/HPC8.2.1c/sun/bin/mpicc
>>> [jian_at_therock ~]$ mpicc /opt/iba/src/mpi_apps/mpi_stress/mpi_stress.c -o sunMpiStress --showme
>>> cc /opt/iba/src/mpi_apps/mpi_stress/mpi_stress.c -o sunMpiStress -I/opt/SUNWhpc/HPC8.2.1c/sun/include/64 -I/opt/SUNWhpc/HPC8.2.1c/sun/include/64/openmpi -R/opt/mx/lib/lib64 -R/opt/SUNWhpc/HPC8.2.1c/sun/lib/lib64 -L/opt/SUNWhpc/HPC8.2.1c/sun/lib/lib64 -lmpi -lopen-rte -lopen-pal -lnsl -lrt -lm -ldl -lutil -lpthread
>>> [jian_at_therock ~]$ which cc
>>> /opt/sun/sunstudio12.1/bin/cc
>>>
>>> Looks like it!
>>>
>>>
>>>
>>
>>
>> --
>> Nehemiah I. Dacres
>> System Administrator
>> Advanced Technology Group Saint Louis University
>>
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>>
>> --
>> Terry D. Dontje | Principal Software Engineer
>> Developer Tools Engineering | +1.781.442.2631
>> Oracle - Performance Technologies
>> 95 Network Drive, Burlington, MA 01803
>> Email terry.dontje_at_[hidden]
>>
>>
>>
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
>
>
> --
> Nehemiah I. Dacres
> System Administrator
> Advanced Technology Group Saint Louis University
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>

-- 
Nehemiah I. Dacres
System Administrator
Advanced Technology Group Saint Louis University