Does the difference persist if you run the single process using mpirun? In other words, does "mpirun -np 1 ./my_hybrid_app..." behave the same as "mpirun -np 2 ./..."?

There is a slight difference in the way procs start when run as singletons. It shouldn't make a difference here, but worth testing.
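
That is, compare the two startup modes directly with the same thread count, e.g.:

mpirun -np 1 ./my_hybrid_app <number of threads>
./my_hybrid_app <number of threads>

If those two give the same timings, the singleton startup path is probably not the issue.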

On Oct 24, 2011, at 12:37 AM, 吕慧伟 wrote:

Dear List,

I have a hybrid MPI/Pthreads program named "my_hybrid_app". The program is memory-intensive and takes advantage of multi-threading to improve memory throughput. I run "my_hybrid_app" on two machines that have the same hardware configuration but different OS and GCC versions. The problem is: when I run "my_hybrid_app" with one process, both machines behave the same, i.e. the more threads, the better the performance; however, when I run "my_hybrid_app" with two or more processes, the first machine still gains performance with more threads, while the second machine loses performance as threads are added.
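
(For illustration only: a hybrid MPI/Pthreads program of this kind usually has roughly the structure sketched below. This is not the actual code of "my_hybrid_app"; the MPI_THREAD_FUNNELED level, the array size and the memory-streaming kernel are assumptions made purely to show the shape of such a program.)

    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)          /* elements in the shared array (128 MiB of doubles) */

    static double *data;         /* array every thread streams over */
    static int nthreads;         /* taken from argv[1], as in the command lines below */

    /* Memory-bound kernel: each thread sweeps its slice of the array. */
    static void *sweep(void *arg)
    {
        long tid = (long)arg;
        long chunk = N / nthreads;
        long begin = tid * chunk;
        long end = (tid == nthreads - 1) ? N : begin + chunk;
        double sum = 0.0;
        for (long i = begin; i < end; i++)
            sum += data[i];
        data[begin] = sum;       /* keep the compiler from dropping the loop */
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* FUNNELED is enough if only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        nthreads = (argc > 1) ? atoi(argv[1]) : 1;
        if (nthreads < 1)
            nthreads = 1;

        data = malloc((size_t)N * sizeof(double));
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t *tids = malloc(nthreads * sizeof(pthread_t));
        double t0 = MPI_Wtime();
        for (long t = 0; t < nthreads; t++)
            pthread_create(&tids[t], NULL, sweep, (void *)t);
        for (long t = 0; t < nthreads; t++)
            pthread_join(tids[t], NULL);
        double t1 = MPI_Wtime();

        printf("rank %d: %d threads, %.3f s\n", rank, nthreads, t1 - t0);

        free(tids);
        free(data);
        MPI_Finalize();
        return 0;
    }

(Such a sketch would be built with something like "mpicc -std=gnu99 -pthread -O2 sketch.c -o my_hybrid_app" and run exactly as in the command lines under p.s. 1 below.)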

Since running "my_hybrid_app" with one process behaves correctly, I suspect there is a problem with how I link against the MPI library. Would somebody point me in the right direction? Thanks in advance.

Attached are the command lines used, my machine information and link information.
p.s. 1: Command lines
single process: ./my_hybrid_app <number of threads>
multiple processes: mpirun -np 2 ./my_hybrid_app <number of threads>

p.s. 2: Machine Information
The first machine is CentOS 5.3 with GCC 4.1.2:
Target: x86_64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj- --with-cpu=generic --host=x86_64-redhat-linux
Thread model: posix
gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
The second machine is SUSE Linux Enterprise Server 11 with GCC 4.3.4:
Target: x86_64-suse-linux
Configured with: ../configure --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,java,ada --enable-checking=release --with-gxx-include-dir=/usr/include/c++/4.3 --enable-ssp --disable-libssp --with-bugurl= --with-pkgversion='SUSE Linux' --disable-libgcj --disable-libmudflap --with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit --enable-libstdcxx-allocator=new --disable-libstdcxx-pch --enable-version-specific-runtime-libs --program-suffix=-4.3 --enable-linux-futex --without-system-libunwind --with-cpu=generic --build=x86_64-suse-linux
Thread model: posix
gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux)

p.s. 3: ldd Information
The first machine:
$ ldd my_hybrid_app
        => /lib64/ (0x000000358d400000)
        => /usr/local/openmpi/lib/ (0x00002af0d53a7000)
        => /usr/local/openmpi/lib/ (0x00002af0d564a000)
        => /usr/local/openmpi/lib/ (0x00002af0d5895000)
        => /lib64/ (0x000000358d000000)
        => /lib64/ (0x000000358f000000)
        => /lib64/ (0x000000359a600000)
        => /usr/lib64/ (0x00002af0d5b07000)
        => /lib64/ (0x000000358d800000)
        => /lib64/ (0x000000358cc00000)
        /lib64/ (0x000000358c800000)
        => /lib64/ (0x000000358dc00000)
The second machine:
$ ldd my_hybrid_app
        => (0x00007fff3eb5f000)
        => /root/opt/openmpi/lib/ (0x00007f68627a1000)
        => /lib64/ (0x00007f686254b000)
        => /root/opt/openmpi/lib/ (0x00007f68622fc000)
        => /root/opt/openmpi/lib/ (0x00007f68620a5000)
        => /lib64/ (0x00007f6861ea1000)
        => /lib64/ (0x00007f6861c89000)
        => /lib64/ (0x00007f6861a86000)
        => /usr/lib64/ (0x00007f686187d000)
        => /lib64/ (0x00007f6861660000)
        => /lib64/ (0x00007f6861302000)
        /lib64/ (0x00007f6862a58000)
        => /lib64/ (0x00007f68610f9000)
I installed openmpi-1.4.2 into a user directory, /root/opt/openmpi, and used "-L/root/opt/openmpi -Wl,-rpath,/root/opt/openmpi" when linking.
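
(For reference, whether the binary really resolves the Open MPI libraries from /root/opt/openmpi at run time, and which mpirun is picked up first on the PATH, can be checked on both machines with the usual commands, e.g.:

    ldd ./my_hybrid_app | grep openmpi
    which mpirun
    mpirun --version
)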
Huiwei Lv
Ph.D. student at the Institute of Computing Technology,
Beijing, China