On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen <email@example.com> wrote:
> Actually I had much different results:
> gromacs-3.3.1, one node, dual-core dual-socket Opteron 2218, openmpi-1.2.7, pgi/7.2
For some reason the difference in minutes didn't come through, it seems, but I would guess that if it's a medium-to-large difference, it has its roots in PGI 7.2 vs. GCC rather than MPICH2 vs. OpenMPI. Though, to be fair, I find GCC vs. PGI (for C code) is often a toss-up: one may beat the other handily on one code, then lose just as badly on another.
> I think my install of mpich2 may be bad; I have never installed it before,
> only mpich1, OpenMPI, and LAM. So take my mpich2 numbers with salt. Lots of salt.
I think the biggest difference in performance across MPICH2 installs comes from differences in the 'channel' used. I tend to make sure that I use the 'nemesis' channel, which may or may not be the default these days. If not, though, most people would probably want it. I think it has issues with threading (or did ages ago?), but I seem to recall it being considerably faster than even the 'ssm' channel.
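The channel is picked when MPICH2 itself is configured, so if you want to rule that out, something like this at build time should do it (the prefix is just an example; check the README for your MPICH2 version for the exact device names):

    ./configure --with-device=ch3:nemesis --prefix=/opt/mpich2
    make && make install

Then rebuild your application against that install so it links the nemesis-based library.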
Sangamesh: My advice to you would be to recompile Gromacs and specify, in the Gromacs configure step, the same CFLAGS you used with MPICH2, e.g. "-O2 -m64", or whatever they were. If you do that, I bet the times between MPICH2 and OpenMPI will be pretty comparable for your benchmark case - especially when run on a single processor.
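For the autoconf-based Gromacs 3.3.x build, that would look something like the following (paths and flags are examples, and I'm assuming the mpicc from your Open MPI install is first on your PATH):

    export CFLAGS="-O2 -m64"
    ./configure --enable-mpi --prefix=$HOME/gromacs-3.3.1
    make mdrun && make install-mdrun

If I remember the 3.3 install notes right, the mdrun-only targets build just the parallel binary; building the whole package with the same flags works too. The point is simply that both MPI stacks get identical compilers and optimization flags, so the benchmark measures the MPI library and not the compiler.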