On Oct 23, 2007, at 10:58 AM, Patrick Geoffray wrote:
> Bogdan Costescu wrote:
>> I made some progress: if I configure with "--without-memory-manager"
>> (along with all the other options that I mentioned before), then the
>> segmentation fault no longer occurs.
>> This was inspired by the fact that the segmentation fault occurred in
>> ptmalloc2. I have previously tried to remove the MX support without
>> any effect; with ptmalloc2 out of the picture I have had test runs
>> over MX and TCP without problems.
> We have had portability problems using ptmalloc2 in MPICH-GM,
> related to threads. In MX, we chose to use dlmalloc instead. It is
> not as optimized and its thread-safety is coarser-grained, but it is
> much more portable.
> Disabling the memory manager in OpenMPI is not a bad thing for MX, as
> its own dlmalloc-based registration cache will operate transparently
> with MX_RCACHE=1 (default).
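[Editor's sketch of the build described above. The install prefix and MX location are assumptions; MX_RCACHE=1 is the default, shown explicitly only for clarity.]

```shell
# Build Open MPI without its internal (ptmalloc2-based) memory manager,
# letting MX's own dlmalloc-based registration cache do the work.
# /opt/mx and /opt/openmpi are placeholder paths.
./configure --without-memory-manager --with-mx=/opt/mx --prefix=/opt/openmpi
make -j4 && make install

# MX's registration cache is enabled by default; set it explicitly if in doubt.
export MX_RCACHE=1
```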
If you're not packaging Open MPI with MX support, I'd configure Open
MPI with the extra parameters:
This minimizes the chance of anything getting in the way of MX's
memory hooks. It causes libmpi.so to depend on libmyriexpress.so,
which is both a good and a bad thing.
Good because normally the malloc hooks in libmyriexpress aren't "seen"
when we dlopen the OMPI MX plugins to pull in libmyriexpress, but with
this configuration they would be. Bad because libmpi.so now depends
directly on libmyriexpress, so packaging for multiple machines could be
more difficult.
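[Editor's note: one way to check which situation you are in is to inspect libmpi.so's dynamic dependencies. A sketch, with placeholder install paths:]

```shell
# With MX support loaded as a dlopen'ed plugin, libmpi.so itself does not
# list libmyriexpress among its dependencies (expect no output here):
ldd /opt/openmpi/lib/libmpi.so | grep myriexpress

# With MX linked directly into libmpi.so, the dependency shows up, e.g.:
#   libmyriexpress.so => /opt/mx/lib/libmyriexpress.so (0x...)
ldd /opt/openmpi/lib/libmpi.so
```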