Okay cool, mine already breaks with P=2, so I'll try this soon. Thanks
for the impatient-idiot's-guide :)
On 18 May 2011 14:15, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> If you're only running with a few MPI processes, you might be able to get away with:
> mpirun -np 4 valgrind ./my_mpi_application
> If you run any more than that, the output gets too jumbled, and you should send each process's valgrind output to a different file with the --log-file option (IIRC).
> I personally like these valgrind options:
> valgrind --num-callers=50 --db-attach=yes --tool=memcheck --leak-check=yes --show-reachable=yes
> On May 18, 2011, at 8:49 AM, Paul van der Walt wrote:
>> Hi Jeff,
>> Thanks for the response.
>> On 18 May 2011 13:30, Jeff Squyres <jsquyres_at_[hidden]> wrote:
>>> *Usually* when we see segv's in calls to alloc, it means that there was previously some kind of memory bug, such as an array overflow or something like that (i.e., something that stomped on the memory allocation tables, causing the next alloc to fail).
>>> Have you tried running your code through a memory-checking debugger?
>> I sort of tried with valgrind, but I'm not really sure how to
>> interpret the output (I'm not such a C wizard). I'll have another look
>> a little later and report back. I suppose I should RTFM on how to
>> properly invoke valgrind so it makes sense with an MPI program?