Hi,
 
I am trying to run an MPICH2 application with 2 processes on a dual-processor x64 Linux box (SuSE 10), and I am getting the following error message:
 
------------------------------
Fatal error in MPI_Waitall: Other MPI error, error stack:
MPI_Waitall(242)..........................: MPI_Waitall(count=2, req_array=0x5bbda70, status_array=0x7fff461d9ce0) failed
MPIDI_CH3_Progress_wait(212)..............: an error occurred while handling an event returned by MPIDU_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(413):
MPIDU_Socki_handle_read(633)..............: connection failure (set=0,sock=1,errno=104:Connection reset by peer)
rank 0 in job 2  Demeter_18432   caused collective abort of all ranks
  exit status of rank 0: killed by signal 11
------------------------------
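For context, "killed by signal 11" means rank 0 segfaulted, and the failing call is MPI_Waitall over two requests. My code follows the usual nonblocking exchange pattern; a minimal sketch of that pattern (hypothetical buffer sizes and tags, not my actual code) is:

```c
/* Minimal sketch of the MPI_Waitall pattern implicated in the trace.
   Buffer sizes and tags here are hypothetical; the point is that the
   count, request array, and status array must all agree in length, and
   each receive buffer must be at least as large as the incoming
   message -- a mismatch there can segfault (signal 11) as above. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, peer;
    double sendbuf[1024] = {0};
    double recvbuf[1024];      /* must be able to hold the full message */
    MPI_Request req[2];        /* one slot per pending request */
    MPI_Status  stat[2];       /* same length as req */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;           /* assumes exactly 2 ranks in the job */

    MPI_Irecv(recvbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(sendbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, stat); /* count == 2 matches both array lengths */

    MPI_Finalize();
    return 0;
}
```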
 
The "cpi" example that comes with MPICH2 executes correctly. I am using MPICH2 1.0.5p2, which I compiled from source.
 
Does anyone know what the problem is?
 
cheers
steve
