
Subject: Re: [OMPI users] Program deadlocks, on simple send/recv loop
From: Jed Brown (jed_at_[hidden])
Date: 2009-12-03 12:42:32


On Thu, 3 Dec 2009 12:21:50 -0500, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:
>
> > The allocation statement is ok:
> > allocate(vec(vec_size,vec_per_proc*(size-1)))
> >
> > This allocates memory vec(32768, 2350)

It's easier to translate to C rather than trying to read Fortran
directly.

  #include <complex.h>

  #define M 2350   /* vec_per_proc * (size - 1) */
  #define N 32768  /* vec_size */
  double complex vec[M * N];
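Since Fortran stores arrays column-major, Fortran's vec(i, j) corresponds
to the flattened C element vec[(j - 1)*N + (i - 1)]. As a sketch (this
macro is mine, purely to spell out the mapping, not anything in the
original code):

  /* Fortran vec(i, j), both indices 1-based, as a flattened C access */
  #define VEC(i, j) vec[((j) - 1) * N + ((i) - 1)]

So the two receives below are VEC(1, 2301) and VEC(1, 2350) in this
notation.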

> This means that in the first iteration, you're calling:
>
> irank = 47
> ivec = 1
> vec_ind = (47 - 1) * 50 + 1 = 2301
> call MPI_RECV(vec(1, 2301), 32768, ...)

  MPI_Recv(&vec[2300 * N], N, ...);  /* i.e., Fortran vec(1, 2301) */

> And in the last iteration, you're calling:
>
> irank = 47
> ivec = 50
> vec_ind = (47 - 1) * 50 + 50 = 2350
> call MPI_RECV(vec(1, 2350), 32768, ...)

  MPI_Recv(&vec[2349 * N], N, ...);  /* i.e., Fortran vec(1, 2350) */
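In general, the receive for 1-based column vec_ind starts at the offset
below (my generalization of the two calls above, not code from the
thread):

  MPI_Recv(&vec[(vec_ind - 1) * N], N, ...);  /* one full column per call */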

> That doesn't seem right.

Each receive should land in one non-overlapping column (a C row) at a
time. The array is contiguous in memory, but this code isn't relying on
that property.
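
For completeness, a minimal sketch of what the receive side presumably
looks like in C. The loop bounds follow the quoted index arithmetic
(size - 1 = 47 ranks, vec_per_proc = 50), but the tag, communicator, and
datatype (MPI_C_DOUBLE_COMPLEX, the MPI-2.2 C analogue of Fortran's
MPI_DOUBLE_COMPLEX) are my assumptions, not code from the original
program:

  #include <complex.h>
  #include <mpi.h>

  #define M 2350   /* vec_per_proc * (size - 1) = 50 * 47 */
  #define N 32768  /* vec_size */

  /* Hypothetical receive loop on the root: one column per message;
     rank irank's ivec-th vector lands in column (irank - 1)*50 + ivec. */
  static void recv_all(double complex *vec)
  {
    for (int irank = 1; irank <= 47; irank++) {
      for (int ivec = 1; ivec <= 50; ivec++) {
        int vec_ind = (irank - 1) * 50 + ivec;  /* 1-based column */
        MPI_Recv(&vec[(vec_ind - 1) * N], N, MPI_C_DOUBLE_COMPLEX,
                 irank, 0 /* assumed tag */, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
      }
    }
  }

Each call receives into a disjoint N-element slice, so the receive
buffers never overlap.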

Jed