
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI error in a loop
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2013-07-30 10:53:39


It sounds like you have some kind of memory error in your application; you should run your code through a memory-checking debugger, such as valgrind.
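For example, assuming your executable is named ./my_app and you launch 4 processes (adjust both to your setup), you can run every MPI process under valgrind with:

  mpirun -np 4 valgrind --leak-check=full --track-origins=yes ./my_app

Valgrind will then report invalid reads/writes (e.g., out-of-bounds array accesses) separately for each rank, which is typically how this kind of segfault shows up.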

On Jul 24, 2013, at 2:44 AM, Zhubq <zhubenqiang_at_[hidden]> wrote:

>
>>
>> Hi all,
>>
>> I ran into a problem when calling MPI subroutines in a loop. For example, I have Fortran code that randomly
>> selects 10 points in a 2D domain and sets the values of points near those 10 points to -10:
>>
>> real A( (100*rank+1):(100*rank+100),100 )
>> real inmax(2),outmax(2)
>> integer maxlocation(2),maxrank
>>
>>
>> call random_number(A)
>> maxlocation=maxloc(A); !!! find the coordinates of the local maximum;
>> inmax(1)=maxval(A); !!! get the local maximum value
>> inmax(2)=myrank; !!!! put the process rank
>> do i=1, 10
>>
>> call MPI_allreduce(inmax,outmax,1,mpi_2real, mpi_maxloc,MPI_comm_world,error) !!!get the global maximum and the corresponding rank
>> maxrank=outmax(2)
>> call MPI_Bcast(maxlocation,2,mpi_integer,maxrank,mpi_comm_world,error);
>> ...
>> set points within a distance of 10 from maxlocation to -10
>> ....
>> enddo
>>
>>
>> The problem is that at runtime I get an error such as "segmentation fault".
>> But if I move the code inside the loop into a subroutine and write it as
>> do i=1,10
>> call subroutine
>>
>> enddo
>>
>> there will be no error.
>>
>> Another issue is that MPI_allreduce does not seem to be as efficient as combining MPI_reduce and MPI_Bcast to achieve the same result.
>>
>>
>> Ben
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
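
For reference, here is a minimal, self-contained sketch of the Allreduce/MAXLOC-plus-Bcast pattern described above. The program name, the 100x100 local block, and the recomputation of the local maximum inside the loop are illustrative assumptions, not the original code:

  program maxloc_sketch
    use mpi
    implicit none
    integer :: myrank, nprocs, ierr, i, maxrank
    integer :: maxlocation(2)
    real    :: inmax(2), outmax(2)
    real    :: A(100,100)               ! each rank owns its own 100x100 block

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    call random_number(A)

    do i = 1, 10
       maxlocation = maxloc(A)          ! local indices of the local maximum
       inmax(1)    = maxval(A)          ! local maximum value
       inmax(2)    = real(myrank)       ! rank carried alongside the value

       ! one (value, rank) pair per process, hence count = 1 with MPI_2REAL
       call MPI_Allreduce(inmax, outmax, 1, MPI_2REAL, MPI_MAXLOC, &
                          MPI_COMM_WORLD, ierr)
       maxrank = int(outmax(2))

       ! the winning rank broadcasts the location of its maximum
       call MPI_Bcast(maxlocation, 2, MPI_INTEGER, maxrank, MPI_COMM_WORLD, ierr)

       ! ... set points within distance 10 of maxlocation to -10 ...
    end do

    call MPI_Finalize(ierr)
  end program maxloc_sketch

Note that with MPI_2REAL and MPI_MAXLOC the count argument is 1, i.e. one (value, rank) pair per process.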

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/