
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] issue with addresses
From: Priyesh Srivastava (priyesh_at_[hidden])
Date: 2012-07-23 20:02:56


Hello Hristo,

Thank you for your reply. I was able to follow parts of your response, but I
still have some doubts because of my limited knowledge of how memory is
allocated.

I have created a small sample program, whose output will help me pinpoint my
question.
The program is:

      program test
      include 'mpif.h'
      integer a, b, c(10), ierr, id, datatype, size(3), type(3), i
! status must be an array of MPI_STATUS_SIZE integers for MPI_RECV
      integer status(MPI_STATUS_SIZE)
      integer(kind=MPI_ADDRESS_KIND) add(3)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)

      call MPI_GET_ADDRESS(a, add(1), ierr)
      write(*,*) 'address of a, id ', add(1), id
      call MPI_GET_ADDRESS(b, add(2), ierr)
      write(*,*) 'address of b, id ', add(2), id
      call MPI_GET_ADDRESS(c, add(3), ierr)
      write(*,*) 'address of c, id ', add(3), id

      add(3) = add(3) - add(1)
      add(2) = add(2) - add(1)
      add(1) = add(1) - add(1)

      size(1) = 1
      size(2) = 1
      size(3) = 10
      type(1) = MPI_INTEGER
      type(2) = MPI_INTEGER
      type(3) = MPI_INTEGER
      call MPI_TYPE_CREATE_STRUCT(3, size, add, type, datatype, ierr)
      call MPI_TYPE_COMMIT(datatype, ierr)
      write(*,*) 'datatype, id ', datatype, id
      write(*,*) 'relative add1 ', add(1), ' id ', id
      write(*,*) 'relative add2 ', add(2), ' id ', id
      write(*,*) 'relative add3 ', add(3), ' id ', id

      if (id == 0) then
         a = 1000
         b = 2000
         do i = 1, 10
            c(i) = i
         end do
         c(10) = 700
         c(1) = 600
      end if

      if (id == 0) then
         call MPI_SEND(a, 1, datatype, 1, 8, MPI_COMM_WORLD, ierr)
      end if
      if (id == 1) then
         call MPI_RECV(a, 1, datatype, 0, 8, MPI_COMM_WORLD, status, ierr)
         write(*,*) 'id =', id
         write(*,*) 'a =', a
         write(*,*) 'b =', b
         do i = 1, 10
            write(*,*) 'c(', i, ') =', c(i)
         end do
      end if
      call MPI_FINALIZE(ierr)
      end
The output is:

 address of a, id   140736841025492   0
 address of b, id   140736841025496   0
 address of c, id   6994640   0
 datatype, id   58   0
 relative add1   0   id   0
 relative add2   4   id   0
 relative add3   -140736834030852   id   0
 address of a, id   140736078234324   1
 address of b, id   140736078234328   1
 address of c, id   6994640   1
 datatype, id   58   1
 relative add1   0   id   1
 relative add2   4   id   1
 relative add3   -140736071239684   id   1
 id = 1
 a = 1000
 b = 2000
 c( 1) = 600
 c( 2) = 2
 c( 3) = 3
 c( 4) = 4
 c( 5) = 5
 c( 6) = 6
 c( 7) = 7
 c( 8) = 8
 c( 9) = 9
 c(10) = 700

As I mentioned, the smaller address (of array c) is the same on both
processes, while the larger ones (of 'a' and 'b') are different. This is
explained by what you described.

So the relative address of the array 'c' with respect to 'a' is different on
the two processes. The way I am passing the data should not work
(specifically the passing of array 'c'), and yet everything is sent
correctly from process 0 to 1. I have noticed that this way of sending
non-contiguous data is common, but I am confused about why it works.

thanks
priyesh
On Mon, Jul 23, 2012 at 12:00 PM, <users-request_at_[hidden]> wrote:

> Send users mailing list submissions to
> users_at_[hidden]
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> or, via email, send a message with subject or body 'help' to
> users-request_at_[hidden]
>
> You can reach the person managing the list at
> users-owner_at_[hidden]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
> 1. Efficient polling for both incoming messages and request
> completion (Geoffrey Irving)
> 2. checkpoint problem (CHEN Song)
> 3. Re: checkpoint problem (Reuti)
> 4. Re: Re :Re: OpenMP and OpenMPI Issue (Paul Kapinos)
> 5. Re: issue with addresses (Iliev, Hristo)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 22 Jul 2012 15:01:09 -0700
> From: Geoffrey Irving <irving_at_[hidden]>
> Subject: [OMPI users] Efficient polling for both incoming messages and
> request completion
> To: users <users_at_[hidden]>
> Message-ID:
> <CAJ1ofpdNxSVD=_
> FFN1j3kN9KTzjgJehB0XJF3EyL76ajwvDN2Q_at_[hidden]>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hello,
>
> Is it possible to efficiently poll for both incoming messages and
> request completion using only one thread? As far as I know, busy
> waiting with alternate MPI_Iprobe and MPI_Testsome calls is the only
> way to do this. Is that approach dangerous to do performance-wise?
>
> Background: my application is memory constrained, so when requests
> complete I may suddenly be able to schedule new computation. At the
> same time, I need to be responding to a variety of asynchronous
> messages from unknown processors with unknown message sizes, which as
> far as I know I can't turn into a request to poll on.
>
> Thanks,
> Geoffrey
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 23 Jul 2012 16:02:03 +0800
> From: CHEN Song <chensong_at_[hidden]>
> Subject: [OMPI users] checkpoint problem
> To: "Open MPI Users" <users_at_[hidden]>
> Message-ID: <4b55b3e5fc79bad3009c21962e84892c_at_[hidden]>
> Content-Type: text/plain; charset="gb2312"
>
> Hi all,
> How can I create ckpt files regularly? I mean, do a checkpoint every 100
> seconds. Is there an option to do this? Or do I have to write a script
> myself?
>
> THANKS,
> ---------------
> CHEN Song
> R&D Department
> National Supercomputer Center in Tianjin
> Binhai New Area, Tianjin, China
> -------------- next part --------------
> HTML attachment scrubbed and removed
>
> ------------------------------
>
> Message: 3
> Date: Mon, 23 Jul 2012 12:15:49 +0200
> From: Reuti <reuti_at_[hidden]>
> Subject: Re: [OMPI users] checkpoint problem
> To: CHEN Song <chensong_at_[hidden]>, Open MPI Users <users_at_[hidden]>
> Message-ID:
> <623C01F7-8D8C-4DCF-AA47-2C3EDED2811F_at_[hidden]>
> Content-Type: text/plain; charset=GB2312
>
> On 23.07.2012 at 10:02, CHEN Song wrote:
>
> > How can I create ckpt files regularly? I mean, do checkpoint every 100
> seconds. Is there any options to do this? Or I have to write a script
> myself?
>
> Yes, or use a queuing system which supports creation of a checkpoint in
> fixed time intervals.
>
> -- Reuti
>
>
> > THANKS,
> >
> >
> >
> > ---------------
> > CHEN Song
> > R&D Department
> > National Supercomputer Center in Tianjin
> > Binhai New Area, Tianjin, China
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 23 Jul 2012 12:26:24 +0200
> From: Paul Kapinos <kapinos_at_[hidden]>
> Subject: Re: [OMPI users] Re :Re: OpenMP and OpenMPI Issue
> To: Open MPI Users <users_at_[hidden]>
> Message-ID: <500D26D0.4070704_at_[hidden]>
> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>
> Jack,
> note that support for MPI_THREAD_MULTIPLE is available in [newer] versions
> of Open MPI, but disabled by default. You have to enable it when
> configuring; in 1.6:
>
> --enable-mpi-thread-multiple
> Enable MPI_THREAD_MULTIPLE support (default: disabled)
>
> You may check the available threading support level by using the attached
> program.
>
>
> On 07/20/12 19:33, Jack Galloway wrote:
> > This is an old thread, and I'm curious if there is support now for this?
> I have
> > a large code that I'm running, a hybrid MPI/OpenMP code, that is having
> trouble
> > over our infiniband network. I'm running a fairly large problem (uses
> about
> > 18GB), and part way in, I get the following errors:
>
> You say "big footprint"? I hear a bell ringing...
> http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
>
>
>
>
>
>
>
>
> --
> Dipl.-Inform. Paul Kapinos - High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23, D 52074 Aachen (Germany)
> Tel: +49 241/80-24915
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: mpi_threading_support.f
> Type: text/x-fortran
> Size: 411 bytes
> Desc: not available
> URL: <
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment.bin
> >
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: smime.p7s
> Type: application/pkcs7-signature
> Size: 4471 bytes
> Desc: S/MIME Cryptographic Signature
> URL: <
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/1f30ae61/attachment-0001.bin
> >
>
> ------------------------------
>
> Message: 5
> Date: Mon, 23 Jul 2012 11:18:32 +0000
> From: "Iliev, Hristo" <iliev_at_[hidden]>
> Subject: Re: [OMPI users] issue with addresses
> To: Open MPI Users <users_at_[hidden]>
> Message-ID:
> <
> FDAA43115FAF4A4F88865097FC2C3CC9030E21BF_at_[hidden]>
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
>
> Placement of data in memory is highly implementation dependent. I assume
> you are running on Linux. This OS' libc (glibc) provides two different
> methods for dynamic allocation of memory: heap allocation and anonymous
> mappings. Heap allocation is used for small data, up to MMAP_THRESHOLD
> bytes in length (128 KiB by default, controllable by calls to mallopt(3)).
> Such allocations end up at predictable memory addresses as long as all
> processes in your MPI job allocate memory following exactly the same
> pattern. For larger memory blocks malloc() uses private anonymous mappings,
> which might end up at different locations in the virtual address space
> depending on how it is being used.
>
> What does this have to do with your Fortran code? Fortran runtimes use
> malloc() behind the scenes to allocate automatic heap arrays as well as
> ALLOCATABLE ones. Small arrays are usually allocated on the stack and will
> mostly have the same addresses unless some stack placement randomisation is
> in effect.
>
> Hope that helps.
>
> Kind regards,
> Hristo
>
> > From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
> Behalf Of Priyesh Srivastava
> > Sent: Saturday, July 21, 2012 10:00 PM
> > To: users_at_[hidden]
> > Subject: [OMPI users] issue with addresses
> >
> > Hello,
> >
> > I am working on an MPI program. I have been printing the addresses of
> > different variables and arrays using the MPI_GET_ADDRESS command. What I
> > have noticed is that all the processors give the same address for a
> > particular variable as long as the address is less than 2 GB. When the
> > address of a variable/array is more than 2 GB, different processors give
> > different addresses for the same variable. (I am working on a 64-bit
> > system and am using the new MPI functions and MPI_ADDRESS_KIND integers
> > for getting the addresses.)
> >
> > My question is: should all the processors give the same address for the
> > same variables? If so, why is this not happening for variables with
> > larger addresses?
> >
> >
> > thanks
> > priyesh
>
> --
> Hristo Iliev, Ph.D. -- High Performance Computing
> RWTH Aachen University, Center for Computing and Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23, D 52074 Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: smime.p7s
> Type: application/pkcs7-signature
> Size: 5494 bytes
> Desc: not available
> URL: <
> http://www.open-mpi.org/MailArchives/users/attachments/20120723/abceb9c3/attachment.bin
> >
>
> ------------------------------
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> End of users Digest, Vol 2304, Issue 1
> **************************************
>