Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] problems on parallel writing
From: w k (thuwk99_at_[hidden])
Date: 2010-02-25 21:42:52


Hi Jody,

I tried your suggestion, but it still failed. Attached is the modified code.
If your machine has a Fortran compiler as well, you can try it.

BTW, how many processors did you use for testing your C code?

Thanks,
Kan

On Thu, Feb 25, 2010 at 3:35 AM, jody <jody.xha_at_[hidden]> wrote:

> Hi
> Just wanted to let you know:
>
> I translated your program to C and ran it, and it crashed at
> MPI_FILE_SET_VIEW in a similar way to yours.
> Then I added an if-clause to prevent calling MPI_FILE_WRITE with the
> undefined value:
> if (myid == 0) {
>     MPI_File_write(fh, temp, count, MPI_DOUBLE, &status);
> }
> After this it ran without crash.
> However, the output is not what you expected:
> The number 2122010.0 was not there, probably overwritten by the
> MPI_FILE_WRITE_ALL: with disp=0 the file view started at byte 0, so rank
> 0's first data value landed on top of the header.
> But this was fixed by replacing the line
> disp=0
> by
> disp=8
> and removing the
> if (single_no .gt. 0) map = map + 1
> statement.
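>
> (A side note: instead of hardcoding disp=8, the displacement could be taken
> from MPI itself; just a sketch, reusing ierr and disp from the program
> below, with type_size as a new local variable:
>
> integer :: type_size
>
> ! size in bytes of one MPI_REAL8 element, i.e. the header record
> call MPI_TYPE_SIZE(MPI_REAL8, type_size, ierr)
> disp = type_size
>
> That way the offset stays correct even if the header type changes.)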
>
> So here's what it all looks like:
>
> ===========================================================================================================
> program test_MPI_write_adv2
>
>
> !-- Template for any mpi program
>
> implicit none
>
> !--Include the mpi header file
> include 'mpif.h' ! --> Required statement
>
> !--Declare all variables and arrays.
> integer :: fh, ierr, myid, numprocs, itag, etype, filetype, info
> integer :: status(MPI_STATUS_SIZE)
> integer :: irc, ip
> integer(kind=mpi_offset_kind) :: offset, disp
> integer :: i, j, k
>
> integer :: num
>
> character(len=64) :: filename
>
> real(8), pointer :: q(:), temp(:)
> integer, pointer :: map(:)
> integer :: single_no, count
>
>
> !--Initialize MPI
> call MPI_INIT( ierr ) ! --> Required statement
>
> !--Who am I? --- get my rank=myid
> call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
>
> !--How many processes in the global group?
> call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
>
> if ( myid == 0 ) then
> single_no = 4
> elseif ( myid == 1 ) then
> single_no = 2
> elseif ( myid == 2 ) then
> single_no = 2
> elseif ( myid == 3 ) then
> single_no = 3
> else
> single_no = 0
> end if
>
> if (single_no .gt. 0) allocate(map(single_no))
>
> if ( myid == 0 ) then
> map = (/ 0, 2, 5, 6 /)
> elseif ( myid == 1 ) then
> map = (/ 1, 4 /)
> elseif ( myid == 2 ) then
> map = (/ 3, 9 /)
> elseif ( myid == 3 ) then
> map = (/ 7, 8, 10 /)
> end if
>
> if (single_no .gt. 0) allocate(q(single_no))
>
> if (single_no .gt. 0) then
> do i = 1,single_no
> q(i) = dble(myid+1)*100.0d0 + dble(map(i)+1)
> end do
> end if
>
>
> if ( myid == 0 ) then
> count = 1
> else
> count = 0
> end if
>
> if (count .gt. 0) then
> allocate(temp(count))
> temp(1) = 2122010.0d0
> end if
>
> write(filename,'(a)') 'test_write.bin'
>
> call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, &
>      MPI_MODE_RDWR+MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
>
> ! only rank 0 writes the single header value
> if (myid == 0) then
> call MPI_FILE_WRITE(fh, temp, count, MPI_REAL8, status, ierr)
> endif
>
> call MPI_TYPE_CREATE_INDEXED_BLOCK(single_no, 1, map, &
>      MPI_DOUBLE_PRECISION, filetype, ierr)
> call MPI_TYPE_COMMIT(filetype, ierr)
> disp = 8 ! ---> size of MPI_REAL8 (the header value written when myid = 0)
> call MPI_FILE_SET_VIEW(fh, disp, MPI_DOUBLE_PRECISION, filetype, &
>      'native', MPI_INFO_NULL, ierr)
> call MPI_FILE_WRITE_ALL(fh, q, single_no, MPI_DOUBLE_PRECISION, status, &
>      ierr)
> call MPI_FILE_CLOSE(fh, ierr)
>
>
> if (single_no .gt. 0) deallocate(map)
>
> if (single_no .gt. 0) deallocate(q)
>
> if (count .gt. 0) deallocate(temp)
>
> !--Finalize MPI
> call MPI_FINALIZE(irc) ! ---> Required statement
>
> stop
>
>
> end program test_MPI_write_adv2
>
> ===========================================================================================================
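>
> To sanity-check the result, a quick plain-Fortran reader along these lines
> (just a sketch, not part of the program; since the view uses 'native'
> representation, the file is raw doubles and stream I/O can read it back on
> the same machine):
>
> program check_output
> implicit none
> integer :: i, ios
> real(8) :: header, val
> open(10, file='test_write.bin', access='stream', form='unformatted', &
>      status='old', action='read')
> read(10) header
> write(*,'(a,f12.1)') 'header = ', header ! expect 2122010.0
> do i = 0, 10 ! the 11 slots filled via the file view
> read(10, iostat=ios) val
> if (ios /= 0) exit
> write(*,'(a,i2,a,f8.1)') 'slot ', i, ' = ', val
> end do
> close(10)
> end program check_output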
>
> Regards
> jody
>
> On Thu, Feb 25, 2010 at 2:47 AM, Terry Frankcombe <terry_at_[hidden]>
> wrote:
> > On Wed, 2010-02-24 at 13:40 -0500, w k wrote:
> >> Hi Jody,
> >>
> >> I don't think this part caused the problem. For Fortran, it doesn't
> >> matter if the pointer is NULL as long as the count requested by the
> >> process is 0. Actually I tested the code and it passed this part
> >> without problem. I believe it aborted at the MPI_FILE_SET_VIEW part.
> >>
> >
> > For the record: A pointer is not NULL unless you've nullified it.
> > IIRC, the Fortran standard says that any non-assigning reference to an
> > unassigned, unnullified pointer is undefined (or maybe illegal... check
> > the standard).
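> >
> > (A minimal sketch of the safe pattern, assuming a pointer like the q in
> > the program above:
> >
> > real(8), pointer :: q(:) => null() ! nullified at declaration (F95)
> >
> > if (associated(q)) then
> > ! safe to reference q here
> > end if
> >
> > An unallocated, un-nullified pointer has an undefined association status,
> > so even associated() on it is not well defined.)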
> >
> >