
Subject: Re: [OMPI users] Best way to reduce 3D array
From: Gus Correa (gus_at_[hidden])
Date: 2010-03-30 20:29:17


Hello, Ricardo Reis!

How is Radio Zero doing?

Doesn't this serialize the I/O operation across the processes,
whereas MPI_Gather followed by rank-0 I/O may move the data
to rank 0, and eventually to disk, faster
(particularly when the number of processes is large)?
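The gather-then-write alternative could look roughly like the sketch below. This is a minimal, untested illustration, assuming each rank holds an equal-sized contiguous chunk ("nlocal" elements) of a 1D-decomposed array; all variable names and the file name are illustrative, not from the thread:

```fortran
program gather_write
  use mpi
  implicit none
  integer, parameter :: nlocal = 1000
  integer :: ierr, my_rank, nprocs
  real(kind=8) :: local(nlocal)
  real(kind=8), allocatable :: global(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  local = real(my_rank, kind=8)   ! placeholder data for this rank's chunk

  ! Only the root needs the full array
  if (my_rank == 0) allocate(global(nlocal*nprocs))

  ! Collect every rank's chunk on rank 0, in rank order
  call MPI_Gather(local, nlocal, MPI_DOUBLE_PRECISION, &
                  global, nlocal, MPI_DOUBLE_PRECISION, &
                  0, MPI_COMM_WORLD, ierr)

  ! Only rank 0 touches the file system
  if (my_rank == 0) then
     open(unit=10, file='array.dat', form='unformatted', access='stream')
     write(10) global
     close(10)
     deallocate(global)
  end if

  call MPI_Finalize(ierr)
end program gather_write
```

The trade-off is memory: rank 0 must hold the whole array, which the rank-by-rank append loop avoids.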

I had never thought of your solution,
so I never tried, tested, or compared it
against the conventional approach I suggested to Derek.
So, I really don't know the answer.

Regards,
Gus

Ricardo Reis wrote:
>
> If using the master/slave I/O model, would it be better to cycle through
> all the processes, with each one writing its part of the array into the
> file? This file would be opened in "stream" mode...
>
> like
>
> do p=0,nprocs-1
>
> if(my_rank.eq.p)then
>
> openfile (append mode)
> write_to_file
> closefile
>
> endif
>
> call MPI_Barrier(world,ierr)
>
> enddo
>
>
> cheers,
>
>
> Ricardo Reis
>
> 'Non Serviam'
>
> PhD candidate @ Lasef
> Computational Fluid Dynamics, High Performance Computing, Turbulence
> http://www.lasef.ist.utl.pt
>
> Cultural Instigator @ Rádio Zero
> http://www.radiozero.pt
>
> Keep them Flying! Help Aero Fénix!
>
> http://www.aeronauta.com/aero.fenix
>
> http://www.flickr.com/photos/rreis/
>
> < sent with alpine 2.00 >
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users