
Subject: Re: [OMPI users] Best way to reduce 3D array
From: Gus Correa (gus_at_[hidden])
Date: 2010-03-30 20:29:17


Hello Ricardo Reis!

How is Rádio Zero doing?

Doesn't this serialize the I/O operation across the processors,
whereas MPI_Gather followed by rank 0 I/O may move the data
faster to rank 0, and eventually to disk
(particularly when the number of processes is large)?
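
Just to make the comparison concrete, here is roughly what I had in mind:
a minimal, untested sketch of gather-then-write. The array names, the
local size, and the output file name 'field.dat' are just placeholders,
and every rank is assumed to hold the same number of elements.

program gather_then_write
  use mpi
  implicit none

  integer :: ierr, my_rank, nprocs, nlocal
  real, allocatable :: local_chunk(:), full_array(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  nlocal = 1000                        ! each rank's share (assumed equal)
  allocate(local_chunk(nlocal))
  local_chunk = real(my_rank)          ! stand-in for the real 3D data

  if (my_rank == 0) then
     allocate(full_array(nlocal*nprocs))
  else
     allocate(full_array(1))           ! dummy; only significant on rank 0
  end if

  ! one collective call moves every rank's piece to rank 0
  call MPI_Gather(local_chunk, nlocal, MPI_REAL, &
                  full_array,  nlocal, MPI_REAL, &
                  0, MPI_COMM_WORLD, ierr)

  ! only rank 0 touches the disk
  if (my_rank == 0) then
     open(unit=10, file='field.dat', form='unformatted', access='stream')
     write(10) full_array
     close(10)
  end if

  call MPI_Finalize(ierr)
end program gather_then_write

With something like that in hand, one could time both schemes on a
realistic process count and see which one actually gets the data to
disk faster.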

I had never thought of your solution, so I never tried, tested,
or compared it against the common-wisdom suggestion I made to Derek.
Hence, I really don't know the answer.

Cheers,
Gus

Ricardo Reis wrote:
>
> If using the master/slave I/O model, would it be better to cycle through
> all the processes, so that each one writes its part of the array into the
> file? This file would be opened in "stream" mode...
>
> like
>
> do p=0,nprocs-1
>
>    if(my_rank.eq.p)then
>
>       openfile (append mode)
>       write_to_file
>       closefile
>
>    endif
>
>    call MPI_Barrier(world,ierr)
>
> enddo
>
>
> cheers,
>
>
> Ricardo Reis
>
> 'Non Serviam'
>
> PhD candidate @ Lasef
> Computational Fluid Dynamics, High Performance Computing, Turbulence
> http://www.lasef.ist.utl.pt
>
> Cultural Instigator @ Rádio Zero
> http://www.radiozero.pt
>
> Keep them Flying! Ajude a/help Aero Fénix!
>
> http://www.aeronauta.com/aero.fenix
>
> http://www.flickr.com/photos/rreis/
>
> < sent with alpine 2.00 >
>
>