On Wed, Jul 23, 2008 at 09:47:56AM -0400, Robert Kubrick wrote:
> HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
> think the API is easier than direct MPI-I/O, maybe even easier than raw
> read/writes given its support for hierarchical objects and metadata.
In addition to the API, parallel HDF5 and parallel-NetCDF are
high-level libraries that give you a self-describing, portable file
format. Pretty nice when collaborating with others. Plus there is a
host of viewers for these file formats, so that's one more thing you
don't have to worry about.
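For anyone who hasn't tried it, pointing HDF5 at the MPI-IO driver is
only a couple of calls. A rough, untested sketch (the file name is
made up for illustration):

#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* file access property list routing HDF5 I/O through MPI-IO */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* all ranks create/open the file together */
    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}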
> HDF5 supports multiple storage models and it supports MPI-IO.
> HDF5 has an open interface to access raw storage. This enables HDF5
> files to be written to a variety of media, including sequential files,
> families of files, memory, Unix sockets (i.e., a network).
> New "Virtual File" drivers can be added to support new storage access
> HDF5 also supports MPI-IO with Parallel HDF5. When building HDF5,
> parallel support is included by configuring with the --enable-parallel
> option. A tutorial for Parallel HDF5 is included with the HDF5 Tutorial.
It's a very good tutorial. Do read the parallel I/O chapter closely,
especially the parts about enabling collective I/O via property lists
and transfer templates. For many HDF5 workloads today, collective I/O
is the key to getting good performance (this was not always the case
back in the bad old days of MPICH1 and LAM, but has been since at
least the HDF5-1.6 series).
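To make the property-list part concrete, here is a rough, untested
sketch of a collective write (1.8-style API; file name, dataset name,
and sizes are made up, and it assumes an HDF5 built with
--enable-parallel):

#include <mpi.h>
#include <hdf5.h>

#define NCOLS 10

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* open the file through the MPI-IO driver, as in the earlier sketch */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("collective.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* one (nprocs x NCOLS) dataset; every rank owns one row */
    hsize_t dims[2] = {(hsize_t)nprocs, NCOLS};
    hid_t filespace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* select this rank's row in the file and a matching memory space */
    hsize_t start[2] = {(hsize_t)rank, 0}, count[2] = {1, NCOLS};
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(2, count, NULL);

    /* dataset transfer property list ("transfer template" in older docs)
     * asking HDF5 to perform this write as collective MPI-IO */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    int buf[NCOLS];
    for (int i = 0; i < NCOLS; i++)
        buf[i] = rank * NCOLS + i;
    H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

The only difference from an independent write is that one
H5Pset_dxpl_mpio call; drop it (or pass H5FD_MPIO_INDEPENDENT) and
every rank does its own I/O, which is usually much slower at scale.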
Mathematics and Computer Science Division
Argonne National Lab, IL USA
A215 0178 EA2D B059 8CDF  B29D F333 664A 4280 315B