I was trying to use non-blocking MPI I/O and to my surprise,
MPI_File_iwrite() is *blocking*. Please see the attached iwrite-test.c
and run it (mpiexec -n 1 or standalone). This is what I get:
MPI_File_iwrite: 10.706 s
MPI_Wait: 0.000 s
I take that to mean that MPI_File_iwrite() blocks until the write is
complete, and MPI_Wait() has nothing to do and returns right away. I
_was_ expecting the iwrite to return immediately, so I can crunch
numbers in the meantime. It doesn't, so this non-blocking API gains me
nothing over the plain blocking MPI_File_write().
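For reference, the attached iwrite-test.c boils down to roughly the
following (a minimal sketch of what I'm measuring; the buffer size and
file name here are placeholders, not the exact values from the
attachment):

```c
/* Sketch of the timing test -- assumed shape of iwrite-test.c;
 * buffer size and file name are placeholders. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    const size_t n = 256 * 1024 * 1024;   /* 256 MiB, placeholder */
    char *buf = calloc(n, 1);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "iwrite-test.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    MPI_Request req;
    double t0 = MPI_Wtime();
    MPI_File_iwrite(fh, buf, (int)n, MPI_BYTE, &req);
    double t1 = MPI_Wtime();   /* expected: returns almost immediately */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    double t2 = MPI_Wtime();

    printf("MPI_File_iwrite: %.3f s\n", t1 - t0);
    printf("MPI_Wait:        %.3f s\n", t2 - t1);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

With truly non-blocking I/O I would expect nearly all the time to show
up in MPI_Wait; instead it all lands in MPI_File_iwrite.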
We checked what the standard calls for, but arrived at different
readings of the semantics of MPI's asynchronous file I/O APIs; a
blocking iwrite might or might not be compliant with the spec.
We can reproduce this behavior on Open MPI 1.3.3 (as well as Sun MPI 8.2)
and Intel MPI 3.2, on a NetApp file server and on a Lustre setup. Every
combination shows the expected throughput, but also a blocking
MPI_File_iwrite() and an essentially instantaneous MPI_Wait().
Please see the attached ompi_info_dump.txt for our environment. I
couldn't scare up a config.log just now; it probably never survived
beyond the deployment.
What can we do to get non-blocking MPI I/O to work as expected?
High Performance Computing Group
Center for Computing and Communication
RWTH Aachen University