Hi Steven, Dmitry
Not sure if this web page is still valid or totally out of date,
but here it goes anyway, in the hope that it may help:
On the other hand, one expert seems to dismiss NFS
for parallel IO:
I must say that this has been a gray area for me too.
It would be nice if the MPI documentation - or that of the
various MPIs - told us a bit more clearly which types of
underlying file system support MPI parallel IO:
local disks (ext?, xfs, etc.), NFS mounts,
and the various parallel file systems (PVFS/OrangeFS, Lustre, etc.).
And perhaps provided some setup information, plus
functionality and performance comparisons.
My two cents,
On 11/07/2013 12:21 PM, Dmitry N. Mikushin wrote:
> Not sure if this is related, but:
> I've seen a case of performance degradation on NFS and Lustre when
> writing NetCDF files. The reason was that the file was filled by a
> loop writing one 4-byte record at a time. Performance became close to
> that of a local hard drive when I simply buffered the records and
> wrote them to the file one row at a time.
> - D.
> 2013/11/7 Steven G Johnson<stevenj_at_[hidden]>:
>> The simple C program attached below hangs on MPI_File_write when I am using an NFS-mounted filesystem. Is MPI-IO supported in OpenMPI for NFS filesystems?
>> I'm using OpenMPI 1.4.5 on Debian stable (wheezy), 64-bit Opteron CPU, Linux 3.2.51. I was surprised by this because the problems only started occurring recently when I upgraded my Debian system to wheezy; with OpenMPI in the previous Debian release, output to NFS-mounted filesystems worked fine.
>> Is there any easy way to get this working? Any tips are appreciated.
>> Steven G. Johnson
>> #include <stdio.h>
>> #include <string.h>
>> #include <mpi.h>
>>
>> void perr(const char *label, int err)
>> {
>>     char s[MPI_MAX_ERROR_STRING];
>>     int len;
>>     MPI_Error_string(err, s, &len);
>>     printf("%s: %d = %s\n", label, err, s);
>> }
>>
>> int main(int argc, char **argv)
>> {
>>     MPI_File fh;
>>     int err;
>>     MPI_Init(&argc, &argv);
>>     err = MPI_File_open(MPI_COMM_WORLD, "tstmpiio.dat", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
>>     perr("open", err);
>>     const char *s = "Hello world!\n";
>>     MPI_Status status;
>>     err = MPI_File_write(fh, (void*) s, strlen(s), MPI_CHAR, &status);
>>     perr("write", err);
>>     err = MPI_File_close(&fh);
>>     perr("close", err);
>>     MPI_Finalize();
>>     return 0;
>> }
>> users mailing list