On 04/08/2014 05:49 PM, Daniel Milroy wrote:
> The file system in question is indeed Lustre, and mounting with flock
> isn't possible in our environment. I recommended the following changes
> to the user's code:
Hi. I'm the ROMIO guy, though I do rely on the community to help me
keep the Lustre driver up to snuff.
> MPI_Info_set(info, "collective_buffering", "true");
> MPI_Info_set(info, "romio_lustre_ds_in_coll", "disable");
> MPI_Info_set(info, "romio_ds_read", "disable");
> MPI_Info_set(info, "romio_ds_write", "disable");
> Which results in the same error as before. Are there any other MPI
> options I can set?
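For anyone following along, here is a minimal sketch of how hints like the ones above get attached to a file: they go into an MPI_Info object that is passed at open time. The filename and access mode here are hypothetical, and note that an implementation is free to ignore hints; MPI_File_get_info reports the ones actually in effect.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Build the info object carrying the ROMIO hints from the message
     * above. Hint keys and values are plain strings. */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_ds_read", "disable");
    MPI_Info_set(info, "romio_ds_write", "disable");

    /* Hints are supplied at open time (or later via MPI_File_set_info). */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile",   /* hypothetical filename */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```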
I'd like to hear more about the workload generating these lock messages,
but I can tell you the situations in which ADIOI_SetLock gets called:
- everywhere in NFS. If you have a Lustre file system exported to some
clients as NFS, those clients will take the NFS path and lock on every
access (er, that might not be true unless you pick up a recent patch)
- when writing a non-contiguous region in file, unless you disable data
sieving, as you did above.
- note: you don't need to disable data sieving for reads, though you
might want to if the data sieving algorithm is reading a lot of data
you don't actually need.
- if atomic mode was set on the file (i.e. you called
MPI_File_set_atomicity with the flag set to true)
- if you use any of the shared file pointer operations
- if you use any of the ordered mode collective operations
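Since atomic mode in particular can be switched on by accident, a quick sanity check is to query the atomicity flag on the open file handle. This sketch assumes an already-opened MPI_File handle named fh:

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: verify that atomic mode is off on an open file handle.
 * Atomic mode is one of the cases that forces ADIOI_SetLock. */
static void check_atomicity(MPI_File fh)
{
    int flag;
    MPI_File_get_atomicity(fh, &flag);
    if (flag) {
        printf("atomic mode is on; this will trigger file locking\n");
        MPI_File_set_atomicity(fh, 0);  /* 0 = non-atomic, the default */
    }
}
```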
You've turned off data sieving writes, which is the case I would have
first guessed was triggering this lock message, so I suspect you are
hitting one of the other cases.
Mathematics and Computer Science Division
Argonne National Lab, IL USA