
Subject: Re: [OMPI users] Can compute, but can not output files
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-05-03 11:14:41


On Apr 30, 2010, at 10:36 PM, JiangjunZheng wrote:

> I am using Rocks + Open MPI + HDF5 + PVFS2. The software on the Rocks+PVFS2 cluster outputs HDF5 files after it finishes computing. However, when the output starts, it shows errors:
> [root_at_nanohv pvfs2]# ./hdf5_mpio DH-ey-001400.20.h5
> Testing simple C MPIO program with 1 processes accessing file DH-ey-001400.20.h5
> (Filename can be specified via program argument)
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: MPI_File_open failed (MPI_ERR_IO: input/output error)
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1.
>
> If run in a non-shared folder on the main node, the program runs fine. It shows:
> Proc 0: hostname=nanohv.columbia.edu
> Proc 0: all tests passed

This seems to indicate that the file failed to open in your first test. Since the same program passes when run against a non-shared folder, the problem is likely with the shared PVFS2 mount rather than with the program itself.
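
If you want to isolate whether the failure is in the MPI-IO layer rather than in HDF5, a bare MPI_File_open test against the PVFS2 mount is a quick check. Here is a minimal sketch (the open flags are an assumption; the HDF5 test may open the file differently):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_File fh;
    char msg[MPI_MAX_ERROR_STRING];
    int rc, len;

    MPI_Init(&argc, &argv);
    if (argc < 2) {
        fprintf(stderr, "usage: %s <filename>\n", argv[0]);
        MPI_Finalize();
        return 1;
    }

    /* The default error handler on file handles is MPI_ERRORS_RETURN,
       so we can inspect the return code instead of aborting. */
    rc = MPI_File_open(MPI_COMM_WORLD, argv[1],
                       MPI_MODE_CREATE | MPI_MODE_RDWR,
                       MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        /* Decode the MPI error code into a readable message. */
        MPI_Error_string(rc, msg, &len);
        printf("MPI_File_open(%s) failed: %s\n", argv[1], msg);
    } else {
        printf("MPI_File_open(%s) succeeded\n", argv[1]);
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}

Compile it with mpicc and run it once against a file in the PVFS2 directory and once against a local directory; if it fails only on the PVFS2 mount, the problem is below HDF5, in MPI-IO or the mount itself.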

Given that this is an HDF5 test program, you might want to ping the HDF5 developers for more details...?

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/