Open MPI User's Mailing List Archives

From: Brian Barrett (bbarrett_at_[hidden])
Date: 2007-05-02 14:33:15


Yup, it does. There's nothing in the standard that says it isn't
allowed to. Given the number of system/libc calls involved in doing
communication, pretty much every MPI function is going to change the
value of errno. If your application depends on errno being preserved
across MPI calls, I'd change the application; most cluster-based MPI
implementations will change errno out from under you.
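If you do need errno for your own libc calls, the usual defensive
pattern is to clear it right before the call you care about and save
it right after, before any MPI call can overwrite it. A minimal
sketch (the file path here is just a placeholder):

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    /* errno is only meaningful immediately after a failing call that
       is documented to set it, so clear it before the call and save
       it before calling anything else (including MPI). */
    errno = 0;
    FILE* f = fopen("/nonexistent/file", "r");  /* placeholder path */
    int saved_errno = errno;
    if (f == NULL) {
        fprintf(stderr, "fopen failed: %s\n", strerror(saved_errno));
    } else {
        fclose(f);
    }

    MPI_Finalize();
    return 0;
}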

Brian

On May 2, 2007, at 12:18 PM, Chudin, Eugene wrote:

> I am experimenting with Open MPI, and the following trivial code
> (although it runs) changes the value of errno:
>
> #include <cerrno>
> #include <cstring>   // for strerror
> #include <iostream>  // for std::cout
> #include <mpi.h>
>
> int main(int argc, char** argv)
> {
>     int _procid, _np;
>     std::cout << "errno=\t" << errno << std::endl;
>     MPI_Init(&argc, &argv);
>     std::cout << "errno=\t" << errno << "\tafter MPI_Init()" << std::endl;
>     MPI_Comm_rank(MPI_COMM_WORLD, &_procid);
>     MPI_Comm_size(MPI_COMM_WORLD, &_np);
>     std::cout << "errno msg=\t" << strerror(errno) << "\tprocessor=\t" << _procid << std::endl;
>     MPI_Finalize();
>     return 0;
> }
>
> Compiled with
> mpiCC -Wall test.cpp -o test
>
> it produces the following output when run on a single processor using
> mpirun -np 1 --prefix /toolbox/openmpi ./test
> errno= 0
> errno= 2 after MPI_Init()
> errno msg= No such file or directory processor= 0
>
> When run on two processors using
> mpirun -np 2 --prefix /toolbox/openmpi ./test
> errno= 0
> errno= 0
> errno= 11 after MPI_Init()
> errno= 115 after MPI_Init()
> errno msg= Operation now in progress processor= 0
> errno msg= Resource temporarily unavailable processor= 1
>
> The output of ompi_info --all is attached
>
> <<ompi_info.txt>>