Have you checked your compiler switches? Some have options to
perform IEEE arithmetic, which is supposed to give identical results.
PGI:
-Kieee / -Knoieee (default)
Perform floating-point operations in strict conformance with the
IEEE 754 standard. Some optimizations are disabled with -Kieee, and a
more accurate math library is used. The default -Knoieee uses faster
but very slightly less accurate methods.
Lahey ( lf95 ):
--ap / --nap. Compile only. Default: --nap
Specify --ap to guarantee the consistency of REAL and COMPLEX
calculations regardless of optimization level; user variables are not
assigned to registers...
We have found these necessary for regression-testing codes -
otherwise, very minor processor differences will generate different
rounding errors ( without any assistance from MPI ).
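The rounding differences mentioned above come straight from the fact that IEEE 754 addition is not associative, so any change in evaluation order (from optimization level, instruction selection, or reduction order) can flip the last bit. A minimal Python illustration (any IEEE 754 double-precision arithmetic behaves the same way):

```python
a, b, c = 0.1, 0.2, 0.3

# Mathematically identical sums, but the intermediate roundings differ.
left = (a + b) + c   # rounds a+b first
right = a + (b + c)  # rounds b+c first

print(repr(left))     # 0.6000000000000001
print(repr(right))    # 0.6
print(left == right)  # False
```

Neither answer is "wrong" - both are correctly rounded given their evaluation order, which is exactly why bitwise comparison across compilers or machines needs flags like -Kieee / --ap.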
Jim Conboy ( Culham Ctr for Fusion Energy )
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
Behalf Of Ashley Pittman
Sent: 26 April 2010 09:02
To: Open MPI Users
Subject: Re: [OMPI users] open-mpi behaviour on Fedora, Ubuntu,Debian
On 25 Apr 2010, at 22:27, Asad Ali wrote:
> Yes I use different machines such as
> machine 1 uses AMD Opterons. (Fedora)
> machine 2 and 3 use Intel Xeons. (CentOS)
> machine 4 uses slightly older Intel Xeons. (Debian)
> Only machine 1 gives correct results. While CentOS and Debian results
> are same but are wrong and different from those of machine 1.
Have you verified they are actually wrong, or are they just different?
It's actually perfectly possible for the same program to get different
results from run to run, even on the same hardware and the same OS. All
floating-point operations performed by the MPI library are expected to be
deterministic, but changing the process layout or any MPI settings can
affect this, and of course anything the application does can introduce
differences as well.
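Ashley's point about process layout can be reproduced without MPI at all. The sketch below (the function `mpi_style_sum` is purely illustrative, not a real MPI call) mimics a sum reduction: each "rank" sums its slice of the data, then the partial sums are combined in rank order. Changing only the number of ranks changes the grouping of the additions, and hence the rounding:

```python
import math

def mpi_style_sum(values, nranks):
    """Toy model of a sum reduction: split `values` across `nranks`
    processes, let each rank sum its slice sequentially, then add the
    partial sums together in rank order at the root."""
    chunk = math.ceil(len(values) / nranks)
    partials = [sum(values[i:i + chunk]) for i in range(0, len(values), chunk)]
    total = 0.0
    for p in partials:
        total += p
    return total

data = [0.1] * 10
for nranks in (1, 2, 5):
    # Same data, same operator -- only the "process layout" changes.
    print(f"{nranks} rank(s): {mpi_style_sum(data, nranks)!r}")
print(f"correctly rounded: {math.fsum(data)!r}")
```

On a typical IEEE 754 machine the one-rank total differs in the last bit from the multi-rank totals, even though every run is individually deterministic - which is why "different" does not necessarily mean "wrong".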
Ashley Pittman, Bath, UK.
Padb - A parallel job inspection tool for cluster computing