Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] MPI_REAL16
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-06-21 09:16:13


Thanks for looking into this, David.

So if I understand that correctly, it means you have to write all the
literals in your Fortran program with a "_16" suffix. I don't know if
that's standard Fortran or not.
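
Just to make sure I'm reading that right, the difference would be
something like this (my own sketch, not a line from David's program):

      real*16 :: foo
      foo = 1.1      ! default-real literal; only widened on assignment
      foo = 1.1_16   ! the "_16" kind suffix makes the literal itself quad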

But I modified our configure test and now the types seem to match.
Can you give the Mercurial branch at the following URL a whirl and see
if it works for you:

     http://bitbucket.org/jsquyres/fortran-real16/
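
Also, for anyone reading this in the archives without the attachment:
here is a rough sketch of the kind of test you describe below. This is
my own reconstruction (program name and structure assumed), not the
actual quad_test.F, and it only exercises the ALLREDUCE path:

      ! Minimal MPI_REAL16 Allreduce check (assumed reconstruction)
      program quad_test_sketch
      implicit none
      include 'mpif.h'
      integer :: rank, nprocs, ierr
      real*16 :: a, b
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      a = 1.0_16
      call MPI_Allreduce(a, b, 1, MPI_REAL16, MPI_SUM,
     &                   MPI_COMM_WORLD, ierr)
      if (rank .eq. 0) then
         print *, 'Number of Nodes = ', nprocs
         print *, 'ALLREDUCE sum   = ', b
      end if
      call MPI_Finalize(ierr)
      end program quad_test_sketch

With 4 ranks (e.g., "mpirun -np 4"), the ALLREDUCE sum should come out
as 4.0, matching the ALLGATHER and ISEND/IRECV results from your full
test.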

On Jun 20, 2009, at 6:28 PM, David Robertson wrote:

> Hi Jeff,
>
> Below is the reply I got from Intel, and it seemed to work:
>
> David,
>
> I received your issue. There isn't an equivalent type to Real*16 in
> icc without the -Qoption,cpp,--extended_float_types option because
> there is no runtime library support for quad precision.
>
> Your test case has a bug in the Fortran code. Your literal is not quad
> precision. You need to do the assignment as follows:
>
> foo = 1.1_16
>
> After making this change you will still see "fortran equal", and a and
> b will also be equal.
>
> Can you also request that the owner of the code online make this
> correction?
>
> Please let me know if you have additional questions.
>
> Regards,
>
> Elizabeth S.
> Intel Developer Support
>
>
> Jeff Squyres wrote:
> > Greetings David.
> >
> > I think we should have a more explicit note about MPI_REAL16 support
> > in the README.
> >
> > This issue has come up before; see
> > https://svn.open-mpi.org/trac/ompi/ticket/1603.
> >
> > If you read through that ticket, you'll see that I was unable to find
> > a C equivalent type for REAL*16 with the Intel compilers. This is what
> > blocked us from making that work. :-\ But then again, I haven't tried
> > the test codes on that ticket with the Intel 11.0 compilers to see
> > what would happen (last tests were with 10.something). It *seems* to
> > be a compiler issue, but I confess that we never had a high enough
> > priority to follow through and figure it out completely.
> >
> > If you have an Intel support contract, you might want to take some of
> > the final observations on #1603 (e.g., the test codes I put near the
> > end) and see what Intel has to say about it. Perhaps we're doing
> > something wrong...?
> >
> > I hate to pass the buck here, but I unfortunately have a whole pile
> > of higher-priority items that I need to work on...
> >
> >
> >
> > On Jun 19, 2009, at 1:32 PM, David Robertson wrote:
> >
> >> Hi all,
> >>
> >> I have compiled Open MPI 1.3.2 with Intel Fortran and C/C++ 11.0
> >> compilers. Fortran Real*16 seems to be working except for
> >> MPI_Allreduce. I have attached a simple program to show what I mean.
> >> I am not an MPI programmer but I work for one and he actually wrote
> >> the attached program. The program sets a variable to 1 on all
> >> processes then sums.
> >>
> >> Running with real*8 (comment out #define REAL16 in quad_test.F)
> >> produces the expected results:
> >>
> >> Number of Nodes = 4
> >>
> >> ALLREDUCE sum = 4.00000000000000
> >> ALLGATHER sum = 4.00000000000000
> >> ISEND/IRECV sum = 4.00000000000000
> >>
> >> Node = 0 Value = 1.00000000000000
> >> Node = 2 Value = 1.00000000000000
> >> Node = 3 Value = 1.00000000000000
> >> Node = 1 Value = 1.00000000000000
> >>
> >> Running with real*16 produces the following:
> >>
> >> Number of Nodes = 4
> >>
> >> ALLREDUCE sum = 1.00000000000000000000000000000000
> >> ALLGATHER sum = 4.00000000000000000000000000000000
> >> ISEND/IRECV sum = 4.00000000000000000000000000000000
> >> Node = 0 Value = 1.00000000000000000000000000000000
> >> Node = 1 Value = 1.00000000000000000000000000000000
> >> Node = 2 Value = 1.00000000000000000000000000000000
> >> Node = 3 Value = 1.00000000000000000000000000000000
> >>
> >> As I mentioned, I'm not a parallel programmer, but I would expect
> >> similar results from identical operations on real*8 and real*16
> >> variables.
> >>
> >> NOTE: I get the same behavior with MPICH and MPICH2.
> >>
> >
> >
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>

-- 
Jeff Squyres
Cisco Systems