Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Initializing OMPI with invoking the array constructor on Fortran derived types causes the executable to crash
From: Gus Correa (gus_at_[hidden])
Date: 2013-01-15 17:44:26

Hi Hristo, Stefan

This is more a Fortran question than an MPI or Open MPI one, but anyway ...

Thank you for clarifying the point about F2003 automatic
allocation on assignment.
What was illegal in the 20th century
became legal in the new millennium,
at least when it comes to Fortran.
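
To make sure we are talking about the same thing, here is a minimal
sketch of that feature as I understand it now (my own toy example,
not taken from Stefan's code):

   PROGRAM auto_alloc
      IMPLICIT NONE
      REAL, ALLOCATABLE :: v(:)
      ! F2003: the unallocated LHS is automatically allocated
      ! (or reallocated) to match the shape of the RHS.
      v = [ 1.0, 2.0, 3.0 ]
      WRITE(*,*) SIZE(v)   ! prints 3
      ! Under F90/F95 rules, v would need an explicit ALLOCATE first.
   END PROGRAM auto_alloc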

I confess I have to read each sentence in Metcalf's book
several times to try to understand the nifty new Matlab-ish
syntax and semantics of 21st-century Fortran ..., and alas,
I am still interpreting it wrong.
The only consolation is that even the GNU compiler seems to be
in my boat! :)

I don't know how much is gained by concatenating arrays that don't
exist yet (or objects that have an allocatable but not yet allocated
array as their single component), nor how much memory-management
contortion and fragmentation this may entail,
but there may be some advantage to it.

Stefan reports that if he explicitly allocates the arrays,
"everything is fine".
So, why not? :)
If nothing else it helps portability, as it would allow compilation
with compilers that are not yet fully F2003 compliant,
older F90 compilers, etc.
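
As a sketch of what I mean, here is Stefan's example rewritten with
explicit allocation and no MPI (my guess at the workaround he used,
with an arbitrary component size of 10; untested):

   PROGRAM explicit_alloc
      IMPLICIT NONE
      TYPE :: test_typ
         REAL, ALLOCATABLE :: a(:)
      END TYPE
      TYPE(test_typ) :: xx, yy
      TYPE(test_typ), ALLOCATABLE :: conc(:)
      ! Allocate everything explicitly instead of relying on
      ! F2003 automatic allocation on assignment.
      ALLOCATE( xx%a(10), yy%a(10) )
      ALLOCATE( conc(2) )
      conc(1) = xx   ! intrinsic assignment deep-copies the
      conc(2) = yy   ! allocatable component
      WRITE(*,*) SIZE(conc)   ! prints 2
   END PROGRAM explicit_alloc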

I may be wrong, but my recollection, from a situation that
happened here years ago
and which may bear some similarity to the problem reported by Stefan,
is that Intel ifort would assign one memory word to
allocatable arrays even before they were actually allocated,
so that you could take a cavalier approach and refer to those
arrays on the RHS of expressions at any time,
whereas GNU gfortran required explicit allocation before
those arrays were used.
At the time, the ifort behavior seemed to be a lenient extension,
whereas the gfortran behavior was stricter compliance with the standard
(which was still F90 or F95 then, IIRC).
However, this may have changed in more recent versions of those
compilers, perhaps also to conform with more recent versions of the
Fortran standard.
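
To illustrate the kind of cavalier code I mean, here is a contrived,
deliberately non-conforming sketch (not from anybody's real program):

   PROGRAM unalloc_ref
      IMPLICIT NONE
      REAL, ALLOCATABLE :: w(:)
      REAL :: s
      ! Non-conforming: w is referenced on the RHS before it is
      ! ever allocated.  A lenient compiler may let this "work";
      ! a strict one will fail at compile time or at run time.
      s = SUM(w)
      WRITE(*,*) s
   END PROGRAM unalloc_ref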

Thank you,
Gus Correa

On 01/14/2013 04:48 AM, Iliev, Hristo wrote:
> Hi, Gus,
> Automatic allocation (and reallocation) on assignment is among the nifty
> features of Fortran 2003. In this case "conc" is automatically allocated
> so as to match the shape of its initialiser array "[ xx, yy ]". Note that
> "xx" and "yy" are not allocatable, though their derived type has an
> allocatable component.
> Kind regards,
> Hristo Iliev
>> -----Original Message-----
>> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]]
>> On Behalf Of Gus Correa
>> Sent: Friday, January 11, 2013 7:19 PM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] Initializing OMPI with invoking the array
>> constructor on Fortran derived types causes the executable to crash
>> Hi Stefan
>> Don't you need to allocate xx, yy and conc, before you use them?
>> In the short program below, they are declared as allocatable, but not
>> actually allocated.
>> I hope this helps,
>> Gus Correa
>> On 01/11/2013 09:58 AM, Stefan Mauerberger wrote:
>>> Dear Paul!
>>> Thanks for your reply. This problem seems to get complicated.
>>> Unfortunately, I cannot reproduce what you are describing. I tried
>>> several GCCs: 4.7.1, 4.7.2 and 4.8.0 (20121008). As you suggested,
>>> replacing the MPI_Init and MPI_Finalize calls with WRITE(*,*) "foooo"
>>> and commenting out use mpi, everything is just fine. No segfault, no
>>> core dump, just the result I expect (I put a write(*,*)
>>> size(conc) in, which must print 2). I simply compiled with a bare
>>> mpif90 ... and executed with mpirun -np 1 ./a.out .
>>> I also tried on three different architectures - all 64-bit - and, as
>>> soon as MPI_Init is invoked, the program dumps core.
>>> I also tried IBM's MPI implementation, the only difference being the
>>> use of include 'mpif.h' instead of use mpi. Everything is fine and the
>>> result is the same as in serial runs.
>>> Well, it's not surprising that 4.4.x has its problems. For modern
>>> Fortran such as F2003, a GCC version of 4.7.x or newer is simply
>>> mandatory.
>>> Cheers,
>>> Stefan
>>> On Fri, 2013-01-11 at 14:26 +0100, Paul Kapinos wrote:
>>>> This is hardly an Open MPI issue:
>>>> replace the calls to MPI_Init and MPI_Finalize with
>>>> WRITE(*,*) "foooo"
>>>> comment out 'USE mpi' ... and see your error (SIGSEGV) again, now
>>>> without any MPI part in the program.
>>>> So my suspicion is that this is a bug in your GCC version, especially
>>>> because there is no SIGSEGV using GCC 4.7.2 (whereas it crashes using
>>>> 4.4.6).
>>>> ==> Update your compilers!
>>>> On 01/11/13 14:01, Stefan Mauerberger wrote:
>>>>> Hi There!
>>>>> First of all, this is my first post here. In case I am doing
>>>>> something inappropriate, please be gentle with me. On top of that, I
>>>>> am not quite sure whether this issue is related to Open MPI or GCC.
>>>>> Regarding my problem: well, it is a little bulky, see below. I could
>>>>> figure out that the actual crash is caused by invoking Fortran's
>>>>> array constructor [ xx, yy ] on derived data types xx and yy. The
>>>>> one key factor is that those types have allocatable member variables.
>>>>> That fact points to blaming gfortran. However, the
>>>>> crash does not occur if MPI_Init is not called beforehand. Compiled
>>>>> as a serial program, everything works perfectly fine. I am pretty
>>>>> sure the lines I wrote are valid F2003 code.
>>>>> Here is a minimal working example:
>>>>> PROGRAM main
>>>>>    USE mpi
>>>>>    INTEGER :: ierr
>>>>>    TYPE :: test_typ
>>>>>       REAL, ALLOCATABLE :: a(:)
>>>>>    END TYPE
>>>>>    TYPE(test_typ) :: xx, yy
>>>>>    TYPE(test_typ), ALLOCATABLE :: conc(:)
>>>>>    CALL mpi_init( ierr )
>>>>>    conc = [ xx, yy ]
>>>>>    CALL mpi_finalize( ierr )
>>>>> END PROGRAM main
>>>>> Compiling with mpif90 ... and executing leads to:
>>>>>> *** glibc detected *** ./a.out: free(): invalid pointer:
>>>>>> 0x00007fefd2a147f8 ***
>>>>>> ======= Backtrace: =========
>>>>>> /lib/x86_64-linux-gnu/[0x7fefd26dab96]
>>>>>> ./a.out[0x400fdb]
>>>>>> ./a.out(main+0x34)[0x401132]
>>>>>> /lib/x86_64-linux-gnu/[0x7fefd267d76d]
>>>>>> ./a.out[0x400ad9]
>>>>> Commenting out 'CALL MPI_Init' and 'MPI_Finalize', everything
>>>>> seems to be fine.
>>>>> What do you think: is this an OMPI-related or a GCC-related bug?
>>>>> Cheers,
>>>>> Stefan
> --
> Hristo Iliev, Ph.D. -- High Performance Computing
> RWTH Aachen University, Center for Computing and Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23, D 52074 Aachen (Germany)