
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Memory question and possible bug in 64bit addressing under Leopard!
From: Audet, Martin (Martin.Audet_at_[hidden])
Date: 2008-04-25 16:43:46


This has nothing to do with the segmentation fault you got, but in addition to Brian's comment, I would remind you that in ISO C++ (the C++98 standard and the upcoming C++0x) the dimensions of local arrays must be constant expressions known at compile time.

In other words, a construct like:

        int n = 10000000;
        float X[n];

isn't standard compliant because n isn't a constant expression. It compiles only because g++ accepts it as an extension (try it with Visual C++, for example). A construct like:

        const int n = 10000000;
        float X[n];

however is standard compliant since n is a constant expression known at compile time.

Variable length arrays would allow the dimensions of local arrays to be set from any integral expression, whether or not it is constant or known at compile time. This feature was added to ISO C in the C99 standard, but not to C++.

Martin

-----Original Message-----
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On Behalf Of Brian Barrett
Sent: April 25, 2008 16:11
To: Open MPI Users
Subject: Re: [OMPI users] Memory question and possible bug in 64bit addressing under Leopard!

On Apr 25, 2008, at 2:06 PM, Gregory John Orris wrote:

> produces a core dump on a machine with 12Gb of RAM.
>
> and the error message
>
> mpiexec noticed that job rank 0 with PID 75545 on node mymachine.com
> exited on signal 4 (Illegal instruction).
>
> However, substituting in
>
> float *X = new float[n];
> for
> float X[n];
>
> Succeeds!

You're running off the end of the stack, because of the large amount
of data you're trying to put there. OS X by default has a tiny stack
size, so codes that run on Linux (which defaults to a much larger
stack size) sometimes show this problem. Your best bets are either to
increase the max stack size or (more portably) just allocate
everything on the heap with malloc/new.

Hope this helps,

Brian

--
   Brian Barrett
   Open MPI developer
   http://www.open-mpi.org/
_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users