It is possible that the problem is not in MPI - I saw a similar problem
on some of our workstations a while ago.
Juan, are you sure you can allocate more than 2 x 4 GB of data in a
non-MPI program on your system?
On Wed, 1 Aug 2007, George Bosilca wrote:
> I have to check to see what's wrong there. We build Open MPI with
> full support for data transfers up to sizeof(size_t) bytes, so your
> case should be covered. However, there are some known problems with
> the MPI interface for data larger than sizeof(int). As an example,
> the _count field in the MPI_Status structure will be truncated ...
> On Jul 30, 2007, at 1:47 AM, Juan Carlos Guzman wrote:
>> Does anyone know the maximum buffer size I can use in the MPI_Send()
>> (MPI_Recv) functions? I was doing some testing using two nodes on my
>> cluster to measure the point-to-point MPI message rate as a function
>> of message size. The test program exchanges MPI_FLOAT datatypes
>> between two nodes. I was able to send up to 4 GB of data (500 mega
>> MPI_FLOATs) before the process crashed with a segmentation fault.
>> Is the maximum message size limited to sizeof(int) * sizeof(MPI
>> datatype) in the MPI_Send()/MPI_Recv() functions?
>> My cluster has Open MPI 1.2.3 installed. Each node has 2 x dual-core
>> AMD Opteron and 12 GB RAM.
>> Thanks in advance.
>> users mailing list
Jelena Pjesivac-Grbovic, Pjesa
Graduate Research Assistant
Innovative Computing Laboratory
Computer Science Department, UTK
Claxton Complex 350
(865) 974 - 6722
(865) 974 - 6321
"The only difference between a problem and a solution is that
people understand the solution."
-- Charles Kettering