Yes, I can fill the buffer entirely with dummy values to ensure that the
memory allocated is actually used, so I don't think the problem is in
the OS.

> Allocating memory is one thing. Being able to use it is a completely
> different story. Once you allocate the 8GB array, can you fill it with
> some random values? This will force the kernel to really give you the
> 8GB of memory. If this segfaults, then that's the problem. If not ...
> the problem comes from Open MPI, I guess.
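The kind of fill test being described can be as small as the sketch
below (the 8 GB size, the memset() fill, and the program itself are
illustrative assumptions, not the actual test program mentioned above):

    /* Minimal sketch: allocate 8 GB and touch every byte so the kernel
     * must actually back the allocation with physical pages.  Assumes
     * a 64-bit system.  A segfault or OOM kill here would point at the
     * OS and its limits, not at Open MPI. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t nbytes = 8ULL * 1024 * 1024 * 1024;   /* 8 GB */
        char *buf = malloc(nbytes);

        if (buf == NULL) {
            fprintf(stderr, "malloc of %zu bytes failed\n", nbytes);
            return 1;
        }
        memset(buf, 0x5a, nbytes);   /* force pages to be committed */
        printf("filled %zu bytes OK\n", nbytes);
        free(buf);
        return 0;
    }

If this runs to completion, the OS really is handing over the memory,
which is what the reply above confirms.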
> Thanks,
>   george.
>
> On Aug 2, 2007, at 6:59 PM, Juan Carlos Guzman wrote:
>
>> Jelena, George,
>>
>> Thanks for your replies.
>>
>>> it is possible that the problem is not in MPI - I've seen a similar
>>> problem on some of our workstations some time ago.
>>> Juan, are you sure you can allocate more than 2x 4GB of data in a
>>> non-MPI program on your system?
>>
>> Yes, I did a small program that can allocate more than 8 GB of memory
>> (using malloc()).
>>
>> Cheers,
>> Juan-Carlos.
>>
>>> Thanks,
>>> Jelena
>>>
>>> On Wed, 1 Aug 2007, George Bosilca wrote:
>>>
>>>> Juan,
>>>>
>>>> I have to check to see what's wrong there. We build Open MPI with
>>>> full support for data transfers up to sizeof(size_t) bytes, so your
>>>> case should be covered. However, there are some known problems with
>>>> the MPI interface for data larger than sizeof(int). As an example,
>>>> the _count field in the MPI_Status structure will be truncated ...
>>>>
>>>> Thanks,
>>>>   george.
>>>>
>>>> On Jul 30, 2007, at 1:47 AM, Juan Carlos Guzman wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Does anyone know the maximum buffer size I can use in the
>>>>> MPI_Send() (MPI_Recv()) functions? I was doing some testing using
>>>>> two nodes on my cluster to measure the point-to-point MPI message
>>>>> rate depending on size. The test program exchanges MPI_FLOAT
>>>>> datatypes between the two nodes. I was able to send up to 4 GB of
>>>>> data (500 Mega MPI_FLOATs) before the process crashed with a
>>>>> segmentation fault message.
>>>>> Is the maximum size of a message limited by sizeof(int) *
>>>>> sizeof(MPI datatype) in the MPI_Send()/MPI_Recv() functions?
>>>>> My cluster has Open MPI 1.2.3 installed. Each node has 2 x
>>>>> dual-core AMD Opteron and 12 GB RAM.
>>>>>
>>>>> Thanks in advance.
>>>>> Juan-Carlos.
>
> --
> Jelena Pjesivac-Grbovic, Pjesa
> Graduate Research Assistant
> Innovative Computing Laboratory
> Computer Science Department, UTK
> Claxton Complex 350
> (865) 974 - 6722
> (865) 974 - 6321
>
> "The only difference between a problem and a solution is that
>  people understand the solution."
>                                       -- Charles Kettering
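A side note on the sizeof(int) limit discussed in this thread: because
the count argument of MPI_Send()/MPI_Recv() is an int, transfers of
more than INT_MAX elements have to be split by the caller. A minimal
sketch of such chunking (send_large is a hypothetical helper, not code
from this thread):

    /* Send an arbitrarily large float buffer by splitting it into
     * chunks that fit the int-typed count argument of MPI_Send(). */
    #include <limits.h>
    #include <mpi.h>

    static void send_large(float *buf, size_t count,
                           int dest, int tag, MPI_Comm comm)
    {
        size_t sent = 0;
        while (sent < count) {
            size_t left = count - sent;
            int chunk = left > (size_t)INT_MAX ? INT_MAX : (int)left;
            MPI_Send(buf + sent, chunk, MPI_FLOAT, dest, tag, comm);
            sent += (size_t)chunk;
        }
    }

The receiver would post matching chunked MPI_Recv() calls; note that
int-sized fields such as _count in MPI_Status remain a limitation, as
George points out above.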
------------------------------

Message: 2
Date: Wed, 1 Aug 2007 15:06:56 -0500
From: "Adams, Samuel D Contr AFRL/HEDR" <Samuel.Adams@BROOKS.AF.MIL>
Subject: Re: [OMPI users] torque and openmpi
To: "Open MPI Users" <email@example.com>
Message-ID:
Content-Type: text/plain; charset="us-ascii"

I reran the configure script with the --with-tm flag this time. Thanks
for the info. It was working before for clients with ssh properly
configured (i.e. my account only). But now it is working without having
to use ssh for all accounts (i.e. biologist and physicist users).

Sam Adams
General Dynamics Information Technology
Phone: 210.536.5945

-----Original Message-----
From: firstname.lastname@example.org [mailto:email@example.com] On
Behalf Of Jeff Squyres
Sent: Friday, July 27, 2007 2:58 PM
To: Open MPI Users
Subject: Re: [OMPI users] torque and openmpi

On Jul 27, 2007, at 2:48 PM, Galen Shipman wrote:

>> I set up ompi before I configured Torque. Do I need to recompile
>> ompi with appropriate torque configure options to get better
>> integration?
>
> If libtorque wasn't present on the machine at configure time, then
> yes, you need to run:
>
>     ./configure --with-tm=<path>

You don't *have* to do this, of course. If you've got it working
with ssh, that's fine. But the integration with Torque can be better:

- you can disable ssh for non-root accounts (assuming no other
  services need rsh/ssh)
- users don't have to set up ssh keys to run MPI jobs (a small thing,
  but sometimes nice when the users aren't computer scientists)
- Torque knows about all processes on all nodes (not just the mother
  superior) and can therefore both track and kill them if necessary

Just my $0.02...

--
Jeff Squyres
Cisco Systems

------------------------------

Message: 3
Date: Wed, 1 Aug 2007 20:58:44 -0400
From: Jeff Squyres <firstname.lastname@example.org>
Subject: Re: [OMPI users] unable to compile open mpi using pgf90 in AMD opteron system
To: Open MPI Users <email@example.com>
Message-ID: <5453C030-B7C9-48E1-BBA7-F04BCC43C9CB@cisco.com>
Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed

On Aug 1, 2007, at 11:38 AM, S.Sundar Raman wrote:

> dear openmpi users,
> i am trying to compile openmpi with the pgf90 compiler on an AMD
> opteron system.
> i followed the procedure given in the mailer archives.

What procedure are you referring to, specifically?

> i found the following problem.
> please kindly help me in this regard; i am eagerly waiting for
> your reply.
> make: Entering directory `/usr/local/openmpi-1.2.3/ompi/mpi/f90'
> /bin/sh ../../../libtool --mode=link pgf90 -I../../../ompi/include
> -I../../../ompi/include -I. -I. -I../../../ompi/mpi/f90
> -export-dynamic -o libmpi_f90.la -rpath /usr/local/mpi/lib mpi.lo
> mpi_sizeof.lo mpi_comm_spawn_multiple_f90.lo mpi_testall_f90.lo
> mpi_testsome_f90.lo mpi_waitall_f90.lo mpi_waitsome_f90.lo
> mpi_wtick_f90.lo mpi_wtime_f90.lo -lnsl -lutil -lm
> libtool: link: pgf90 -shared -fPIC -Mnomain .libs/mpi.o
> .libs/mpi_sizeof.o .libs/mpi_comm_spawn_multiple_f90.o
> .libs/mpi_testall_f90.o .libs/mpi_testsome_f90.o
> .libs/mpi_waitall_f90.o .libs/mpi_waitsome_f90.o
> .libs/mpi_wtick_f90.o .libs/mpi_wtime_f90.o -lnsl -lutil -lm
> -Wl,-soname -Wl,libmpi_f90.so.0 -o .libs/libmpi_f90.so.0.0.0
> /usr/bin/ld: .libs/mpi.o: relocation R_X86_64_PC32 against
> `__pgio_ini' can not be used when making a shared object; recompile
> with -fPIC

I can usually compile with the PGI compilers without needing to do
anything special (PGI v6.2-5 and 7.0-2), although I usually do add
the following option to configure:

    --with-wrapper-cxxflags=-fPIC

This puts "-fPIC" in the flags that the mpiCC wrapper compiler will
automatically insert when compiling MPI C++ applications.

Can you send all the information listed here:

--
Jeff Squyres
Cisco Systems

------------------------------

End of users Digest, Vol 657, Issue 1
*************************************