
Open MPI User's Mailing List Archives


Subject: [OMPI users] Re: can you help me please? thanks
From: 胡杨 (781578278_at_[hidden])
Date: 2013-12-05 06:27:11


Three nodes with 3 ranks, and number is the size of the array.
int *a = (int *)malloc(sizeof(int) * number);
MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);

int *b = (int *)malloc(sizeof(int) * number);
MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
Why? Is there a limit on the speed between MPI_Send and MPI_Recv?
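
For reference, here is a self-contained version of that pattern with a barrier and MPI_Wtime added, so the transfer time for a given number can be measured directly. The rank layout (rank 0 sends, rank 1 receives) and the timing code are assumptions added for illustration; they are not part of the original program.

/* Sketch: rank 0 sends number ints to rank 1 and the transfer is timed.
   Compile with mpicc; run e.g.: mpirun -np 3 -machinefile machines ./a.out 15000 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i;
    int number = (argc > 1) ? atoi(argv[1]) : 15000;  /* array size, as in the question */
    double t0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both sides together before timing */
    t0 = MPI_Wtime();

    if (rank == 0) {
        int *a = (int *)malloc(sizeof(int) * number);
        for (i = 0; i < number; i++)
            a[i] = i;              /* fill the buffer before sending */
        MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
        free(a);
    } else if (rank == 1) {
        int *b = (int *)malloc(sizeof(int) * number);
        MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("received %d ints in %.6f s\n", number, MPI_Wtime() - t0);
        free(b);
    }

    MPI_Finalize();
    return 0;
}

Timing this one transfer in isolation, for several values of number, should show whether the slowdown is in the message passing itself or elsewhere in the program.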
   

 

------------------ Original Message ------------------
From: "Ralph Castain" <rhc_at_[hidden]>
Sent: Thursday, December 5, 2013, 6:52 PM
To: "Open MPI Users" <users_at_[hidden]>
Subject: Re: [OMPI users] can you help me please? thanks

 

You are running 15000 ranks on two nodes?? My best guess is that you are swapping like crazy as your memory footprint exceeds available physical memory.
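
To put rough numbers on that guess (a back-of-the-envelope sketch: only number = 15000 comes from the thread; the ranks-per-node and per-process overhead values below are placeholders to adjust to the real job):

/* Rough per-node memory estimate: data buffers plus per-process runtime overhead. */
#include <stdio.h>

int main(void)
{
    long ranks_per_node = 7500;    /* placeholder: e.g. 15000 ranks spread over 2 nodes */
    long number         = 15000;   /* ints malloc'ed per rank, from the question */
    long overhead_mb    = 30;      /* assumed per-process runtime footprint in MB */

    double data_mb  = (double)ranks_per_node * number * sizeof(int) / (1024.0 * 1024.0);
    double total_mb = data_mb + (double)ranks_per_node * (double)overhead_mb;

    printf("buffers per node: %.1f MB\n", data_mb);
    printf("total per node  : %.1f MB\n", total_mb);
    /* Once this total passes the node's physical RAM, the node starts swapping. */
    return 0;
}

With thousands of processes per node, the per-process overhead dominates the arrays themselves, which is consistent with the swapping explanation above.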


 

On Thu, Dec 5, 2013 at 1:04 AM, 胡杨 <781578278_at_[hidden]> wrote:
My ROCKS cluster includes one frontend and two compute nodes. In my program I use the Open MPI API, such as MPI_Send and MPI_Recv, but when I run the program with 3 processes, one process sends a message and the others receive it. Here is some code:
int *a = (int *)malloc(sizeof(int) * number);
MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);

int *b = (int *)malloc(sizeof(int) * number);
MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  
When number is less than 10000, it runs fast, but when number is more than 15000, it runs slowly.

Why? Is it because of the Open MPI API, or some other problem?

------------------ Original Message ------------------
From: "Ralph Castain" <rhc_at_[hidden]>
Sent: Tuesday, December 3, 2013, 1:39 PM
To: "Open MPI Users" <users_at_[hidden]>
Subject: Re: [OMPI users] can you help me please? thanks

 

 
 

 
 
On Mon, Dec 2, 2013 at 9:23 PM, 胡杨 <781578278_at_[hidden]> wrote:
A simple program on my 4-node ROCKS cluster runs fine with this command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
 

Another, bigger program runs fine on the head node only, with this command:
 
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
 
But with the command:
 
cd /sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines
../bin/sort_mpi6
 
It gives this output:
 
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open
shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open
shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open
shared object file: No such file or directory
  

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users