hwloc can only tell you where the CPUs and devices are, and place processes on the right CPUs. hwloc isn't going to convert your parallel program into a GPU program. If you want to use NVIDIA GPUs, you have to rewrite your program using CUDA, OpenCL, or a high-level heterogeneous language.
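For illustration only, here is a minimal, hypothetical sketch (not your hello program) of what such a rewrite could look like: each MPI rank uses hwloc to bind itself near a GPU, then offloads a trivial kernel with CUDA. It assumes hwloc's hwloc/cudart.h helper, the CUDA runtime, and two GPUs per node, and would be compiled with nvcc:

#include <mpi.h>
#include <hwloc.h>
#include <hwloc/cudart.h>
#include <cuda_runtime.h>
#include <cstdio>

/* Hypothetical kernel: scale an array in place on the GPU. */
__global__ void scale(float *v, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= f;
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int dev = rank % 2;  /* assumption: 2 GPUs per node */

    /* hwloc's role: find the CPUs close to that GPU and bind to them. */
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);
    hwloc_cpuset_t set = hwloc_bitmap_alloc();
    if (hwloc_cudart_get_device_cpuset(topo, dev, set) == 0)
        hwloc_set_cpubind(topo, set, HWLOC_CPUBIND_PROCESS);
    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topo);

    /* CUDA's role: the actual computation runs on the GPU. */
    cudaSetDevice(dev);
    const int n = 1 << 20;
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaDeviceSynchronize();
    cudaFree(d);

    printf("rank %d used GPU %d\n", rank, dev);
    MPI_Finalize();
    return 0;
}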

On 21/06/2013 12:04, Solibakke Per Bjarte wrote:

I have been using Open MPI for several years now on 8-16 CPU/core machines, and I want to extend the usage to graphics card devices (NVIDIA cards).

The Open MPI implementation works well for me on x CPUs (Linux/Ubuntu):


The CPU installation:


1) The makefile looks like this:



CC       = mpic++
SDIR     = ./
IMPI     = /usr/lib/openmpi/include
LMPI     = /usr/lib/openmpi/lib
ISCL     = $(HOME)/applik-libscl/libscl/gpp
LSCL     = $(HOME)/applik-libscl/libscl/gpp
IDIRS    = -I. -I$(SDIR) -I$(IMPI) -I$(ISCL)
LDIRS    = -L$(LMPI) -L$(LSCL)
CFLAGS   = -O -Wall -c  $(IDIRS)
LFLAGS   = $(LDIRS)  -lscl -lm

hello : hello.o
      $(CC) -o hello hello.o $(LFLAGS)

hello.o : $(SDIR)/hello.cpp $(HEADERS)
      $(CC) $(CFLAGS) $(SDIR)/hello.cpp

clean :
      rm -f *.o core core.*

veryclean :
      rm -f *.o core core.*
      rm -f  hello
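If hello.cpp were rewritten with CUDA (say as hello.cu), the makefile above would also need the NVIDIA compiler and the CUDA runtime library. A hypothetical fragment, assuming CUDA is installed under /usr/local/cuda (paths and flags are guesses, and recipe lines must be tab-indented):

NVCC     = nvcc
ICUDA    = /usr/local/cuda/include
LCUDA    = /usr/local/cuda/lib64

hello : hello.o
      $(CC) -o hello hello.o $(LFLAGS) -L$(LCUDA) -lcudart

hello.o : $(SDIR)/hello.cu $(HEADERS)
      $(NVCC) -O2 -c -I$(ICUDA) $(IDIRS) $(SDIR)/hello.cu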




2) and I compile and execute in one step with this shell script:



echo "localhost cpu=24" > OpenMPIhosts

test -f hello.err  && mv -f hello.err  hello.err.bak
test -f hello.out  && mv -f hello.out  hello.out.bak

make -f makefile.mpi.OpenMPI_1.4 >hello.out 2>&1 && \
  mpirun --hostfile OpenMPIhosts ${PWD}/hello >>hello.out 2>hello.err




RC=$?

case $RC in
  0) exit 0 ;;
  *) exit 1 ;;
esac
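For process placement, Open MPI 1.7 can also bind each rank to a core directly from the run line, using its internal hwloc. A sketch of the mpirun invocation above, using the option names documented for the 1.7 series:

mpirun --hostfile OpenMPIhosts --bind-to core --report-bindings ${PWD}/hello >>hello.out 2>hello.err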



I now have some questions:


Can this parallel program (hello) be extended to also use graphics processor cards (i.e. NVIDIA cards) through the hwloc that is internal to the Open MPI 1.7.1 installation?


If yes:


Are any changes needed in the makefile? The execution script? The program files?


Suggestions for implementations are appreciated!

The graphics card devices should act as extensions of a machine's CPUs.




