
Hardware Locality Users' Mailing List Archives


Subject: [hwloc-users] Open-mpi + hwloc ...
From: Solibakke Per Bjarte (Per.B.Solibakke_at_[hidden])
Date: 2013-06-21 06:04:50


Hello,

I have been using Open MPI for several years on 8-16 CPU/core machines. I now want to extend this usage to graphics-card devices (NVIDIA cards).

My current Open MPI setup on x CPUs works well (Linux/Ubuntu):

The CPU-only installation:

1) The makefile looks like this:

****

CC = mpic++

SDIR = ./
IMPI = /usr/lib/openmpi/include
LMPI = /usr/lib/openmpi/lib
ISCL = $(HOME)/applik-libscl/libscl/gpp
LSCL = $(HOME)/applik-libscl/libscl/gpp

IDIRS = -I. -I$(SDIR) -I$(IMPI) -I$(ISCL)
LDIRS = -L$(LMPI) -L$(LSCL)

CFLAGS = -O -Wall -c $(IDIRS)
LFLAGS = $(LDIRS) -lscl -lm

hello : hello.o
	$(CC) -o hello hello.o $(LFLAGS)

hello.o : $(SDIR)/hello.cpp $(HEADERS)
	$(CC) $(CFLAGS) $(SDIR)/hello.cpp

clean :
	rm -f *.o core core.*

veryclean :
	rm -f *.o core core.*
	rm -f hello

*****

2) and I compile and then execute it with this sh-file:

*****

echo "localhost cpu=24" > OpenMPIhosts

test -f hello.err && mv -f hello.err hello.err.bak

test -f hello.out && mv -f hello.out hello.out.bak

make -f makefile.mpi.OpenMPI_1.4 >hello.out 2>&1 && \

  mpirun --hostfile OpenMPIhosts ${PWD}/hello >>hello.out 2>hello.err

RC=$?

case $RC in

  0) exit 0 ;;

  esac

exit 1;
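
For reference, since Open MPI 1.7.x carries hwloc internally, I assume the mpirun line could also ask for hwloc-based binding and a binding report, roughly like this (untested sketch, 1.7.x option syntax assumed):

# Untested sketch: bind one rank per core via Open MPI's internal hwloc
# support and print the resulting bindings to stderr.
mpirun --hostfile OpenMPIhosts --report-bindings --bind-to core \
  ${PWD}/hello >>hello.out 2>hello.err

# lstopo, from a standalone hwloc installation, shows the topology hwloc
# sees, including PCI/GPU devices when hwloc was built with I/O support.
lstopo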

I now have some questions:

Can this parallel program (hello) also be extended to use graphics processor cards (i.e. NVIDIA cards) via the hwloc that is built into the Open MPI 1.7.1 installation?

If yes:

Are any changes needed in the makefiles? In the execution scripts? In the program files?

Suggestions for implementations are appreciated!
The graphics-card devices should serve as extensions of a machine's CPUs.
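
My understanding (please correct me if I am wrong) is that hwloc only discovers and reports the GPUs as I/O objects; the computation itself would still need CUDA or OpenCL code, and the program files would have to change accordingly. As a minimal, untested sketch of the discovery part, assuming hwloc 1.x headers from a standalone hwloc installation built with CUDA/NVML support, and -lhwloc added to LFLAGS in the makefile:

// Untested sketch, hwloc 1.x API assumed (as bundled with Open MPI 1.7.x).
// Lists the GPU/coprocessor OS devices (e.g. "cuda0") that hwloc can see.
#include <hwloc.h>
#include <cstdio>

int main()
{
    hwloc_topology_t topology;
    hwloc_topology_init(&topology);

    // I/O device discovery is off by default in hwloc 1.x; enable it.
    hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
    hwloc_topology_load(topology);

    // Walk all OS devices and print GPUs and coprocessors together with the
    // type of the nearest non-I/O ancestor (socket/NUMA node) they sit near.
    hwloc_obj_t obj = NULL;
    while ((obj = hwloc_get_next_osdev(topology, obj)) != NULL) {
        if (obj->attr->osdev.type == HWLOC_OBJ_OSDEV_GPU ||
            obj->attr->osdev.type == HWLOC_OBJ_OSDEV_COPROC) {
            hwloc_obj_t near = hwloc_get_non_io_ancestor_obj(topology, obj);
            printf("found %s near %s\n", obj->name,
                   near ? hwloc_obj_type_string(near->type) : "unknown");
        }
    }

    hwloc_topology_destroy(topology);
    return 0;
}

Is this roughly the right direction, or does the hwloc inside Open MPI 1.7.1 expose more than this?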

Regards
PBSolibakke
Professor