Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Core ids not coming properly
From: Ralph Castain (rhc_at_[hidden])
Date: 2013-02-15 14:46:03


Looks to me like you are really saying that taskset didn't do what you expected. With that command line, OMPI didn't do anything to bind your procs; it just launched "taskset", which gives every rank the same six-core mask (0,2,4,6,8,10). Since your program prints only hwloc_bitmap_first() of that mask, every rank reports core 0.
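
If you want OMPI itself to do the binding, something along these lines should work on a 1.4.x install (assuming it was built with processor-affinity support; the option names changed in later releases):

mpirun -n 6 --bind-to-core ./a.out

Adding --report-bindings (if your mpirun supports it) makes mpirun print what each rank was actually bound to.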

On Feb 15, 2013, at 11:34 AM, Kranthi Kumar <kranthipls_at_[hidden]> wrote:

> With Open MPI this is the command I used:
>
> mpirun -n 6 taskset -c 0,2,4,6,8,10 ./a.out
>
> With the Intel library I set the environment variable I_MPI_PIN_MAPPING=6:0 0,1 2,2 4,3 6,4 8,5 10
> and ran with:
>
> mpirun -n 6 ./a.out
> On Fri, Feb 15, 2013 at 10:30 PM, <users-request_at_[hidden]> wrote:
>
> Today's Topics:
>
> 1. Core ids not coming properly (Kranthi Kumar)
> 2. Re: Core ids not coming properly (Brice Goglin)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 15 Feb 2013 22:04:11 +0530
> From: Kranthi Kumar <kranthipls_at_[hidden]>
> Subject: [OMPI users] Core ids not coming properly
> To: users_at_[hidden]
> Message-ID:
> <CAL97QqiVvW+GKBBFPJN_bBovhnUgXKvMg0-NTYpd=1rsVsPt=w_at_[hidden]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello Sir
>
> Below is the code I wrote using hwloc to get the bindings of the processes.
> I tested it on the SDSC Gordon supercomputer, which has Open MPI 1.4.3, and on
> TACC Stampede, which uses Intel's MPI library (IMPI).
> With Open MPI I get core id 0 for every process, while the Intel MPI library
> gives me different core ids. I even tried binding the processes on the command
> line using taskset: Open MPI still reports core id 0 for all the processes,
> whereas IMPI gives me the correct bindings.
> Please look into this.
>
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>   /* getpid() */
> #include "mpi.h"
> #include <hwloc.h>
>
> int main(int argc, char* argv[])
> {
>     int rank, size;
>     hwloc_topology_t topology;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>     hwloc_topology_init(&topology);
>     hwloc_topology_load(topology);
>
>     /* Ask hwloc for the CPU set this process is currently bound to. */
>     hwloc_bitmap_t set = hwloc_bitmap_alloc();
>     int err = hwloc_get_proc_cpubind(topology, getpid(), set,
>                                      HWLOC_CPUBIND_PROCESS);
>     if (err) {
>         printf("Error: cannot get binding\n");
>         exit(1);
>     }
>
>     /* Report only the first PU (OS index) present in the binding mask. */
>     printf("Hello World, I am %d and pid: %d coreid: %d\n",
>            rank, getpid(), hwloc_bitmap_first(set));
>
>     hwloc_bitmap_free(set);
>     hwloc_topology_destroy(topology);
>     MPI_Finalize();
>     return 0;
> }
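>
> (Side note: the printf above reports only hwloc_bitmap_first(), i.e. the lowest
> PU in the mask, so a rank bound to cores 0,2,4,6,8,10 still prints 0. A minimal
> variation, assuming the installed hwloc provides hwloc_bitmap_asprintf(), would
> print the whole mask instead; it replaces the printf in the program above:)
>
>     char *str = NULL;
>     /* Render the full binding mask as a string, not just its first bit. */
>     hwloc_bitmap_asprintf(&str, set);
>     printf("rank %d (pid %d) bound to cpuset %s\n", rank, (int) getpid(), str);
>     free(str);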
> Thank You
> --
> Kranthi
>
> ------------------------------
>
> Message: 2
> Date: Fri, 15 Feb 2013 17:46:25 +0100
> From: Brice Goglin <Brice.Goglin_at_[hidden]>
> Subject: Re: [OMPI users] Core ids not coming properly
> To: Open MPI Users <users_at_[hidden]>
> Message-ID: <511E6661.40608_at_[hidden]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> IntelMPI binds processes by default, while OMPI doesn't. What's your
> mpiexec/mpirun command-line?
>
> Brice
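>
> (For a quick sanity check independent of MPI, you can also look at what a
> running rank is actually bound to from the shell, using the pid that the test
> program prints:
>
> taskset -p <pid>
> grep Cpus_allowed_list /proc/<pid>/status
>
> Both reflect the same affinity mask that the hwloc call queries.)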
>
>
>
> On 15/02/2013 17:34, Kranthi Kumar wrote:
> > [original message quoted in full; identical to Message 1 above, snipped]
>
>
> ------------------------------
>
>
> End of users Digest, Vol 2494, Issue 2
> **************************************
>
>
>
> --
> Kranthi
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users