Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] How to know which task on which node
From: Gijsbert Wiesenekker (gijsbert.wiesenekker_at_[hidden])
Date: 2009-01-19 03:33:12


gaurav gupta wrote:
> Hello,
>
> I want to know which task is running on which node. Is there any
> way to find this out?
> Is there a profiling tool provided with Open MPI to measure the
> time taken in the various steps?
>
> --
> GAURAV GUPTA
> B.Tech III Yr. , Department of Computer Science & Engineering
> IT BHU , Varanasi
> Contacts
> Phone No: +91-99569-49491
>
> e-mail :
> gaurav.gupta_at_[hidden] <mailto:gaurav.gupta_at_[hidden]>
> gaurav.gupta.cse06_at_[hidden] <mailto:gaurav.gupta.cse06_at_[hidden]>
> 1989.gaurav_at_[hidden] <mailto:1989.gaurav_at_[hidden]>
> ------------------------------------------------------------------------
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
Hi Gupta,

I ran into the same problem. In my case I wanted to place the root node
on a specific host for a synchronization step that uses rsync between the
hosts running the processes. Here is some Linux C code that might help
you. It builds an array mpi_host with the hostname of each rank, and an
index array mpi_host_rank that shows which processes are running on the
same node. The BUG, REGISTER_MALLOC and my_printf macros are wrappers for
the C functions assert, malloc and printf (minimal stand-in definitions
are included below so the example compiles). The code assumes name
resolution is the same on all nodes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>   /* strcasecmp */
#include <unistd.h>    /* gethostname */
#include <assert.h>
#include <mpi.h>

/* Minimal stand-ins for the wrapper macros mentioned above; these are an
   assumption, since the original definitions were not posted. BUG aborts if
   its condition is true (and, like assert, is disabled under NDEBUG),
   REGISTER_MALLOC is a checked malloc, and my_printf forwards to printf. */
#define BUG(x) assert(!(x));
#define REGISTER_MALLOC(p, t, n) \
    { (p) = (t *) malloc((size_t)(n) * sizeof(t)); assert((p) != NULL); }
#define my_printf printf

#define LINE_MAX 1024
#define MPI_NPROCS_MAX 256
#define INVALID (-1)

int mpi_nprocs;                    /* number of MPI processes */
int mpi_id;                        /* rank of this process */
int mpi_nhosts;                    /* number of distinct hosts found */
int mpi_root_id;                   /* rank chosen as root */
char *mpi_hosts;                   /* flat buffer: all hostnames, LINE_MAX apart */
char *mpi_host[MPI_NPROCS_MAX];    /* mpi_host[i] = hostname of rank i */
int mpi_host_rank[MPI_NPROCS_MAX]; /* mpi_host_rank[i] = host index of rank i */

int main(int argc, char **argv)
{
    int iproc;
    char hostname[LINE_MAX];

    mpi_nprocs = 1;
    mpi_id = 0;
    mpi_nhosts = 1;
    mpi_root_id = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &mpi_nprocs);
    BUG(mpi_nprocs > MPI_NPROCS_MAX)
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_id);

    BUG(gethostname(hostname, LINE_MAX) != 0)

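    /* Gather every rank's hostname into one flat buffer and set up
       per-rank pointers into it. */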
    REGISTER_MALLOC(mpi_hosts, char, LINE_MAX * mpi_nprocs)
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
        mpi_host[iproc] = mpi_hosts + iproc * LINE_MAX;
    if (mpi_nprocs == 1)
        strcpy(mpi_host[0], hostname);
    else
        MPI_Allgather(hostname, LINE_MAX, MPI_CHAR,
            mpi_hosts, LINE_MAX, MPI_CHAR, MPI_COMM_WORLD);

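    /* All ranks now have the full hostname table; mark each rank as not
       yet assigned to a host. */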
    MPI_Barrier(MPI_COMM_WORLD);
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
        mpi_host_rank[iproc] = INVALID;
    mpi_nhosts = 0;

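    /* Group ranks by hostname: each new hostname gets the next host index,
       and every rank reporting the same name (case-insensitively) shares it. */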
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
    {
        int jproc;

        if (mpi_host_rank[iproc] != INVALID) continue;
        ++mpi_nhosts;
        BUG(mpi_nhosts > mpi_nprocs)
        mpi_host_rank[iproc] = mpi_nhosts - 1;
        for (jproc = iproc + 1; jproc < mpi_nprocs; jproc++)
        {
            if (mpi_host_rank[jproc] != INVALID) continue;
            if (strcasecmp(mpi_host[jproc], mpi_host[iproc]) == 0)
                mpi_host_rank[jproc] = mpi_host_rank[iproc];
        }
    }

    /* Pick the root rank: prefer a rank running on a specific host if one
       is available (here "nodep140", the host the rsync step should use). */
    mpi_root_id = 0;
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
    {
        if (strcasecmp(mpi_host[iproc], "nodep140") == 0)
        {
            mpi_root_id = iproc;
            break;
        }
    }

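    /* Sanity checks before reporting. */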
    BUG(mpi_nprocs < 1)
    BUG(mpi_nhosts < 1)

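    /* Report the rank-to-host mapping and the chosen root. */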
    my_printf("hostname=%s\n", hostname);
    my_printf("mpi_nprocs=%d\n", mpi_nprocs);
    my_printf("mpi_id=%d\n", mpi_id);
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
        my_printf("iproc=%d host=%s\n", iproc, mpi_host[iproc]);
    my_printf("mpi_nhosts=%d\n", mpi_nhosts);
    for (iproc = 0; iproc < mpi_nprocs; iproc++)
        my_printf("iproc=%d host_rank=%d\n", iproc, mpi_host_rank[iproc]);
    my_printf("mpi_root_id=%d host=%s host rank=%d\n",
        mpi_root_id, mpi_host[mpi_root_id], mpi_host_rank[mpi_root_id]);
}
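
Compile with mpicc and launch with mpirun as usual. As an aside, instead of
gethostname() you could also use MPI_Get_processor_name(), which in Open MPI
typically returns the node's hostname; that avoids the direct OS call. A
minimal sketch:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    char name[MPI_MAX_PROCESSOR_NAME];
    int rank, len;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* On Open MPI this usually yields the same string as gethostname(). */
    MPI_Get_processor_name(name, &len);
    printf("rank %d runs on %s\n", rank, name);
    MPI_Finalize();
    return 0;
}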