Open MPI User's Mailing List Archives

Subject: [OMPI users] Datatype.Vector in mpijava in openmpi-1.9a1r27380
From: Siegmar Gross (Siegmar.Gross_at_[hidden])
Date: 2012-10-10 05:39:16


Hi,

I have built openmpi-1.9a1r27380 with Java support and am trying some
small programs. When I try to Send/Recv the columns of a matrix, I
don't get the expected results. For the output below I used
"offset = 0" instead of "offset = i" in MPI.COMM_WORLD.Send, so that
all processes should have received the first column.
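Without MPI involved, the index arithmetic that a strided vector
(count = P, blocklength = 1, stride = Q) is supposed to perform can be
checked in plain Java. This is only a sketch of the flat, row-major
indices the type should select (the class and method names here are
made up for illustration, not part of mpiJava):

```java
// Index arithmetic behind a strided vector: P blocks of length 1,
// separated by stride Q, starting at a given offset. For a row-major
// P x Q matrix this selects column `offset`.
public class StridedIndices {
    static int[] columnIndices(int P, int Q, int offset) {
        int[] idx = new int[P];
        for (int k = 0; k < P; ++k) {
            idx[k] = offset + k * Q;   // one element per matrix row
        }
        return idx;
    }

    public static void main(String[] args) {
        // With P = 4, Q = 6 and offset = 0, every receiver should get
        // the first column: values 1.00, 7.00, 13.00, 19.00.
        StringBuilder sb = new StringBuilder();
        for (int k : columnIndices(4, 6, 0)) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(k);
        }
        System.out.println(sb);   // prints: 0 6 12 18
    }
}
```

With those indices the expected column of process 1 (offset = 0) would
be 1.00, 7.00, 13.00, 19.00, which does not match the output I get.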

tyr java 115 mpijavac ColumnSendRecvMain.java
tyr java 116 mpiexec -np 7 java ColumnSendRecvMain

matrix:

      1.00 2.00 3.00 4.00 5.00 6.00
      7.00 8.00 9.00 10.00 11.00 12.00
     13.00 14.00 15.00 16.00 17.00 18.00
     19.00 20.00 21.00 22.00 23.00 24.00

Column of process 5

Column of process 1

      0.00 3.00 7.00 0.00
      0.00 3.00 7.00 0.00
...

I use the following program.

import mpi.*;

public class ColumnSendRecvMain
{
  static final int P = 4; /* # of rows */
  static final int Q = 6; /* # of columns */
  static final int NUM_ELEM_PER_LINE = 6; /* to print a vector */

  public static void main (String args[]) throws MPIException
  {
    int ntasks, /* number of parallel tasks */
             mytid, /* my task id */
             i, j, /* loop variables */
             tmp; /* temporary value */
    double matrix[][],
             column[];
    Datatype column_t; /* strided vector */

    MPI.Init (args);
    matrix = new double[P][Q];
    column = new double[P];
    mytid = MPI.COMM_WORLD.Rank ();
    ntasks = MPI.COMM_WORLD.Size ();
    /* check that we have the correct number of processes in our
     * universe
     */
    if (mytid == 0)
    {
      if (ntasks != (Q + 1))
      {
        System.err.println ("\n\nI need exactly " + (Q + 1) +
                            " processes.\n\n" +
                            "Usage:\n" +
                            " mpiexec -np " + (Q + 1) +
                            " java <program name>\n");
      }
    }
    if (ntasks != (Q + 1))
    {
      MPI.Finalize ();
      System.exit (0);
    }
    /* Build the new type for a strided vector. */
    column_t = Datatype.Vector (P, 1, Q, MPI.DOUBLE);
    column_t.Commit ();
    if (mytid == 0)
    {
      tmp = 1;
      for (i = 0; i < P; ++i) /* initialize matrix */
      {
        for (j = 0; j < Q; ++j)
        {
          matrix[i][j] = tmp++;
        }
      }
      System.out.println ("\nmatrix:\n"); /* print matrix */
      for (i = 0; i < P; ++i)
      {
        for (j = 0; j < Q; ++j)
        {
          System.out.printf ("%10.2f", matrix[i][j]);
        }
        System.out.println ();
      }
      System.out.println ();
    }
    if (mytid == 0)
    {
      /* send one column to each process */
      for (i = 0; i < Q; ++i)
      {
        MPI.COMM_WORLD.Send (matrix, i, 1, column_t, i + 1, 0);
      }
    }
    else
    {
      MPI.COMM_WORLD.Recv (column, 0, P, MPI.DOUBLE, 0, 0);
      /* Each process prints its column. The output will probably
       * intermingle on the screen so that you must use
       * "-output-filename" in Open MPI.
       */
      System.out.println ("\nColumn of process " + mytid + "\n");
      for (i = 0; i < P; ++i)
      {
        if (((i + 1) % NUM_ELEM_PER_LINE) == 0)
        {
          System.out.printf ("%10.2f\n", column[i]);
        }
        else
        {
          System.out.printf ("%10.2f", column[i]);
        }
      }
      System.out.println ();
    }
    column_t.finalize ();
    MPI.Finalize();
  }
}

In my opinion Datatype.Vector doesn't work as expected. mpiJava doesn't
support anything similar to MPI_Type_create_resized, so how can I use
column_t in a scatter operation? Will scatter automatically start with
the next element, and not with the element following the extent of
column_t? Is the wrong output of my program caused by a bug in the
mpiJava implementation, so that I must wait for a fix, or am I making a
mistake in my program? Thank you very much in advance for any help.
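For what it's worth, the extent problem I am worried about can be shown
with plain index arithmetic, assuming the usual MPI definition of a
vector type's extent ((count - 1) * stride + 1 elements from first to
last selected element). The class here is only an illustration, not
mpiJava code:

```java
// Why a scatter with an unresized vector type goes wrong: a scatter
// advances the send buffer by one type extent per rank, so rank r
// would start at r * extent instead of at element r (column r),
// unless the type's extent is resized to one element.
public class VectorExtent {
    static int extent(int count, int stride) {
        return (count - 1) * stride + 1;   // elements spanned by the type
    }

    static int scatterStart(int rank, int count, int stride) {
        return rank * extent(count, stride);
    }

    public static void main(String[] args) {
        // P = 4 rows, stride Q = 6: the extent is 19 elements, so
        // rank 1 would start at flat element 19 instead of element 1
        // (the start of the second column).
        System.out.println(extent(4, 6));          // prints: 19
        System.out.println(scatterStart(1, 4, 6)); // prints: 19
    }
}
```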

Kind regards

Siegmar