
Subject: Re: [OMPI users] questions to some open problems
From: Ralph Castain (rhc_at_[hidden])
Date: 2012-12-14 15:00:47

Hi Siegmar

On Dec 14, 2012, at 5:54 AM, Siegmar Gross <Siegmar.Gross_at_[hidden]> wrote:

> Hi,
> Some weeks ago (mainly at the beginning of October) I reported
> several problems, and I would be grateful if you could tell me
> whether, and if possible when, somebody will try to solve them.
> 1) I don't get the expected results when I try to send or scatter
> the columns of a matrix in Java. In a homogeneous environment the
> received column values have nothing to do with the original values,
> and in a heterogeneous environment the program breaks with "An error
> occurred in MPI_Comm_dup" and "MPI_ERR_INTERN: internal error".
> I would like to use the Java API.
> 2) I don't get the expected result when I try to scatter an object
> in Java.

Nothing has happened on these yet.
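
For what it's worth, part of the column problem is independent of the bindings: a Java double[n][n] is an array of row arrays, so the elements of one column are not contiguous in memory, and scattering the raw buffer hands each rank row fragments rather than a column. A minimal sketch in plain Java (no MPI calls; the packing helper is my own illustration, not part of the Open MPI Java API) of packing columns into a contiguous buffer before a scatter:

```java
// Sketch: a Java double[n][n] stores rows contiguously, not columns,
// so column data must be packed into a flat buffer before scattering.
public class ColumnPack {
    // Copy matrix columns into a column-major buffer: buf[c*rows + r] = m[r][c]
    static double[] packColumns(double[][] m) {
        int rows = m.length, cols = m[0].length;
        double[] buf = new double[rows * cols];
        for (int c = 0; c < cols; c++)
            for (int r = 0; r < rows; r++)
                buf[c * rows + r] = m[r][c];
        return buf;
    }

    public static void main(String[] args) {
        double[][] m = { {1, 2}, {3, 4} };   // rows: (1,2) and (3,4)
        double[] packed = packColumns(m);    // columns: (1,3) then (2,4)
        System.out.println(java.util.Arrays.toString(packed));
    }
}
```

Each rank would then receive `rows` consecutive doubles, i.e. one complete column.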

> 3) When I use a "rankfile", I still get only a message that all
> nodes are already filled up, and nothing else happens. I would
> like to use a rankfile. You filed a bug fix for it.

I believe rankfile was fixed, at least on the trunk; I'm not sure whether the fix was moved to 1.7. I assume that's the release you are talking about?
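
For reference, this is the kind of rankfile I'd expect to work once the fix is in a release (hostnames and slot numbers here are placeholders):

```
rank 0=node01 slot=0
rank 1=node01 slot=1
rank 2=node02 slot=0-1
```

launched with something like `mpirun -rf myrankfile -np 3 ./a.out`.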

> 4) I would like "-cpus-per-proc", "-npersocket", etc. to apply
> per set of machines/applications, and not globally to all
> machines/applications, when I specify several colon-separated sets
> of machines or applications on the command line. You told me that
> it could be done.
> 5) By the way, it seems that the option "-cpus-per-proc" is no
> longer supported in openmpi-1.7 and openmpi-1.9. How can I bind a
> multi-threaded process to more than one core in these versions?

I'm afraid I haven't gotten around to working on cpus-per-proc, though I believe npersocket was fixed.
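
On the multi-threaded binding question: the mapping options are being reworked around `--map-by`, and the intent is that a processing-element modifier will cover the cpus-per-proc case. Assuming that `PE=n` modifier (not guaranteed to be in any given 1.7 build yet), it would look something like:

```
# map one process per socket, give each process 4 cores, bind to cores
mpirun --map-by socket:PE=4 --bind-to core -np 2 ./threaded_app
```

Until that lands, there may be no direct equivalent to -cpus-per-proc on those branches.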

> I can provide my small programs once more if you need them. Thank
> you very much in advance for any answer.
> Kind regards
> Siegmar