
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Loading Open MPI from MPJ Express (Java) fails
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2014-03-28 15:56:49


On Mar 26, 2014, at 2:10 PM, Bibrak Qamar <bibrakc_at_[hidden]> wrote:

> 1) By heterogeneous do you mean Derived Datatypes?
> MPJ Express's buffering layer handles this. It flattens the data into a ByteBuffer, so the native device doesn't have to worry about Derived Datatypes (those are handled at the top layers). And an interesting note: intuitively, Java users would use MPI.OBJECT if there is heterogeneous data to be sent (but yes, MPI.OBJECT is a slow case ...)
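
(To illustrate the quoted paragraph: the flattening idea looks roughly like
the hypothetical Java sketch below -- this is not MPJ Express's actual
buffering API, just the shape of the technique.)

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Hypothetical sketch: strided ("derived datatype"-like) data is packed
    // into a contiguous ByteBuffer before it reaches the native device, so
    // the native layer only ever sees contiguous bytes.
    public class FlattenSketch {
        static ByteBuffer flatten(int[] src, int offset, int count, int stride) {
            ByteBuffer buf = ByteBuffer.allocateDirect(count * Integer.BYTES)
                                       .order(ByteOrder.nativeOrder());
            for (int i = 0; i < count; i++) {
                buf.putInt(src[offset + i * stride]);
            }
            buf.flip(); // position 0, limit = bytes written: ready to hand off
            return buf;
        }
    }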

No, I mean where one MPI process has a different data representation than a peer MPI process -- e.g., if one process is running on a little-endian machine and another on a big-endian machine.

This is a pretty uncommon configuration, though.
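
(To make the representation issue concrete: the same integer is laid out
differently in memory under the two byte orders, as this self-contained
Java example shows.)

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class EndianDemo {
        public static void main(String[] args) {
            ByteBuffer le = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
            ByteBuffer be = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
            le.putInt(1); // stored as 01 00 00 00
            be.putInt(1); // stored as 00 00 00 01
            // A peer that reads these 4 bytes with the opposite byte order
            // (and no conversion) sees 16777216 instead of 1.
            System.out.println(le.order() + ": first byte = " + le.get(0));
            System.out.println(be.order() + ": first byte = " + be.get(0));
        }
    }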

> Currently the same goes for user-defined Op functions. Those are handled at the top layers, i.e., using MPJ Express's algorithms rather than native MPI's (but the communication is native).
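
(Roughly what "handled at the top layers" means for an Op: the combine logic
runs in pure Java over flattened buffers, and only the bytes travel through
the native device. The interface below is hypothetical, not MPJ Express's
real one.)

    import java.nio.ByteBuffer;

    // Hypothetical sketch of a Java-layer user-defined reduction operator.
    interface UserOp {
        void apply(ByteBuffer in, ByteBuffer inout, int count);
    }

    class OpSketch {
        // Example user Op: element-wise integer max over flattened data.
        static final UserOp INT_MAX = (in, inout, count) -> {
            for (int i = 0; i < count; i++) {
                int off = i * Integer.BYTES;
                inout.putInt(off, Math.max(in.getInt(off), inout.getInt(off)));
            }
        };
    }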

This would seem to be a lot of duplicate code (i.e., down in MPI and up in the Java bindings). Plus, it might be problematic for things like one-sided operations...?

Is there an advantage to that -- e.g., is the performance better somehow?

> 2) API changes: Do you envision documenting the changes in something like an mpiJava 1.3 spec?

Oscar tells me that we have javadocs.

> 3) New Benchmark Results:
> I did the benchmarking again with various configurations:
>
> i) Open MPI 1.7.4 C
>
> ii) MVAPICH2.2 C
>
> iii) MPJ Express (using Open MPI - with arrays)
>
> iv) Open MPI's Java Bindings (with a large user array -- the unoptimized case)
>
> v) Open MPI's Java Bindings (with arrays, where the size of the user array equals the message size at each data point, to be fair)
>
> vi) MPJ Express (using MVAPICH2.2 - with arrays)
>
> vii) Open MPI's Java Bindings (using MPI.new<Type>Buffer, ByteBuffer)
>
> viii) MPJ Express (using Open MPI - with ByteBuffer; this is from the device layer of MPJ Express and helps show how MPJ Express could perform if, in the future, we add MPI.new<Type>Buffer-like functionality)
>
> ix) MPJ Express (using MVAPICH2.2 - with ByteBuffer) --> currently I don't know why it performs better than Open MPI

This is quite helpful; thanks.

Looks like we really need to implement the optimization of not copying the entire buffer when the send size is smaller than the full size of the buffer.
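
Roughly, the fix could look like this (hypothetical staging code, not the
actual binding internals):

    import java.nio.ByteBuffer;

    public class PartialCopySketch {
        // Copy only the 'count' elements actually being sent into the
        // (direct) staging buffer, not the user's whole array.
        static ByteBuffer stage(int[] userArray, int count, ByteBuffer staging) {
            staging.clear();
            staging.asIntBuffer().put(userArray, 0, count); // O(count) copy
            staging.limit(count * Integer.BYTES); // send only this many bytes
            return staging;
        }
    }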

I don't understand vii), though -- it looks like the bandwidth is quite low somehow. Hmm.
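
(For reference, vii) exercises the bindings' direct-buffer path. A minimal
test of that path would look something like the sketch below; the count,
ranks, and tag are illustrative, not the benchmark's actual parameters.)

    import java.nio.IntBuffer;
    import mpi.*;

    public class DirectBufferPing {
        public static void main(String[] args) throws MPIException {
            MPI.Init(args);
            int rank = MPI.COMM_WORLD.getRank();
            // Direct buffer: natively addressable, so no Java-side copy
            // should be needed before the native send.
            IntBuffer buf = MPI.newIntBuffer(1024);
            if (rank == 0) {
                MPI.COMM_WORLD.send(buf, 1024, MPI.INT, 1, 0);
            } else if (rank == 1) {
                MPI.COMM_WORLD.recv(buf, 1024, MPI.INT, 0, 0);
            }
            MPI.Finalize();
        }
    }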

Also, any idea what causes the MPJ performance degradation between Open MPI and MVAPICH? In the native results, Open MPI is a tiny bit faster than MVAPICH, but in the MPJ results, Open MPI is quite a bit slower than MVAPICH.

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/