On Thursday 10 May 2007, Jeff Squyres wrote:
> On May 10, 2007, at 8:08 AM, Peter Kjellstrom wrote:
> > I recently tried ompi on early ConnectX hardware/software.
> > The good news, it works =)
> We've seen some really great 1-switch latency using the early access
> ConnectX hardware. I have a pair of ConnectX's in my MPI development
> cluster at Cisco, but am awaiting various software pieces before I
> can start playing with them.
Yes, I'm impressed too.
> We're also quite excited to add some of the new features of the
> ConnectX hardware (Roland Dreier is working on the verbs interface
> and Mellanox is working on the firmware).
I just switched my testbed from the Mellanox stack to Roland's mlx4 driver.
> I don't see Mellanox's
> presentation from last week's OpenFabrics Sonoma Workshop on the
> openfabrics.org web site that describes the features; I'll ping them
> and ask where it is.
> > However, ompi needs a chunk of options set to recognize the
> > card so I made a small patch (setting it up like old Arbel
> > style hardware).
> Good point; I can't believe we forgot to commit that... Thanks!
> BTW, you copied the MTU from Sinai, not Arbel -- is that what
> you meant?
Just me being confused; I did use 2048 (the Sinai value), but who am I to say
what those figures should finally be set to...
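For context, the openib BTL takes its per-device defaults from an INI-style
parameters file in the Open MPI source tree, and "setting it up like old Arbel
style hardware" amounts to adding a section for the new chip there. A rough
sketch of what such an entry looks like -- the part IDs and tunables below are
illustrative placeholders, not the values that were actually committed:

```ini
# Hypothetical ConnectX (Hermon) entry -- vendor_part_id and the tunables
# are placeholders for illustration, not the committed values.
[Mellanox Hermon]
vendor_id = 0x2c9        # Mellanox IEEE OUI
vendor_part_id = 0       # PCI device ID(s) of the ConnectX HCA (placeholder)
use_eager_rdma = 1
mtu = 2048               # copied from the Sinai entry, as discussed above
```

Multiple part IDs can share one section, which is why getting the right
device-ID list committed upstream matters for the card to be recognized
out of the box.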
> (FWIW: the internal Mellanox code name for ConnectX is Hermon,
> another mountain in Israel, just like Sinai, Arbel, ...etc.).