Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] NetBSD OpenMPI - SGE - PETSc - PISM
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-12-17 18:03:30


On Dec 17, 2009, at 5:55 PM, <Kevin.Buckley_at_[hidden]> wrote:

> I am happy to be able to inform you that the problems we were
> seeing would seem to have been arising down at the OpenMPI
> level.

Happy for *them*, at least. ;-)

> If I remove any acknowledgement of IPv6 within the OpenMPI
> code, then both the PETSc examples and PISM application
> have been seen to be running upon my initial 8-processor
> parallel environment when submitted as a Sun Grid Engine
> job.
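
(For anyone reading along in the archives: rather than patching the source by hand, a build-time workaround along these lines may achieve the same effect. This is a sketch, assuming your Open MPI release exposes the IPv6 configure switch; check `./configure --help` for your version before relying on it.)

```shell
# Rebuild Open MPI with IPv6 support compiled out entirely,
# so the interface-inspection code never walks the IPv6 stanzas.
./configure --disable-ipv6 --prefix=/opt/openmpi
make all install
```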

Ok, that's good.

> I guess this means that the PISM and PETSc guys can "stand easy"
> whilst the OpenMPI community needs to follow up on why there's
> an "addr.sa_len=0" creeping through the interface inspection
> code (upon NetBSD at least) when it passes through the various
> IPv6 stanzas.

Ok. We're still somewhat at a loss here, because we don't have any NetBSD systems to test on. :-( We're happy to provide any help that we can, and just like you, we'd love to see this problem resolved -- but NetBSD still isn't on any of our core competency lists. :-(

FWIW, we might want to move this discussion to the devel_at_[hidden] mailing list...

-- 
Jeff Squyres
jsquyres_at_[hidden]