This page is part of a frozen web archive of this mailing list; no new mails have been added to it since July of 2016.
Testing v1.7 with oshmem, I did have a few problems:
Solaris MPI_Init failures that I have yet to triage
However, only the second of these four is oshmem-related.
It is worth mentioning that I did have success building on 44 distinct
On Sat, Feb 8, 2014 at 10:09 AM, Ralph Castain <rhc_at_[hidden]> wrote:
> The OSHMEM update is now in the 1.7.5 tarball - I would appreciate it if
> people could exercise the tarball to ensure nothing broke. Note that shmem
> examples are executing, but shmemrun is hanging instead of exiting.
> Mellanox is looking into the problem.
> For now, I just want to verify that MPI operations remain stable.
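For anyone wanting to exercise the tarball the same way, here is a minimal sketch. The tarball filename, install prefix, process count, and example source name are assumptions for illustration, not taken from this thread; the `shmemcc`/`shmemrun` wrapper names are the oshmem-provided ones:

```shell
# Hedged sketch: build the 1.7.5 tarball and run one of the bundled
# shmem examples under shmemrun. Paths and names are illustrative.
tar xjf openmpi-1.7.5.tar.bz2
cd openmpi-1.7.5
./configure --prefix=$HOME/ompi-1.7.5
make -j4 all install
export PATH=$HOME/ompi-1.7.5/bin:$PATH

# Compile and launch a shmem example. Per the report above, the
# example itself executes, but shmemrun hangs instead of exiting.
shmemcc -o hello_oshmem examples/hello_oshmem_c.c
shmemrun -np 2 ./hello_oshmem
```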
> On Feb 7, 2014, at 2:09 PM, Paul Hargrove <phhargrove_at_[hidden]> wrote:
> I'll try to test tonight's v1.7 tarball for:
> + ia64 atomics (#4174)
> + bad getpwuid (#4164)
> + opal_path_nfs/EPERM (#4125)
> + torque smp (#4227)
> All but torque are fully-automated tests and I need only check my email
> for the results.
> The torque one will require manual job submission.
> On Fri, Feb 7, 2014 at 1:55 PM, Ralph Castain <rhc_at_[hidden]> wrote:
>> Hi folks
>> As you may have noticed, I've been working my way thru the CMR backlog on
>> 1.7.5. A large percentage of them were minor fixes (valgrind warning
>> suppressions, error message typos, etc.), so those went in the first round.
>> Today's round contains more "meaty" things, but I still consider them
>> fairly low risk as the code coverage impacted is contained.
>> I'm going to let this run thru tonight's MTT - if things look okay
>> tomorrow, I will roll the OSHMEM cmr into 1.7.5 over the weekend. This is
>> quite likely to destabilize the branch, so I expect to see breakage in the
>> resulting MTT reports. We'll deal with it as we go.
>> Beyond that, there are still about a dozen CMRs in the system awaiting
>> review. Jeff has the majority, followed by Nathan. If folks could please
>> review them early next week, I would appreciate it.
>> devel mailing list
> Paul H. Hargrove PHHargrove_at_[hidden]
> Future Technologies Group
> Computer and Data Sciences Department Tel: +1-510-495-2352
> Lawrence Berkeley National Laboratory Fax: +1-510-486-6900