Open MPI Development Mailing List Archives


From: Tim Prins (tprins_at_[hidden])
Date: 2006-01-14 19:35:14


Graham,

With trunk r8695 I no longer get these errors.

Tim

Quoting Graham E Fagg <fagg_at_[hidden]>:

> Hi Tim
> I get an error, but not quite the same as yours. In my case I get a
> segfault as someone corrupts the memory attached to a communicator (data
> segment). Looks like a possible in-place error. Expect a fix shortly.
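For context, the in-place reduction path in question is the one taken when a
caller passes MPI_IN_PLACE as the send buffer, so input and result share one
buffer. A minimal sketch of that pattern (not the suite's actual test code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, val = 1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* With MPI_IN_PLACE the input and the result share &val; a
         * broken in-place reduction corrupts exactly this path. */
        MPI_Allreduce(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM,
                      MPI_COMM_WORLD);

        printf("rank %d: sum = %d\n", rank, val); /* expect np */
        MPI_Finalize();
        return 0;
    }

Run across 4 local ranks, every rank should print sum = 4.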
>
> G.
>
> On Tue, 10 Jan 2006, Tim Prins wrote:
>
> > Graham,
> >
> > It works properly if I select the basic coll component. Anyway, here is
> > the output you requested. The full output is about 140 MB, so I killed it
> > before it finished...
> >
> > Tim
> >
> > Quoting Graham E Fagg <fagg_at_[hidden]>:
> >
> >> Hi Tim
> >> Nope. Can you rerun with mpirun -np 4 -mca coll_base_verbose 1 <test>
> >> and email me the output?
> >> Thanks
> >> G
> >> On Tue, 10 Jan 2006, Tim Prins wrote:
> >>
> >>> Hi everyone,
> >>>
> >>> I have been playing around with Open MPI, using it as a test bed for
> >>> another project I am working on, and have found that on the Intel test
> >>> suite, ompi fails the MPI_Allreduce_user_c, MPI_Reduce_scatter_user_c,
> >>> and MPI_Reduce_user_c tests (each prints something like "MPITEST
> >>> error (2): i=0, int value=4, expected 1", etc.). Are these known errors?
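For reference, the *_user_c variants exercise reductions with a user-defined
operation registered through MPI_Op_create rather than a builtin like MPI_SUM.
A minimal sketch of that pattern, assuming a simple integer-sum op in place of
whatever operation the Intel suite actually registers:

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical user-defined op: element-wise integer sum, standing
     * in for the operation the *_user_c tests register. */
    static void user_sum(void *in, void *inout, int *len, MPI_Datatype *dt)
    {
        int *a = (int *) in, *b = (int *) inout;
        for (int i = 0; i < *len; i++)
            b[i] += a[i];
        (void) dt; /* unused here; a real op should check the datatype */
    }

    int main(int argc, char **argv)
    {
        int rank, in = 1, out = 0;
        MPI_Op op;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Op_create(user_sum, 1 /* commutative */, &op);
        MPI_Allreduce(&in, &out, 1, MPI_INT, op, MPI_COMM_WORLD);

        printf("rank %d: out = %d\n", rank, out); /* expect np */

        MPI_Op_free(&op);
        MPI_Finalize();
        return 0;
    }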
> >>>
> >>> BTW, this is on an x86_64 Linux box running 4 processes locally,
> >>> running trunk svn revision 8667, with no additional mca parameters set.
> >>>
> >>> Thanks,
> >>>
> >>> Tim
>
> Thanks,
> Graham.
> ----------------------------------------------------------------------
> Dr Graham E. Fagg | Distributed, Parallel and Meta-Computing
> Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI, SNIPE & Open MPI
> Computer Science Dept | Suite 203, 1122 Volunteer Blvd,
> University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
> Email: fagg_at_[hidden] | Phone: +1(865)974-5790 | Fax: +1(865)974-8296
> Broken complex systems are always derived from working simple systems
> ----------------------------------------------------------------------
>