
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] [Fwd: multi-threaded test]
From: Ralph Castain (rhc_at_[hidden])
Date: 2011-03-10 18:59:22


Can't speak to the MPI layer, but you definitely cannot hardwire thread support to "off" for ORTE.

On Mar 10, 2011, at 10:57 AM, George Bosilca wrote:

>
> On Mar 10, 2011, at 11:23 , Eugene Loh wrote:
>
>> Any comments on this?
>
> Good luck?
>
> george.
>
>
>> We wanted to clean up MPI_THREAD_MULTIPLE support in the trunk and port these changes back to 1.5.x, but it's unclear to me what our expectations should be about any MPI_THREAD_MULTIPLE test succeeding. How do we assess (test) our changes? Or, should we just hardwire thread support to be off, as we have done with progress threads?
>>
>> -------- Original Message --------
>> Subject: [OMPI devel] multi-threaded test
>> Date: Tue, 08 Mar 2011 11:24:20 -0800
>> From: Eugene Loh <eugene.loh_at_[hidden]>
>> To: Open MPI Developers <devel_at_[hidden]>
>>
>> I've been assigned CMR 2728, which is to apply some thread-support
>> changes to 1.5.x. The trac ticket has amusing language about "needs
>> testing". I'm not sure what that means. We rather consistently say
>> that we don't promise anything with regards to true thread support. We
>> specifically say certain BTLs are off limits and we say things are
>> poorly tested and can be expected to break. Given all that, what does
>> it mean to test thread support in OMPI?
>>
>> One option, specifically in the context of this CMR, is to test only
>> configuration options and so on. I've done this.
>>
>> Another possibility is to confirm that simple run-time tests of
>> multi-threaded message passing succeed. I'm having trouble with this.
>>
>> Attached is a simple test. It passes over sm but fails over TCP. (One or
>> both of the initial messages are not received.)
>>
>> How high should I set my sights on this?
>>
>>
>> #include <stdio.h>
>> #include <omp.h>
>> #include <mpi.h>
>> #include <string.h> /* memset */
>>
>>
>> #define N 10000
>> int main(int argc, char **argv) {
>>     int np, me, buf[2][N], provided;
>>
>>     /* init some stuff */
>>     MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
>>     MPI_Comm_size(MPI_COMM_WORLD, &np);
>>     MPI_Comm_rank(MPI_COMM_WORLD, &me);
>>     if (provided < MPI_THREAD_MULTIPLE) MPI_Abort(MPI_COMM_WORLD, -1);
>>
>>     /* initialize the buffers */
>>     memset(buf[0], 0, N * sizeof(int));
>>     memset(buf[1], 0, N * sizeof(int));
>>
>>     /* test: each thread runs its own pingpong on a distinct tag */
>> #pragma omp parallel num_threads(2)
>>     {
>>         int id = omp_get_thread_num();
>>         MPI_Status st;
>>         printf("%d %d in parallel region\n", me, id); fflush(stdout);
>>
>>         /* pingpong */
>>         if (me == 0) {
>>             MPI_Send(buf[id], N, MPI_INT, 1, 7+id, MPI_COMM_WORLD);      printf("%d %d sent\n", me, id); fflush(stdout);
>>             MPI_Recv(buf[id], N, MPI_INT, 1, 7+id, MPI_COMM_WORLD, &st); printf("%d %d recd\n", me, id); fflush(stdout);
>>         } else {
>>             MPI_Recv(buf[id], N, MPI_INT, 0, 7+id, MPI_COMM_WORLD, &st); printf("%d %d recd\n", me, id); fflush(stdout);
>>             MPI_Send(buf[id], N, MPI_INT, 0, 7+id, MPI_COMM_WORLD);      printf("%d %d sent\n", me, id); fflush(stdout);
>>         }
>>     }
>>
>>     MPI_Finalize();
>>
>>     return 0;
>> }
>>
>>
>> #!/bin/csh
>>
>> mpicc -xopenmp -m64 -O5 main.c
>>
>> mpirun -np 2 --mca btl self,sm ./a.out
>> mpirun -np 2 --mca btl self,tcp ./a.out
>>
>>
>> _______________________________________________
>> devel mailing list
>> devel_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>
> George Bosilca
> Research Assistant Professor
> Innovative Computing Laboratory
> Department of Electrical Engineering and Computer Science
> University of Tennessee, Knoxville
> http://web.eecs.utk.edu/~bosilca/
>
>