Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] users Digest, Vol 1674, Issue 1
From: lyb (linyb79_at_[hidden])
Date: 2010-09-06 01:40:51


Thanks for your answer, but when I test the same code with MPICH2 it
doesn't show this behavior. Why is that?
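
[Editor's note: for anyone who wants to reproduce this, below is a
self-contained version of the two snippets quoted further down. It is a
sketch assembled from the fragments in the original mail, not the exact
test code: it assumes the port string printed by the server is passed to
the client on its command line, and error handling is omitted.]

/* server.c -- opens a port and blocks in MPI_Comm_accept */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);
    printf("port: %s\n", port);    /* hand this string to the client */
    fflush(stdout);

    /* one core sits at 100% here while waiting for the client */
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}

/* client.c -- connects to the port name given as argv[1] */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm server;

    MPI_Init(&argc, &argv);

    /* likewise occupies one core until the connection is established */
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);

    MPI_Comm_disconnect(&server);
    MPI_Finalize();
    return 0;
}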
> Message: 9
> Date: Wed, 1 Sep 2010 20:14:44 -0600
> From: Ralph Castain <rhc_at_[hidden]>
> Subject: Re: [OMPI users] MPI_Comm_accept and MPI_Comm_connect both
> use 100% one cpu core. Is it a bug?
> To: Open MPI Users <users_at_[hidden]>
> Message-ID: <4E4BC153-B4E3-43E2-B980-904DABE78B4E_at_[hidden]>
> Content-Type: text/plain; charset="us-ascii"
>
> It's not a bug - that is normal behavior. The processes are polling hard to establish the connections as quickly as possible.
>
>
> On Sep 1, 2010, at 7:24 PM, lyb wrote:
>
>
>> > Hi, All,
>> >
>> > I tested two sample applications on Windows 2003 Server: one uses MPI_Comm_accept and the other uses MPI_Comm_connect.
>> > When the application reaches MPI_Comm_accept or MPI_Comm_connect, it uses 100% of one CPU core. Is this a bug or something wrong?
>> >
>> > I tested with three versions: 1.4 (stable), 1.5 (prerelease), and the trunk at r23706.
>> >
>> > ...
>> > MPI_Open_port(MPI_INFO_NULL, port);
>> > MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
>> > ...
>> >
>> > ...
>> > MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
>> > ...
>> >
>> > thanks a lot.
>> >
>> > lyb
>> >
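
[Editor's note: to illustrate the trade-off Ralph describes, the sketch
below contrasts a hard-polling wait with a yielding one. It is NOT Open
MPI's actual progress engine; connection_ready() and the polls counter
are hypothetical stand-ins for the real readiness check. It only shows
why the two strategies account CPU time so differently.]

/* hard polling vs. yielding: a conceptual sketch, not Open MPI internals */
#include <stdbool.h>
#include <stdio.h>
#include <sched.h>

/* hypothetical stand-in for the real progress/readiness check */
static long polls = 0;
static bool connection_ready(void)
{
    return ++polls > 2000000;    /* pretend the peer shows up eventually */
}

/* minimal connection latency, but one core pegged at 100% */
static void wait_hard(void)
{
    while (!connection_ready())
        ;                        /* re-poll immediately */
}

/* still polls, but cedes the core between checks */
static void wait_yielding(void)
{
    while (!connection_ready())
        sched_yield();           /* let other runnable work use the CPU */
}

int main(void)
{
    wait_hard();                 /* watch CPU usage: ~100% of one core */
    polls = 0;
    wait_yielding();             /* CPU time now shared with other work */
    puts("done");
    return 0;
}

If memory serves, Open MPI exposes this trade-off as the
mpi_yield_when_idle MCA parameter (e.g. mpirun --mca mpi_yield_when_idle 1);
the process still polls, so this lowers contention with other work rather
than bringing CPU usage down to zero.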