Subject: Re: [OMPI users] openmpi tar.gz for 1.6.1 or 1.6.2
From: Ralph Castain (rhc_at_[hidden])
Date: 2012-07-16 17:46:55


I gather there are two sockets on this node? So the second command line is equivalent to leaving "num-sockets" off entirely?
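
One quick way to check: on Linux the kernel's own view of the sockets should settle it, and hwloc-ls prints the full topology if the standalone hwloc tools happen to be installed (an assumption; they are packaged separately from the hwloc code that Open MPI 1.6 bundles). A minimal sketch:

    # Count the distinct physical socket IDs the kernel reports
    grep "physical id" /proc/cpuinfo | sort -u | wc -l

    # Optional fuller view, if the hwloc utilities are installed
    hwloc-ls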

I haven't tried what you are doing, so it is quite possible this is a bug.
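
If it does turn out to be a bug, a possible interim workaround (a sketch I have not verified on your node) is to drop --num-sockets and pin the four ranks to socket 0 explicitly with a rankfile, using the slot=<socket>:<core> syntax from the mpirun man page. The file name my_ranks is arbitrary, and the core numbers 0-3 assume socket 0 has at least four cores; adjust to the actual topology:

    rank 0=node65.cl.corp.com slot=0:0
    rank 1=node65.cl.corp.com slot=0:1
    rank 2=node65.cl.corp.com slot=0:2
    rank 3=node65.cl.corp.com slot=0:3

Then launch with the rankfile and confirm the placement:

    ./mpirun -np 4 --rankfile my_ranks --report-bindings hostname

--report-bindings should show each rank bound to a core on socket 0, which is effectively what --num-sockets 1 --npersocket 4 was asking for.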

On Jul 16, 2012, at 1:49 PM, Anne M. Hammond wrote:

> Thanks!
>
> Built the latest snapshot. I'm still getting an error when trying to run on
> only one socket (see below). Is there a workaround?
>
> [hammond_at_node65 bin]$ ./mpirun -np 4 --num-sockets 1 --npersocket 4 hostname
> --------------------------------------------------------------------------
> An invalid physical processor ID was returned when attempting to bind
> an MPI process to a unique processor.
>
> This usually means that you requested binding to more processors than
> exist (e.g., trying to bind N MPI processes to M processors, where N >
> M). Double check that you have enough unique processors for all the
> MPI processes that you are launching on this host.
>
> You job will now abort.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun was unable to start the specified application as it encountered an error:
>
> Error name: Fatal
> Node: node65.cl.corp.com
>
> when attempting to start process rank 0.
> --------------------------------------------------------------------------
> 4 total processes failed to start
>
>
> [hammond_at_node65 bin]$ ./mpirun -np 4 --num-sockets 2 --npersocket 4 hostname
> node65.cl.corp.com
> node65.cl.corp.com
> node65.cl.corp.com
> node65.cl.corp.com
> [hammond_at_node65 bin]$
>
>
>
>
> On Jul 16, 2012, at 12:56 PM, Ralph Castain wrote:
>
>> Jeff is at the MPI Forum this week, so his answers will be delayed. Last I heard the release was close, but no specific date has been set.
>>
>>
>> On Jul 16, 2012, at 11:49 AM, Michael E. Thomadakis wrote:
>>
>>> When is the official 1.6.1 (or 1.6.2?) release expected to be available?
>>>
>>> mike
>>>
>>> On 07/16/2012 01:44 PM, Ralph Castain wrote:
>>>> You can get it here:
>>>>
>>>> http://www.open-mpi.org/nightly/v1.6/
>>>>
>>>> On Jul 16, 2012, at 10:22 AM, Anne M. Hammond wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> For benchmarking, we would like to use openmpi with
>>>>> --num-sockets 1
>>>>>
>>>>> This fails in 1.6, but Bug Report #3119 indicates it is changed in
>>>>> 1.6.1.
>>>>>
>>>>> Is 1.6.1 or 1.6.2 available in tar.gz form?
>>>>>
>>>>> Thanks!
>>>>> Anne
>
> Anne M. Hammond - Systems / Network Administration - Tech-X Corp
> hammond_at_txcorp.com 720-974-1840