Open MPI Development Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-09-05 21:04:51


Greg: sorry for the delay in replying...

I am not the authority on this stuff; can George / Brian / Terry /
Brad / Gleb reply on this issue?

Thanks.

On Aug 28, 2007, at 12:57 PM, Greg Watson wrote:

>> Note that this is *NOT* well tested. There is work going on right
>> now to make the OMPI layer able to support MPI_THREAD_MULTIPLE
>> (support was designed in from the beginning, but we have never done
>> any kind of comprehensive testing/stressing of multi-thread support,
>> so it is pretty much guaranteed not to work), but that work is
>> occurring on the trunk (i.e., what will eventually become v1.3) --
>> not on the v1.2 branch.
>>
>>> The interfaces I'm calling are:
>>>
>>> opal_event_loop()
>>
>> Brian or George will have to answer about that one...
>>
>>> opal_path_findv()
>>
>> This guy should be multi-thread safe (disclaimer: I haven't tested
>> it myself); it doesn't rely on any global state.
>>
>>> orte_init()
>>> orte_ns.create_process_name()
>>> orte_iof.iof_subscribe()
>>> orte_iof.iof_unsubscribe()
>>> orte_schema.get_job_segment_name()
>>> orte_gpr.get()
>>> orte_dss.get()
>>> orte_rml.send_buffer()
>>> orte_rmgr.spawn_job()
>>> orte_pls.terminate_job()
>>> orte_rds.query()
>>> orte_smr.job_stage_gate_subscribe()
>>> orte_rmgr.get_vpid_range()
>>
>> Note that all of ORTE is *NOT* thread safe, nor is it planned to be
>> (it just seemed way more trouble than it was worth). You need to
>> serialize access to it.
>
> Does that mean just calling OPAL_THREAD_LOCK() and
> OPAL_THREAD_UNLOCK() around each?

-- 
Jeff Squyres
Cisco Systems