Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] sharing memory between processes
From: jody (jody.xha_at_[hidden])
Date: 2009-04-28 10:46:30

Hi Barnabas

As far as I know, Open MPI is not a shared-memory system.

Using Open MPI to attack your problem on N processors, I would proceed
as follows:
- processor 0 reads the table and then splits it into N parts
- processor 0 sends Table_i to processor i (for all i > 0) using MPI functions
- if processor k needs info from Table_j, processor k sends a request
to processor j
- processor j sends requested information to processor k

Of course you'd need some scheme to find out on which Table_i a particular
entry can be found.

This solution also works if your processors sit on several hosts.


On Tue, Apr 28, 2009 at 3:28 PM, Barnabas Debreczeni <keo_at_[hidden]> wrote:
> Hi!
> I am new to this list and to parallel programming in general. I am
> writing a trading simulator for the forex market and I am using
> genetic algorithms to breed trading parameters.
> I am using PGAPack as a GA library, and it uses MPI to parallelize
> optimization runs. This is how I got to Open MPI.
> I am stuck at some point, mainly because of my lack of parallel
> programming knowledge. What I'd like to achieve is (I am doing it the
> serial way right now):
> - Load price data from files, and compute a few tables (right now this
> takes up ~4 GB of memory)
> - Repeat...
> -- Create new offspring in the master process for the GA
> -- Evaluate them in parallel (on 4 local CPUs but maybe more on LAN if
> i need it)
> - Until I get a satisfactory result.
> My problem is, I'd like to share that 2 GB table (computed once at the
> beginning, and is read-only after) between processes so I don't have
> to use up 16 gigs of memory.
> How do you share data between processes locally?
> Later I will need to use other hosts in the calculation too. Will the
> slaves on other hosts need to calculate their own tables and go on
> from there and share them locally, or can I share these tables on the
> master host with them?
> How do you usually solve these kinds of problems?
> Can you point me to some docs or keywords on what I should learn about?
> Thank you very much.
> Regards,
> Barnabas
> _______________________________________________
> users mailing list
> users_at_[hidden]