Open MPI User's Mailing List Archives

From: Brock Palen (brockp_at_[hidden])
Date: 2006-12-11 14:34:59


Yes, it is a PPC-based system. The machines are dual G5s with 1 GB of
RAM. I am only running one thread per CPU (not over-allocating). It
is not a maximally sized run; when running I see about 500 MB free on
the nodes. Each thread uses ~110 MB.
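
For what it's worth, here is a rough tally of those figures (my own
back-of-the-envelope sketch, not measurements beyond what is quoted
above; all values approximate):

    # Rough per-node memory tally from the figures above.
    node_ram_mb = 1024      # dual G5 node with 1 GB of RAM
    procs_per_node = 2      # one thread per CPU, two CPUs per node
    mb_per_proc = 110       # observed ~110 MB per thread
    app_mb = procs_per_node * mb_per_proc
    print(app_mb)                  # ~220 MB used by the run
    print(node_ram_mb - app_mb)    # ~800 MB notionally left; ~500 MB reported
                                   # free once OS, buffers, and GM overhead count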

I could not answer whether or not OS X and the PPC 970FX have an MMU,
though I'm sure they do; I don't know why they wouldn't, but that's
speculation on my part. Also, I have no idea what the memory window
question refers to; I will look it up on Google.

aon075:~ root# dmesg | grep GM
GM: Board at bus 6 slot 3 now attaching
GM: driver version 2.0.21_MacOSX_rc20050429075134PDT gallatin_at_g4:/tmp/gm-2.0.21_MacOSX Fri Apr 29 11:03:48 EDT 2005
GM: gm_register_memory will be able to lock 96000 pages (375 MBytes)
GM: Unit 0 IP interface attach ok
GM: Unit 0 Loaded ok
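
As a sanity check on the gm_register_memory line (a minimal sketch of
my own, assuming the standard 4 KiB page size on OS X/PPC), 96000
lockable pages works out to exactly the 375 MBytes the driver reports:

    # Hypothetical check; 4 KiB page size is an assumption on my part.
    PAGE_SIZE = 4096          # bytes per page
    lockable_pages = 96000    # from the dmesg line above
    lockable_bytes = lockable_pages * PAGE_SIZE
    print(lockable_bytes / (1024 * 1024))   # -> 375.0 MBytes, matching GM's report

So GM can pin at most ~375 MB of each node's 1 GB for registered
memory; presumably that limit is what the error(8) from
gm_register_memory in mpool_gm_module.c is running into, but that is
my guess.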

Brock Palen
Center for Advanced Computing
brockp_at_[hidden]
(734)936-1985

On Dec 11, 2006, at 2:20 PM, Reese Faucette wrote:

>> I have tried moving around machines that the run is done on to the
>> same result in multiple places.
>> The error is:
>>
>> [aon049.engin.umich.edu:21866] [mpool_gm_module.c:100] error(8)
>> registering gm memory
>
> This is on a PPC-based OSX system? How many MPI processes per node
> are you starting? And I assume this is a pretty maximally sized HPL
> run for the nodes' memory? And this system has an IOMMU, yes? Do you
> know how big its memory window is?
>
> Could you send me the output of "dmesg | grep GM" after loading GM?
> We're looking for a line of the form:
> GM: gm_register_memory will be able to lock XXX pages (YYY MBytes)
>
> thanks,
> -r