Thanks for the info. I was thinking it could be some wrong interpretation of the per-socket core count.
I will try the newer library.
> From: "Brice Goglin"
> To: Open MPI Users
> Date: 13.09.2011 13:28
> Subject: Re: [OMPI users] #cpus/socket
On 13/09/2011 18:59, Peter Kjellström wrote:
> On Tuesday, September 13, 2011 09:07:32 AM nn3003 wrote:
>> Hello !
>> I am running the WRF model on 4x AMD 6172, which is a 12-core CPU. I use Open MPI
>> 1.4.3 and libgomp 4.3.4. I have binaries compiled for shared memory and
>> distributed memory (OpenMP and Open MPI). I use the following command:
>> mpirun -np 4 --cpus-per-proc 6 --report-bindings --bysocket wrf.exe
>> It works OK, and in top I see there are 4 wrf.exe processes, each with 6 threads on
>> cpus 0-5, 12-17, 24-29, and 36-41. However, if I want to run with more cpus per process, e.g.
>> mpirun -np 4 --cpus-per-proc 12 --report-bindings --bysocket wrf.exe
>> I get the error:
>> Your job has requested more cpus per process(rank) than there
>> are cpus in a socket:
>> Cpus/rank: 8
>> #cpus/socket: 6
>> Why is that ? There are 12 cores per socket in AMD 6172.
> In reality a 12-core Magny-Cours is two 6-core dies on one socket. I'm guessing
> that the topology code sees your 4x 12 cores as 8x 6 cores.
plpa-info reports 4 sockets of 6 cores each:
Number of processor sockets: 4
Number of processors online: 48
Number of processors offline: 0 (no topology information available)
Socket 0 (ID 0): 6 cores (max core ID: 5)
Socket 1 (ID 1): 6 cores (max core ID: 5)
Socket 2 (ID 2): 6 cores (max core ID: 5)
Socket 3 (ID 3): 6 cores (max core ID: 5)
This should be fixed in Open MPI 1.5.2+, which uses hwloc for topology detection.
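As a quick way to check what the hwloc-based topology detection reports (a sketch, assuming hwloc's lstopo tool is installed; output will depend on your machine):

```shell
# Print the detected topology as text. On a 4x Magny-Cours node this
# should show 4 sockets, each containing two 6-core NUMA nodes,
# rather than the 8 sockets of 6 cores that PLPA-based code may infer.
lstopo --of console

# Then the original binding request can be retried under 1.5.2+:
mpirun -np 4 --cpus-per-proc 12 --report-bindings --bysocket wrf.exe
```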