On 03/05/13 10:47, Ralph Castain wrote:
> We had something similar at one time - I developed it for the
> Roadrunner cluster so you could run MPI tasks on the GPUs. Worked
> well, but eventually fell into disrepair due to lack of use.
OK, interesting! RR was Cell rather than GPU, though, wasn't it?
> In this case, I suspect it will be much easier to do as the Phis
> appear to be a lot more visible to the host than the GPU did on RR.
> Looking at the documentation, the Phis just sit directly on the
> PCIe bus, so they should look just like any other processor,
Yup, they show up in lspci:
[root@barcoo061 ~]# lspci -d 8086:2250
2a:00.0 Co-processor: Intel Corporation Device 2250 (rev 11)
90:00.0 Co-processor: Intel Corporation Device 2250 (rev 11)
> and they are Xeon binary compatible - so there is no issue with
> tracking which binary to run on which processor.
Sadly, they're not binary compatible; you have to cross-compile for
them (or compile on the Phi itself).
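As I understand Intel's docs, the workflow is roughly the sketch
below; -mmic is icc's native-MIC target flag, while the source file
and the mic0 hostname are just placeholders on my part:

  $ icc -mmic -o hello.mic hello.c   # cross-compile a native MIC binary on the host
  $ scp hello.mic mic0:              # copy it over once the card is booted
  $ ssh mic0 ./hello.mic             # run it on the coprocessor itself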
I haven't got any further than having xCAT install the (rebuilt)
kernel module so far, so I can't log into them yet.
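Once they boot, I gather the sanity checks look something like this
(the paths and the mic0 hostname follow Intel's MPSS defaults, so
treat them as assumptions until I can actually try it):

  # lsmod | grep mic                # confirm the rebuilt driver module is loaded
  # cat /sys/class/mic/mic0/state   # should eventually report "online"
  # ssh mic0                        # MPSS runs an sshd on each card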
> Brice: do the Phis appear in the hwloc topology object?
They appear in lstopo as mic0 and mic1.
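(With I/O discovery enabled, that is; the flag for that in this era's
hwloc is --whole-io, if I'm reading the man page right. A quick check
is:)

  $ lstopo --whole-io - | grep -i mic   # console output, filtered to the co-processor OS devices
  $ lstopo --whole-io topo.png          # or render the whole topology, Phis included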
> Chris: can you run lstopo on one of the nodes and send me the
> output (off-list)?
One of the hosts? Not a problem, will do.
All the best!
Christopher Samuel - Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545