I wonder if hwloc will be able to show this. Let's hope this kind of topo stuff is exported in /sys...
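Linux already exports NUMA topology under /sys/devices/system/node, including each node's ACPI SLIT distance row, so if near/far memory shows up as distinct NUMA nodes the same interface would carry it. A minimal sketch of parsing that row (the two-node distance string below is made up for illustration):

```python
# Parse the whitespace-separated row Linux exposes in
# /sys/devices/system/node/nodeN/distance (ACPI SLIT values; 10 = local).
def parse_node_distances(text):
    return [int(tok) for tok in text.split()]

# Pick the closest remote node given this node's distance row.
def nearest_remote(distances, self_id):
    remote = [(d, i) for i, d in enumerate(distances) if i != self_id]
    return min(remote)[1]

# Illustrative two-node SLIT row as seen from node 0:
row = parse_node_distances("10 21")
print(nearest_remote(row, 0))  # node 1 is the only remote node
```

A near-memory node would presumably just appear as an extra column with a smaller distance value than ordinary DDR nodes.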
Intel pulls up its SoCs, reveals 'integrated' memory on CPUs
Intel said it was working on stacking a layer of memory on its Xeon processors to run memory-bound workloads faster.
It said this in a pitch at the Supercomputing Conference (SC13) in Denver, which runs from 17 to 22 November.
According to an EE Times report<http://www.eetimes.com/document.asp?doc_id=1320146&>, Intel's Rajeeb Hazra, a VP and general manager of its data centre group, said Intel would customise high-end Xeon processors and Xeon Phi co-processors by closely integrating memory, both by adding memory dies to a processor package and, at a later date, integrating layers of memory dies into the processor along with optical fabrics and switches.
Hazra mentioned the general memory stack idea in a 22 July presentation (PDF)<http://files.shareholder.com/downloads/INTC/2801699407x0x684654/E4A31E2D-0B08-4542-852B-3D0CC4C678A0/130722_High-Performance-Computing_Hazra.pdf> and here's a slide from it:
[Xeon plus stack of memory]
He also told press round table attendees at the conference that the Knights Landing<http://www.theregister.co.uk/2013/11/19/intel_says_bootable_knights_landing_cpu_will_be_a_game_changer/> next-generation Xeon Phi co-processor, with tens of cores, would have integrated memory. The concept of stacking memory dies in Xeon processor packages has come out into the open as well.
Intel classes memory dies packaged with the processor in a 3D stack as Near Memory, in contrast with off-package DDR DRAM - Far Memory. Near Memory provides faster data access.
Hazra said: "We are looking at various new classes of integrations, from integrating portions of the interconnect as well as next-generation storage and memory much more intimately onto the processor die."
The memory address space in the dies could be treated as cache or as a flat memory space or as a combination of the two. Applications would need to be altered to use such a flat memory space adjacent to the CPU and separate from the normal DRAM memory.
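In the flat-memory case, placement becomes the application's problem: ask for the small in-package pool first and spill to DDR when it runs out. A toy Python model of that explicit placement (class name and the 16 GB near-memory capacity are illustrative assumptions, not Intel's figures):

```python
# Toy model of explicit "flat mode" placement: the application requests
# near memory first and falls back to far memory (DDR) when the small
# in-package capacity is exhausted. Sizes in MB; 16 GB is illustrative.
class FlatModeAllocator:
    def __init__(self, near_capacity_mb):
        self.near_free = near_capacity_mb

    def alloc(self, size_mb):
        if size_mb <= self.near_free:
            self.near_free -= size_mb
            return "near"
        return "far"  # spill to DDR once near memory is full

alloc = FlatModeAllocator(near_capacity_mb=16384)
print(alloc.alloc(8192))   # fits: near
print(alloc.alloc(16384))  # does not fit in the remaining 8 GB: far
```

Cache mode would need none of this - the hardware manages placement transparently - which is exactly why the flat mode requires application changes.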
The amount of in-package memory would be limited by the physical real estate inside the package, so we shouldn't expect such Near Memory to replace or substitute for Far Memory.
The in-package memory stacking would be aimed at specific, presumably large-scale, customers - along the lines of Google, Facebook or Amazon - and would therefore sit outside general x86 standards. There would also need to be data-moving or tiering software to transfer data from Far Memory into Near Memory and vice versa. ®
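The tiering software the article alludes to boils down to tracking which far-memory data is hot and promoting it into the limited near-memory capacity. A minimal sketch of that policy decision (page names and counts are made up; real tiering would work on hardware access statistics):

```python
from collections import Counter

# Toy hot-page promoter: given per-page access counts, pick the hottest
# far-memory pages to copy into the limited number of near-memory slots.
def pick_promotions(access_counts, near_slots):
    return [page for page, _ in Counter(access_counts).most_common(near_slots)]

# Illustrative access profile for three far-memory pages:
accesses = Counter({"pageA": 50, "pageB": 3, "pageC": 20})
print(pick_promotions(accesses, 2))  # the two hottest: pageA, pageC
```

The reverse path - demoting cold pages back to Far Memory - is the same policy run in the other direction.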
Sent from my phone. No type good.