What just happened? At this year's Hot Chips conference in Palo Alto, California, Intel shared extensive details about its 2024 Xeon CPUs for data centers. The company also explained how it is using AI to improve the energy efficiency of its next-generation consumer processor line.
Starting with the Xeon CPUs, Intel explained that workloads in modern data centers are becoming more specialized every year, requiring different types of chips for different tasks. For example, workloads such as AI processing demand high-performance cores, while others call for a tradeoff between per-core performance and core density.
To meet these differing requirements, Intel's 2024 Xeon CPUs will be offered in two separate product series: one with performance cores (P-cores) and one with efficient cores (E-cores). The former, codenamed Granite Rapids, is optimized for compute-intensive and AI applications, while the latter, the Sierra Forest family, targets high-density, scalable workloads. Both product lines share the same platform and software stack, so systems designed for one are compatible with the other.
Intel also announced that the 2024 Xeon platform will use modular SoCs for greater scalability and flexibility. The company believes the new design will meet the processing and power-efficiency requirements of AI, cloud, and enterprise deployments. The platform supports up to 12-channel DDR/MCR memory (1-2DPC), up to 136 PCIe 5.0 lanes with CXL 2.0, and up to six UPI links.
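For a rough sense of what a 12-channel memory configuration implies, the back-of-the-envelope Python sketch below estimates theoretical peak bandwidth from channel count and transfer rate. The DDR5-6400 and 8800 MT/s MCR figures are illustrative assumptions, not speeds Intel quoted at Hot Chips.

```python
# Back-of-the-envelope peak memory bandwidth for a multi-channel configuration.
# Transfer rates below are assumed for illustration, not Intel-confirmed figures.

def peak_bandwidth_gb_s(channels: int, mt_per_s: int, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s = channels * transfers/s * bytes per transfer."""
    bytes_per_transfer = bus_width_bits // 8                  # 64-bit channel -> 8 bytes
    return channels * mt_per_s * bytes_per_transfer / 1000    # MT/s * bytes = MB/s -> GB/s

# 12 channels of standard DDR5 at an assumed 6400 MT/s
print(peak_bandwidth_gb_s(12, 6400))   # ~614 GB/s
# 12 channels of MCR DIMMs at an assumed 8800 MT/s effective rate
print(peak_bandwidth_gb_s(12, 8800))   # ~845 GB/s
```

Real-world bandwidth will land well below these ceilings once command overhead, refresh, and access patterns are accounted for; the point is only to show how channel count and transfer rate combine.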
Intel also claimed that the Granite Rapids family brings higher memory bandwidth, core counts, and cache for compute-intensive workloads, delivering two to three times the performance of the Sapphire Rapids family in mixed AI applications. Sierra Forest, meanwhile, is said to offer 2.5x better rack density and 2.4x better performance per watt than Sapphire Rapids. Both lines will include multiple SKUs with varying core counts and TDPs.
Finally, Intel announced that the Meteor Lake family will use AI for power management to make the chips more efficient than their predecessors. According to PC World, the chips will feature a VPU (Vision Processing Unit) that will not only accelerate AI workloads but also help improve laptop battery life. Intel says the AI in the new chips will decide when the processor should switch between high-power (active) and low-power (idle) states, a process known as dynamic voltage and frequency scaling (DVFS).
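To make the general idea concrete, here is a minimal, hypothetical Python sketch of a utilization-driven governor that picks between a high-power and a low-power state. This is not Intel's algorithm; the class name, window size, and thresholds are assumptions made purely for the example.

```python
# Toy power-state policy: look at recent CPU utilization and decide whether the
# chip should sit in a high-power (active) or low-power (idle) state.
# Purely illustrative; real power management uses far richer signals and models.

from collections import deque

class SimplePowerGovernor:
    def __init__(self, window: int = 3, high_threshold: float = 0.5, low_threshold: float = 0.2):
        self.samples = deque(maxlen=window)   # recent utilization samples (0.0-1.0)
        self.high = high_threshold
        self.low = low_threshold
        self.state = "low-power"

    def update(self, utilization: float) -> str:
        """Record a utilization sample and return the chosen power state."""
        self.samples.append(utilization)
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.high:
            self.state = "high-power"
        elif avg <= self.low:
            self.state = "low-power"
        # Between the thresholds, keep the current state (hysteresis avoids thrashing).
        return self.state

# Example: a burst of activity followed by idleness
gov = SimplePowerGovernor()
for u in [0.1, 0.7, 0.9, 0.8, 0.05, 0.02, 0.01]:
    print(f"util={u:.2f} -> {gov.update(u)}")
```

A production scheme would presumably draw on many more inputs (user presence, workload type, thermal headroom) and a trained model rather than fixed thresholds, which is where the AI angle Intel describes comes in.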