After multiple delays, Intel is finally launching its 4th-Gen Xeon Scalable processors, and the x86 giant is hoping accelerators galore and self-anointed superlatives will distract you from its increasingly capable rivals.
The Silicon Valley-based chipmaker announced the launch of the new Xeons, code-named “Sapphire Rapids,” on Tuesday. It marks the second consecutive datacenter CPU released by Intel that was delayed multiple times, showing the company is still playing catch-up while barreling forward with a comeback plan architected by its CEO of nearly two years, Intel veteran Pat Gelsinger.
Remember that Intel’s 3rd-Gen Xeon, also known as Ice Lake, launched nearly two years ago after the chip’s 10nm node suffered multiple delays over several years due to manufacturing issues. Those delays forced Intel to release server CPUs based on its older 14nm process prior to Ice Lake.
After sorting out issues with the 10nm node, Intel moved on to a more advanced version of the process and renamed it Intel 7 last year. Process naming has increasingly fallen out of step with the actual size of transistors on the chips, and Intel felt its souped-up 10nm was more comparable to the 7nm nodes of its Asian chip-making rivals, TSMC and Samsung.
The Intel 7 node entered volume manufacturing in 2021 with Intel’s 12th-Gen Core processors for PCs, and now it’s being used to fabricate the latest Xeons. However, the new Sapphire Rapids chips are only arriving after the company delayed production multiple times over a year or so because it said the CPUs needed additional “platform and product validation time.”
About those superlatives…
Intel is hoping its new Xeons can stand out with a handful of superlatives it has attached to the CPUs. According to the company, Sapphire Rapids:
- has the “most built-in accelerators”
- is “the most sustainable Xeon ever”
- is “the most tested and battle-hardened CPU that we’ve ever delivered in the industry”
- has “the most effective ubiquitous way to do [AI] training via fine tuning and transfer learning”
The chipmaker also touted that Xeon is the “most scalable and flexible architecture on the planet” and that the improved Intel Software Guard Extensions in Sapphire Rapids is the “most researched, updated, and deployed confidential computing technology in data centers on the market today.”
What Intel didn’t mention is that x86 rival AMD beat the company to the punch in bringing DDR5, PCIe 5.0, and Compute Express Link (CXL) technologies to the datacenter with its new Epyc “Genoa” chips last fall. For a while, Intel was hoping to get there first with Sapphire Rapids, but then delays got in the way.
Besides supporting DDR5, PCIe 5.0, and CXL 1.1, the new Xeon generation comes with up to 60 cores — fewer than the 96 maximum offered by AMD’s Genoa — and it consists of 52 different models that cover six different server segments as part of Intel’s “workload-driven” approach.
The first segment is one that will concern the broadest constituency of datacenter users and operators: general-purpose computing. Within this segment, Intel has a wide array of dual-socket options that vary in core counts, clock speeds, and other features, in addition to a few single-socket models, one variant designed for long-life IoT deployments, and, for the first time, two liquid-cooled SKUs.
The rest of the new Xeons are optimized for the following areas:
- virtualization, analytics, and in-memory databases
- 5G and networking
- software-as-a-service, infrastructure-as-a-service, and media
- storage and hyperconverged infrastructure
- high-performance computing
Most CPUs will be available as part of Intel’s new software-defined silicon service, known as “Intel On Demand,” which lets organizations pay money to enable certain Xeon features as an alternative to buying the chip with all capabilities unlocked.
Intel has already talked a great deal about its HPC-optimized Xeon chips, which have large stacks of high-bandwidth memory and are known as the Xeon CPU Max Series.
At a high level, Intel claims that, compared to the previous generation, 4th-Gen Xeon provides a 53 percent improvement in general-purpose computing; up to 10x faster AI inference and training performance; as much as a 2x boost for networking, storage, and telecom; up to 3x faster work for data analytics; and a 3.7x boost for HPC.
The broad applicability of Sapphire Rapids is why Intel is billing it as the “most scalable and flexible architecture on the planet,” according to Intel exec Lisa Spelman.
“It can serve every single workload again, from the edge to the cloud and back and in between. And there’s nothing that it can’t handle competently,” said Spelman, who is corporate vice president and general manager of Intel Xeon products.
Bells and whistles galore… for the environment?
While AMD may have it beat on maximum core counts, Intel is claiming the 13 accelerator engines built into the new Xeons are a worthy alternative, offering a “more efficient way to achieve higher performance than growing the CPU core count.”
These accelerators include:
- Intel Advanced Matrix Extensions, which is meant to improve the performance of training small and medium deep learning models for workloads such as natural language processing and recommendation systems.
- Intel Quick Assist Technology, which can accelerate data encryption and compression workloads by offloading them from the CPU cores.
- Intel Data Streaming Accelerator, which aims to eliminate bottlenecks in data movements between the CPU cores, memory, caches, attached storage, and networked storage devices.
- Intel Dynamic Load Balancer, a hardware-managed load balancing system in the processor that is designed for telecom applications.
- Intel In-Memory Analytics Accelerator, which is meant to speed up compression and decompression for big data applications and in-memory analytic databases.
- Intel Advanced Vector Extensions 512, which provides optimizations meant to improve the performance of demanding workloads like scientific simulations, financial analytics, and 3D modeling.
- Intel Crypto Acceleration, which is designed to improve the performance of “encryption-sensitive workloads” such as firewalls, VPNs, and SSL web servers.
- Intel Software Guard Extensions, a security feature that is meant to protect data inside encrypted portions of the CPU’s memory.
The other accelerators consist of Intel Speed Select Technology, Intel Data Direct I/O Technology, and two other security features: Intel Trust Domain Extensions and Intel Control-Flow Enforcement Technology.
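Some of these engines show up directly as CPU instruction-set extensions, while device-style accelerators such as QAT and DSA enumerate as PCI devices instead. As a minimal, Linux-only sketch, you can check whether a chip exposes the AMX and AVX-512 extensions by reading the kernel's reported CPU flags (flag names here follow the kernel's /proc/cpuinfo spellings):

```python
# Linux-only sketch: read the kernel's CPU feature flags and report
# whether the AMX and AVX-512 ISA extensions are exposed. On hardware
# older than Sapphire Rapids the AMX flags will simply be absent.
import os

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of feature flags for the first CPU listed."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feat in ("amx_tile", "amx_bf16", "avx512f"):
    print(feat, "supported" if feat in flags else "not reported")
```

This only covers what the kernel advertises; using AMX from userspace additionally requires requesting the feature via the kernel's permission syscall on recent Linux releases.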
Intel is claiming that Sapphire Rapids’ use of the accelerators, combined with software optimizations, “help improve power efficiency across AI, data analytics, networking, and storage.” This, according to the chipmaker, can improve performance per watt efficiency in such workloads by 2.9 times on average compared to the previous generation of Xeon CPUs.
This is part of why Intel is painting its new server chips as the most sustainable Xeons yet. What also makes Sapphire Rapids purportedly more environmentally friendly than other server chips is a new optimized power mode, reachable in the BIOS, that delivers up to 20 percent power savings per CPU socket while reducing performance on certain workloads by only 5 percent.
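Taken at face value, those two "up to" figures imply a net gain in efficiency, not just lower power. The arithmetic, assuming both headline numbers hold simultaneously:

```python
# Intel's claimed optimized power mode: up to 20% less power per
# socket for roughly a 5% performance hit on certain workloads.
perf = 0.95   # relative performance with the mode enabled
power = 0.80  # relative power draw with the mode enabled

perf_per_watt = perf / power
print(f"perf/watt vs default mode: {perf_per_watt:.2f}x")  # ~1.19x
```

In other words, under the best-case numbers the mode trades a small slice of throughput for roughly a 19 percent performance-per-watt improvement.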
The semiconductor giant claims the new Xeons are also manufactured with 90-100 percent renewable energy and “state-of-the-art water reclamation facilities.” Intel has been happy to tout recently that it has achieved “net positive water” at manufacturing sites in the US, India, and Costa Rica, but we have to remember that doesn’t mean it’s going green everywhere. ®
The post After big delays, Sapphire Rapids arrives, full of accelerators and superlatives first appeared on www.theregister.com