World's fastest AI chip features 4 trillion transistors and 900,000 AI cores

Cerebras Systems has launched the Wafer Scale Engine 3 (WSE-3), the world’s fastest AI chip, featuring 4 trillion transistors and 900,000 AI cores. The WSE-3, built on a 5 nm process, powers the Cerebras CS-3 AI supercomputer, which is capable of 125 petaflops of peak AI performance. This new chip is designed to train large AI models efficiently, supporting models up to 24 trillion parameters without the need for partitioning, thus simplifying the training process.

Built on TSMC's 5 nm process, the third-generation Wafer Scale Engine powers what Cerebras calls the industry's most scalable AI supercomputers: clusters of up to 2048 CS-3 nodes delivering up to 256 exaFLOPs of AI compute.

“When we started on this journey eight years ago, everyone said wafer-scale processors were a pipe dream. We could not be more proud to be introducing the third generation of our groundbreaking wafer-scale AI chip,” said Andrew Feldman, CEO and co-founder of Cerebras. “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.”

The WSE-3 is built using advanced 5 nm process technology, which has allowed for the integration of 44 GB of on-chip SRAM. But it doesn’t stop there; you can expand the chip’s memory externally up to a massive 1.2 petabytes. This means that even tasks that require a lot of data can be processed without a hitch. The chip’s design is highly scalable, allowing you to connect up to 2048 CS-3 systems. This makes it versatile for various uses, from businesses to large-scale computing environments.
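As a quick back-of-the-envelope check (a sketch assuming ideal linear scaling, which real workloads rarely achieve), the 256 exaFLOPs cluster figure follows directly from the per-system number:

```python
# Rough scaling check for CS-3 clusters.
# Assumes peak performance scales linearly with node count.

PETAFLOPS_PER_CS3 = 125   # peak AI performance per CS-3 system
MAX_NODES = 2048          # maximum advertised cluster size

cluster_petaflops = PETAFLOPS_PER_CS3 * MAX_NODES
cluster_exaflops = cluster_petaflops / 1000  # 1 exaFLOP = 1000 petaflops

print(cluster_exaflops)  # 256.0 exaFLOPs, matching the quoted figure
```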


Cerebras hasn’t just focused on raw performance; it has also made its technology easier to use. The Cerebras Software Framework now supports PyTorch 2.0, which simplifies programming large language models (LLMs). Developers can do more with less code, cutting complexity and shortening the time it takes to build new applications. The WSE-3 also introduces hardware acceleration for dynamic and unstructured sparsity, which Cerebras says can speed up training by as much as eight times.
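The speedup from sparsity comes from skipping work on zero-valued weights. Here is a minimal sketch of the idea in plain Python (an illustration only, not Cerebras code): two dot products that compute the same answer, where the sparsity-aware version skips every multiply-accumulate involving a zero weight.

```python
# Illustration of unstructured sparsity: zero weights can be skipped
# without changing the result, saving multiply-accumulate (MAC) work.

def dense_dot(weights, activations):
    """Dense dot product: every weight costs one MAC."""
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        total += w * a
        macs += 1
    return total, macs

def sparse_dot(weights, activations):
    """Sparsity-aware dot product: zero weights are skipped."""
    total, macs = 0.0, 0
    for w, a in zip(weights, activations):
        if w != 0.0:  # hardware sparsity support avoids this work
            total += w * a
            macs += 1
    return total, macs

# A weight vector that is 75% zeros, with no fixed pattern (unstructured).
weights = [0.5, 0.0, 0.0, 0.0, -1.2, 0.0, 0.0, 0.0]
activations = [1.0] * 8

dense_result, dense_macs = dense_dot(weights, activations)
sparse_result, sparse_macs = sparse_dot(weights, activations)

assert dense_result == sparse_result  # same answer...
print(dense_macs, sparse_macs)        # ...with 8 vs 2 multiply-accumulates
```

At 75% sparsity the skip-zeros version does a quarter of the work, which is the intuition behind multi-fold training speedups from sparsity-aware hardware.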


  • 4 trillion transistors
  • 900,000 AI cores
  • 125 petaflops of peak AI performance
  • 44 GB of on-chip SRAM
  • 5 nm TSMC process
  • External memory: 1.5 TB, 12 TB, or 1.2 PB
  • Trains AI models of up to 24 trillion parameters
  • Cluster size of up to 2048 CS-3 systems
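To see why petabyte-scale external memory matters for a 24 trillion parameter model, here is a rough estimate. It assumes about 16 bytes per parameter for weights, gradients, and Adam optimizer state in mixed-precision training, which is a common rule of thumb, not a Cerebras figure:

```python
# Hedged estimate of memory needed to train a 24-trillion-parameter model.
# Assumes ~16 bytes/parameter: fp16 weights and gradients plus fp32 master
# weights and Adam moments (a common rule of thumb, not an official figure).

PARAMS = 24e12         # 24 trillion parameters
BYTES_PER_PARAM = 16   # mixed-precision training state per parameter

total_bytes = PARAMS * BYTES_PER_PARAM
total_pb = total_bytes / 1e15  # convert bytes to petabytes

print(total_pb)  # ~0.384 PB of training state, within the 1.2 PB ceiling
```

Under these assumptions the full training state fits in the 1.2 PB external memory tier, which is what lets such a model be trained without partitioning it across machines.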

In the world of computing, being energy efficient is crucial. Impressively, the WSE-3 has doubled the performance of its predecessor while keeping power consumption the same. This is vital because it means we can continue to push the boundaries of AI without blowing our energy budgets.

The impact of the WSE-3 and the CS-3 AI supercomputer is already being felt across different industries. Cerebras has a significant backlog of orders from sectors like business, government, and international cloud services. The technology plays a key role in partnerships with leading institutions such as Argonne National Laboratory and Mayo Clinic, aiding AI research and improving patient care.

Looking ahead, Cerebras has plans to collaborate with G42 to build some of the world’s largest AI supercomputers. One project in the pipeline, the Condor Galaxy 3, is set to deliver an incredible 8 exaFLOPs of AI compute, showcasing the immense potential of the WSE-3.

The Wafer Scale Engine 3 from Cerebras is a major step forward in AI technology. With its unmatched computational power, scalability, and energy-efficient performance, along with the support of an advanced software framework, it’s an indispensable tool for anyone looking to harness the full power of AI. As Cerebras continues to push the envelope, the future of AI development and application looks more promising than ever.
