From www.techradar.com

D-Matrix's Corsair C8 card

(Image credit: D-matrix)

D-Matrix’s unique compute platform, the Corsair C8, can lay claim to outperforming Nvidia’s industry-leading H100 GPU – at least according to some staggering test results the startup has published. 

Designed specifically for generative AI workloads, the Corsair C8 differs from GPUs in that it uses d-Matrix’s unique digital in-memory compute (DIMC) architecture. 

The result? A nine-times increase in throughput versus the industry-leading Nvidia H100, and a 27-times increase versus the A100.

Corsair C8 power

The startup is one of the most closely watched in Silicon Valley, having raised $110 million from investors – including Microsoft – in its latest funding round. This followed a $44 million round in April 2022 from backers including Microsoft, SK Hynix, and others.

Its flagship Corsair C8 card includes 2,048 DIMC cores, 130 billion transistors, and 256GB of LPDDR5 RAM. It boasts 2,400 to 9,600 TFLOPS of compute performance and 1TB/s of chip-to-chip bandwidth.

These unique cards can deliver up to 20 times higher throughput for generative inference on large language models (LLMs), up to 20 times lower inference latency for LLMs, and up to 30 times lower cost when compared with traditional GPUs.

With generative AI rapidly expanding, the industry is locked in a race to build increasingly powerful hardware to power future generations of the technology. 

The leading components are GPUs and, more specifically, Nvidia’s A100 and newer H100 units. But GPUs aren’t optimized for LLM inference, according to d-Matrix, and too many GPUs are needed to handle AI workloads, leading to excessive energy consumption. 

This is because the bandwidth demands of running AI inference leave GPUs spending a lot of time idle, waiting for data to arrive from DRAM. Moving data out of DRAM also means higher energy consumption, reduced throughput, and added latency – which in turn raises cooling demands. 
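The idle-GPU problem above can be made concrete with a simple roofline-style calculation: a workload is memory-bound when its arithmetic intensity (FLOPs performed per byte moved from DRAM) falls below the hardware's compute-to-bandwidth ratio. The sketch below uses hypothetical round numbers, not measured specs for any particular card:

```python
# Illustrative roofline model sketch. A kernel is memory-bound when its
# arithmetic intensity (FLOPs per byte read from DRAM) sits below the
# hardware's "ridge point" (peak compute divided by memory bandwidth).
# Batch-1 LLM token generation is dominated by matrix-vector products,
# which sit far below that ridge - hence the idle time described above.

def attainable_tflops(intensity_flops_per_byte, peak_tflops, dram_bw_tbps):
    """Attainable throughput under a simple two-roof model."""
    # Memory roof: bandwidth (TB/s) * intensity (FLOP/byte) = TFLOP/s
    memory_roof = dram_bw_tbps * intensity_flops_per_byte
    return min(peak_tflops, memory_roof)

# Hypothetical accelerator: 1000 TFLOPS peak, 3 TB/s of DRAM bandwidth.
PEAK, BW = 1000.0, 3.0
ridge = PEAK / BW  # intensity needed to saturate compute (~333 FLOP/byte)

# Matrix-vector (single-token decode): roughly 2 FLOPs per 2-byte weight
# read, i.e. intensity ~1 FLOP/byte - deep in memory-bound territory.
gemv = attainable_tflops(1.0, PEAK, BW)

# Large batched matrix-matrix multiply: intensity can exceed the ridge,
# so throughput is capped by peak compute instead of bandwidth.
gemm = attainable_tflops(500.0, PEAK, BW)

print(f"ridge point: {ridge:.0f} FLOP/byte")
print(f"GEMV attainable: {gemv:.1f} TFLOPS")   # bandwidth-limited
print(f"GEMM attainable: {gemm:.1f} TFLOPS")   # compute-limited
```

Under these assumed figures, the memory-bound kernel reaches only a fraction of a percent of peak compute, which is the gap in-memory architectures like DIMC aim to close by avoiding the DRAM round trip.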

The solution, this firm claims, is its specialized DIMC architecture that mitigates many of the issues in GPUs. D-Matrix claims its solution can reduce costs by 10 to 20 times – and in some cases as much as 60 times. 

Beyond d-Matrix’s technology, other players are beginning to emerge in the race to outpace Nvidia’s H100. IBM presented a new analog AI chip in August that mimics the human brain and is claimed to run up to 14 times more efficiently.


Keumars Afifi-Sabet is the Features Editor for ITProCloudPro and ChannelPro. He oversees the commissioning and publication of in-depth and long-form features across all three sites, including opinion articles and case studies. He also occasionally contributes his thoughts to the IT Pro Podcast, and flexes his ten years of writing experience in producing content for a variety of publications including TechRadar Pro and TheWeek.co.uk. Keumars joined IT Pro as a staff writer in April 2018, and has expertise in a variety of areas including AI, cyber security, cloud computing, and digital transformation, as well as public policy and legislation.


The post Microsoft-backed AI startup beats Nvidia H100 on key tests with GPU-like card equipped with 256GB RAM first appeared on www.techradar.com
