According to Sharon Zhou, chief executive of LaminiAI, AMD has begun to ship its Instinct MI300X GPUs for artificial intelligence (AI) and high-performance computing (HPC) applications. As the ‘LaminiAI’ name implies, the company plans to use AMD’s Instinct MI300X accelerators to run large language models (LLMs) for enterprises.
AMD has been shipping its Instinct MI300-series products to supercomputer customers for a while now and expects the series to become the fastest product in its history to reach $1 billion in sales; it now appears the company has also initiated shipments of its Instinct MI300X GPUs. LaminiAI has partnered with AMD for some time, so it likely has priority access to the company’s hardware. Nonetheless, this is an important milestone for AMD, as it is the first time we have learned of volume shipments of the MI300X. Indeed, the post indicated that LaminiAI had received multiple Instinct MI300X-based machines with eight accelerators apiece (8-way).
“The first AMD MI300X live in production,” Zhou wrote. “Like freshly baked bread, 8x MI300X is online. If you are building on open LLMs and you are blocked on compute, let me know. Everyone should have access to this wizard technology called LLMs. That is to say, the next batch of LaminiAI LLM pods are here.”
A screenshot published by Zhou shows an 8-way AMD Instinct MI300X machine in operation. Meanwhile, the power consumption figures listed in the screenshot suggest the GPUs were idling when it was captured; they certainly were not running demanding compute workloads.
AMD’s Instinct MI300X is a sibling of the company’s Instinct MI300A, the industry’s first data center-grade accelerated processing unit, which combines general-purpose x86 CPU cores with CDNA 3-based highly parallel compute processors for AI and HPC workloads.
Unlike the Instinct MI300A, the Instinct MI300X lacks x86 CPU cores but carries more CDNA 3 chiplets, for 304 compute units in total versus the MI300A’s 228 CUs, and therefore offers higher compute performance. The Instinct MI300X also carries 192 GB of HBM3 memory with a peak bandwidth of 5.3 TB/s.
Based on performance numbers demonstrated by AMD, the Instinct MI300X outperforms Nvidia’s H100 80GB, which is already available and widely deployed by hyperscalers such as Google, Meta (Facebook), and Microsoft. The Instinct MI300X is probably also a formidable competitor to Nvidia’s H200 141GB GPU, which has yet to hit the market.
According to previous reports, Meta and Microsoft are procuring AMD’s Instinct MI300-series products in large volumes. Still, LaminiAI is the first company to publicly confirm that it is using Instinct MI300X accelerators in production.