The recent release of the MLPerf training v3.1 benchmark results by MLCommons has brought Intel’s Gaudi 2 accelerators and 4th Gen Intel Xeon Scalable processors into focus. The results highlight a significant improvement in AI training performance, with Gaudi 2 showing a 2x performance increase when using the FP8 data type on the v3.1 GPT-3 training benchmark, drawing attention to the potential of Intel’s hardware in the AI industry.
Intel’s Gaudi 2 accelerators and 4th Gen Intel Xeon Scalable processors, equipped with Intel Advanced Matrix Extensions (Intel AMX), are specifically designed to meet the growing demands of AI computing. The Gaudi 2 has emerged as a strong competitor to NVIDIA’s H100, offering significant price-performance benefits that make it an appealing choice for those in the AI industry.
The 4th Gen Intel Xeon Scalable processors, as versatile CPUs, have shown strong performance in the MLPerf benchmarks. Intel remains the only CPU vendor to submit MLPerf results, emphasizing its commitment to competitive AI solutions and its ambition to stay at the forefront of AI hardware development.
Other articles you may find of interest on the subject of Intel and AI technologies:
- Running Llama 2 13B on an Intel ARC GPU, iGPU and CPU
- Intel unveils first glass substrates for next-generation compute
- Figure humanoid robotics receives investment from Intel
- Intel FPGA portfolio with Next-Gen Agilex Series announced
- Intel AI PC Acceleration Program launched
The Role of FP8 Software in the Performance Leap
The use of the FP8 data type has been crucial in the performance leap of Intel’s Gaudi 2. This data type, along with software updates and optimizations, has enabled the Gaudi 2 to cut the time-to-train by more than half compared to the June MLPerf benchmark. This significant reduction in training time showcases the efficiency and power of Intel’s hardware.
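Gaudi 2’s FP8 support is implemented in hardware and in Intel’s software stack, but the core idea of the format can be illustrated in a few lines of Python. The sketch below (not Intel’s implementation) rounds a float onto the FP8 E4M3 grid — 1 sign bit, 4 exponent bits, 3 mantissa bits — which is what lets FP8 halve memory traffic and matrix-engine cost relative to 16-bit formats, at the price of coarser precision:

```python
# Illustrative sketch only: round a float to the nearest FP8 E4M3 value.
# E4M3 saturates at +/-448; subnormals are ignored here for simplicity.
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3-representable value (simplified)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))        # abs(x) = m * 2**e, with m in [0.5, 1)
    mant = m * 2.0                   # normalized mantissa in [1, 2)
    mant = round(mant * 8) / 8       # keep 3 explicit mantissa bits
    value = sign * mant * 2.0 ** (e - 1)
    if abs(value) > 448.0:           # E4M3 saturates at its max magnitude
        return sign * 448.0
    if abs(value) < 2.0 ** -6:       # flush values below min normal to zero
        return 0.0
    return value

print(quantize_e4m3(0.3))    # 0.3125 — nearest E4M3 neighbor of 0.3
print(quantize_e4m3(500.0))  # 448.0 — out-of-range values saturate
```

The coarse 3-bit mantissa is why FP8 training relies on per-tensor scaling and careful software support — exactly the kind of optimization work credited for the time-to-train improvements in these results.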
Understanding AI Models and Benchmarking
The MLPerf training v3.1 benchmark results were obtained from a range of AI models, including the Stable Diffusion multi-modal model, ResNet-50, RetinaNet, BERT, and DLRM-DCNv2. These models were thoroughly tested on various hardware setups, and the results offer valuable insight into how different hardware configurations affect AI training performance.
Intel’s Commitment to AI
With ongoing software enhancements and the potential of data types like FP8, Intel expects further gains in future MLPerf benchmarks. The company’s commitment to AI hardware and software development is evident in its consistent benchmark submissions and the performance of its products.
The MLPerf training v3.1 benchmark results have emphasized the impressive performance of Intel’s Gaudi 2 accelerators and 4th Gen Intel Xeon Scalable processors. The use of the FP8 data type and software optimizations have played a significant role in this performance leap, positioning Intel as a strong player in the AI hardware market. These results underscore Intel’s commitment to delivering high-performance AI solutions and its ability to compete effectively in the rapidly evolving AI hardware landscape.