Meta Extends Custom Chips Deal With Broadcom to Power AI Ambitions

Published: 27 Apr 2026

Author: Gautam Mahajan


On April 15, 2026, Meta Platforms expanded its partnership with Broadcom to develop multiple generations of custom artificial intelligence processors, reinforcing its long-term strategy to scale high-performance computing infrastructure. The extended agreement runs through 2029 and includes an initial commitment exceeding one gigawatt of computing capacity, reflecting the growing energy and infrastructure demands of AI workloads.

The collaboration highlights a broader industry shift, as major technology firms increasingly design in-house chips to reduce reliance on third-party providers such as Nvidia. Custom silicon development allows companies to optimize performance, manage costs, and tailor hardware for specific AI applications, including training and inference. As demand for generative AI accelerates, such strategies are becoming central to building scalable and efficient computing ecosystems.


Under the agreement, Broadcom will support Meta in developing specialized processors while also contributing networking technologies, including advanced Ethernet solutions, to connect expanding AI clusters. These clusters are essential for handling large-scale data processing and enabling real-time AI-driven features across platforms. As part of the strategic shift, Broadcom CEO Hock Tan will transition from Meta’s board to an advisory role focused on the company’s custom chip roadmap.

Meta’s in-house chip program, known as the Meta Training and Inference Accelerator (MTIA), continues to evolve with multiple generations planned through 2027. The latest chip, MTIA 300, is already supporting recommendation and ranking systems, while upcoming versions are expected to focus on inference capabilities, enabling faster and more efficient AI responses. The company has also outlined a broader multi-gigawatt infrastructure rollout aimed at supporting long-term AI ambitions, including advanced personalization technologies at scale.

The announcement underscores the growing importance of high-performance computing in enabling next-generation AI applications. As companies invest heavily in custom hardware and infrastructure, the competitive landscape is shifting toward vertically integrated solutions that combine chips, software, and networking capabilities.

According to Precedence Research, the high performance computing for AI market was valued at USD 17.30 billion in 2025 and is projected to grow from USD 22.21 billion in 2026 to approximately USD 210.72 billion by 2035, a CAGR of 28.40% over the 2026 to 2035 period. Growth is driven by surging demand for advanced AI workloads, custom silicon development, and large-scale data center infrastructure to support generative AI and machine learning applications.
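As a quick sanity check on the figures above, the 2035 projection can be reproduced by compounding the 2026 base at the stated CAGR (a simple compound-growth calculation on the cited numbers, not taken from the report itself):

```python
# Verify that the cited 2035 projection follows from the 2026 base and CAGR.
base_2026 = 22.21        # USD billion, 2026 market size cited above
cagr = 0.2840            # 28.40% compound annual growth rate, 2026-2035
years = 2035 - 2026      # 9 compounding periods

projected_2035 = base_2026 * (1 + cagr) ** years
print(f"Projected 2035 market size: USD {projected_2035:.2f} billion")
```

The result comes out to roughly USD 210.7 billion, consistent with the cited USD 210.72 billion (the small gap is rounding in the published CAGR).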

A recent report by Precedence Research further highlights that the high performance computing for AI market is benefiting from rapid advancements in GPU and custom chip technologies, increasing investments by hyperscale companies, and the rising need for energy-efficient, high-capacity computing systems to manage complex AI models and data-intensive applications.
