Meta Deepens Broadcom Partnership To Build Custom AI Chips Through 2029


Meta is significantly expanding its partnership with Broadcom, extending the collaboration to develop multiple generations of custom artificial intelligence processors through the end of the decade.

The agreement, which runs until 2029, underscores Meta’s push to secure long-term computing capacity as it scales AI across its platforms. The company has committed to an initial deployment exceeding one gigawatt of computing power, a scale that reflects the growing infrastructure demands of generative AI systems.

As part of the expanded arrangement, Broadcom CEO Hock Tan will step down from Meta’s board and transition into an advisory role focused on the company’s custom chip strategy, signaling a deeper operational alignment between the two firms.

The move comes as major technology companies increasingly invest in in-house chip design to reduce dependence on external suppliers such as Nvidia, whose high-performance AI processors have become both critical and costly amid surging demand.

For Broadcom, the partnership reinforces its position as a key beneficiary of the AI boom. The company has carved out a niche in developing custom silicon and providing networking infrastructure, enabling large-scale AI deployments for hyperscale clients.

Meta, for its part, is accelerating its chip roadmap. Its Meta Training and Inference Accelerator (MTIA) program has already produced the MTIA 300 chip, currently used to power ranking and recommendation systems across its platforms. The company plans to roll out additional generations through 2027, with a growing focus on inference workloads, where AI systems process user queries in real time.

The collaboration also extends beyond processors. Broadcom’s Ethernet networking technology will play a central role in connecting Meta’s expanding AI data center clusters, highlighting the importance of high-speed interconnects in scaling machine learning infrastructure.

Chief Executive Mark Zuckerberg framed the investment as foundational to Meta’s long-term AI ambitions, describing it as part of a broader effort to build the computing backbone required to deliver advanced AI capabilities at a global scale.

The strategy reflects a wider industry shift. Companies including Google and Amazon are similarly designing proprietary chips to optimize performance and control costs, reshaping the competitive landscape in semiconductors and cloud infrastructure.

Market reaction to the announcement was measured. Broadcom shares rose modestly in extended trading, while Meta’s stock remained largely unchanged, suggesting that investors had already priced in continued AI-driven expansion.

Separately, Meta said board member Tracey Travis will not seek re-election at the company’s upcoming annual shareholder meeting, marking another change in its governance structure.

As AI adoption accelerates, partnerships like this are becoming central to how large technology firms secure the infrastructure needed to compete, shifting the industry toward vertically integrated models that combine software, hardware, and network capabilities.