Meta, the tech giant behind Facebook, Instagram, and WhatsApp, has taken a significant step toward reducing its reliance on Nvidia and cutting AI infrastructure costs. The company has begun testing its first in-house artificial intelligence (AI) training chip, marking a major milestone in its efforts to develop custom silicon.
Meta’s Custom AI Chip: A Game-Changer?
According to sources familiar with the matter, Meta has deployed a small batch of the chips for initial testing and plans to scale up production for widespread use if the tests go well. The move fits Meta’s broader push to rein in expenses, as AI infrastructure remains one of its largest financial burdens.
For 2025, Meta has projected total expenses between $114 billion and $119 billion, with up to $65 billion allocated to capital expenditure, largely driven by investments in AI. Developing its own AI chips could help the company significantly cut these costs over time.
One source said the chip is a dedicated AI accelerator, meaning it is designed specifically for AI workloads rather than general-purpose computing. That narrower focus can make it more power-efficient than the graphics processing units (GPUs) typically used for AI today.
TSMC Partners with Meta for Chip Production
Meta is reportedly working with Taiwan Semiconductor Manufacturing Company (TSMC) to fabricate the chip. The testing phase began after the company completed its first “tape-out”—the milestone in chip development at which a finished design is sent to the factory for manufacturing.
Creating AI chips is complex and expensive: a single tape-out can cost tens of millions of dollars and take three to six months to complete. If issues surface during testing, Meta would have to diagnose the problem and repeat the tape-out, delaying its AI ambitions further.
Meta’s AI Chip Development: A Rocky Start
Meta’s chip effort is part of its Meta Training and Inference Accelerator (MTIA) program, which has had a rocky history. The company previously scrapped an in-house AI chip after a failed test deployment, opting instead to order billions of dollars’ worth of Nvidia GPUs in 2022.
Despite these challenges, Meta has successfully integrated a first-generation inference chip into its recommendation systems, which power the content ranking algorithms on Facebook and Instagram. The company now aims to use its new training chip for similar applications, eventually expanding to generative AI products, such as its chatbot, Meta AI.
A Step Toward AI Independence?
Meta’s Chief Product Officer, Chris Cox, recently described the company’s approach to AI chip development as “walk, crawl, run.” While its first-generation inference chip has been a success, training AI models demands far more computing power than running them.

Meta executives have set a goal of using their own AI training chips by 2026, starting with recommendation algorithms before expanding into broader AI applications.
The effort comes amid growing doubts in the AI community about the long-term viability of simply scaling up large language models with ever more GPUs, with some researchers arguing that additional computing power alone is yielding diminishing returns.
What’s Next for Meta’s AI Ambitions?
Meta remains one of Nvidia’s biggest customers, relying on its GPUs to train and run AI models across its platforms. If the in-house chip proves successful, however, it could signal a shift in the AI hardware landscape—and a potential challenge to Nvidia’s dominance.
Whether Meta can overcome the hurdles of AI chip development remains to be seen, but this latest move marks a bold step toward greater self-reliance in AI innovation.