DeepSeek’s artificial intelligence (AI) models are giving Chinese chipmakers, including industry leader Huawei, a better chance to compete against dominant U.S. processors.
Historically, companies like Huawei have faced challenges in developing high-end chips capable of rivaling those from U.S. firms such as Nvidia, particularly in the realm of training AI models—a process that involves feeding data to algorithms to enhance their decision-making accuracy.
DeepSeek’s models, however, emphasize “inference”—the phase where an AI model generates conclusions—by optimizing computational efficiency rather than relying solely on sheer processing power.
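The asymmetry the article describes can be made concrete with a toy sketch. The snippet below is purely illustrative and has nothing to do with DeepSeek’s or any vendor’s actual code: training a model means many repeated forward passes plus gradient updates over a whole dataset, while inference is a single cheap forward pass, which is why less powerful chips can still serve inference workloads.

```python
# Toy illustration of the training/inference cost asymmetry.
# All names and numbers are illustrative, not any real vendor's code.

def predict(w, b, x):
    """Inference: one cheap forward pass."""
    return w * x + b

def train(data, lr=0.01, epochs=500):
    """Training: many forward passes PLUS gradient updates over all data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y  # forward pass
            w -= lr * err * x           # gradient step: extra compute
            b -= lr * err               # that inference never pays for
    return w, b

# Fit the line y = 2x + 1 from five sample points.
data = [(float(x), 2.0 * x + 1.0) for x in range(5)]
w, b = train(data)            # thousands of passes over the data
estimate = predict(w, b, 10)  # a single multiply-add, near 21.0
```

Here training performs epochs × dataset-size updates, while serving a prediction costs one pass through the model, the kind of workload the article says Chinese chips are better positioned to handle.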
This approach is anticipated to narrow the performance gap between Chinese-made AI processors and their more powerful U.S. counterparts.
In recent weeks, several Chinese AI chipmakers, including Huawei, Hygon, Tencent-backed EnFlame, Tsingmicro, and Moore Threads, have announced that their products will support DeepSeek’s models, though specific details remain scarce. Industry experts predict that the open-source nature of DeepSeek’s models, coupled with their low associated costs, will accelerate AI adoption and the development of practical applications within China. This momentum could assist Chinese firms in navigating U.S. export restrictions on advanced AI chips.
Prior to DeepSeek’s emergence, products like Huawei’s Ascend 910B were already recognized by clients such as ByteDance for their suitability in less computationally intensive inference tasks, the stage following training where AI models make predictions or power applications such as chatbots.

Across China, numerous companies, spanning industries from automotive to telecommunications, have announced plans to integrate DeepSeek’s models into their products and operations.
Lian Jye Su, a chief analyst at tech research firm Omdia, noted, “This development aligns closely with the capabilities of Chinese AI chipset vendors. Chinese AI chipsets struggle to compete with Nvidia’s GPUs in AI training, but AI inference workloads are more forgiving and require a deeper local and industry-specific understanding.”
Despite these advancements, Nvidia continues to dominate the AI chip market. Bernstein analyst Lin Qingyuan observed that while Chinese AI chips are cost-competitive for inference tasks, their appeal remains largely within the Chinese market, as Nvidia’s chips still outperform even in inference applications.
U.S. export restrictions currently prohibit the sale of Nvidia’s most advanced AI training chips to China. However, the company is permitted to sell less powerful training chips that Chinese customers can utilize for inference tasks.

Nvidia recently argued in a blog post that inference demands are growing under a new scaling law, in which models improve by spending more computation at test time, and suggested that its chips will therefore remain essential to making DeepSeek and other “reasoning” models more useful.
Beyond raw computing power, Nvidia’s CUDA—a parallel computing platform enabling software developers to use Nvidia GPUs for general-purpose computing, not limited to AI or graphics—has become a pivotal element of its market dominance.
Many Chinese AI chip companies have traditionally avoided directly challenging Nvidia by ensuring their chips are compatible with CUDA. Huawei, however, has been more assertive, offering a CUDA alternative called Compute Architecture for Neural Networks (CANN). Despite this, experts note that persuading developers to transition from CUDA presents challenges.
Omdia’s Su remarked, “Software performance of Chinese AI chip firms is also lacking at this stage. CUDA has a rich library and a diverse range of software capabilities, which require significant long-term investment.”