Analysis of the Impact of Sales Restrictions on NVIDIA H200 Chips to China
Drawing on the latest available information, this analysis systematically examines how restrictions on sales of NVIDIA H200 chips to China affect the company’s Chinese data center business.
On December 8, 2025, U.S. President Donald Trump announced that NVIDIA would be allowed to export H200 AI chips to China, but this “lifting of restrictions” comes with strict conditions [1][2]. According to the policy, NVIDIA is required to turn over 25% of the relevant chip sales revenue to the U.S. government, and more advanced Blackwell architecture chips and the upcoming Rubin chips are not included in the approved scope [1]. This policy marks a major adjustment in U.S. AI chip export controls to China, shifting from a complete blockade to a “conditional approval” model.
Viewed on a timeline, U.S. chip controls on China have been tightened and adjusted repeatedly. In October 2022, the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) issued the “October 7 Rules,” restricting China’s access to advanced-process chips below 14nm and to high-end AI computing chips; NVIDIA responded by launching customized parts such as the A800 and H800 [1]. In October 2023, BIS updated the export regulations again, bringing customized models such as the A800 and H800 into the restricted scope, and NVIDIA then launched the H20, a chip positioned for AI inference [1]. In April 2025, the U.S. government began requiring licenses for H20 exports to China; H20 sales resumed in July; and in August NVIDIA told its supply chain to halt H20 production and packaging [1]. Lifting restrictions on the H200 therefore lets NVIDIA resume selling its previous-generation flagship to China, but still without offering its most advanced architectures there.
NVIDIA CEO Jensen Huang has responded pragmatically. In a public appearance in January 2026, he said the company has activated its supply chain, the H200 is in production, and the final details of the export license are being worked out with the U.S. government [3][4]. He put it plainly: “I only await purchase orders. When the purchase orders come, everything will be clear.”
On the H200’s competitiveness in the Chinese market, Huang was candid: “The H200 still has competitiveness in the market, but it will not be competitive forever. So we hope we can launch other competitive products in the future.” [3]
Notably, Huang spoke highly of his Chinese competitors. “The group of Chinese entrepreneurs, engineers, technical experts, and AI researchers is world-class,” he said, adding that “China’s ecosystem is developing very rapidly, engineers work very hard, and they are very entrepreneurial.” [3] He went so far as to say, “Chinese companies are so strong; we have to bring out the ‘real deal’.” [4] These remarks reflect both NVIDIA’s clear-eyed view of the Chinese market’s complexity and its wariness of the rapid rise of local competitors.
Persistent geopolitical pressure has fundamentally changed NVIDIA’s position in China’s AI chip market. According to data from multiple institutions, NVIDIA held roughly 95% of China’s AI accelerator chip market in 2022, a near-monopoly [5][6]. That dominance has eroded rapidly over the past three years: by 2025, NVIDIA’s share of China’s accelerator chip market had fallen to roughly 50%–62% [7][8].
According to a report released by International Data Corporation (IDC) in October 2025, China’s accelerator server market reached $16 billion in the first half of 2025, more than double the figure for the first half of 2024, and China’s accelerator chip market exceeded 1.9 million units in the first half of the year [7]. Within that market, NVIDIA holds roughly 62% share and domestic chips roughly 35% [7]. TrendForce’s figures are even starker: in the fourth quarter of 2024, domestic AI chips exceeded 50% of China’s data center share for the first time [6]. This shift marks the definitive end of the single-dominant-player era in the Chinese market.
NVIDIA’s Chinese data center business faces multiple structural challenges. The first is procurement delays caused by policy uncertainty: repeated adjustments to U.S. export controls make long-term planning difficult for Chinese customers, and more and more enterprises are seeking independently controllable alternatives. The second is the ongoing “security backdoor” controversy: since the second half of 2025, NVIDIA has faced accusations in China that its chips contain security backdoors, further narrowing its market space [5].
In terms of customer structure, NVIDIA’s main customers in China include large cloud service providers, internet companies, and research institutions. Amid the wave of domestic substitution, however, these customers are accelerating their shift to local suppliers. According to Caijing reports, leading enterprises including China Mobile, China Telecom, and Ant Group have all deployed 10,000-card clusters built on domestic AI chips [7]. Internet giants such as Baidu, Alibaba, Tencent, and ByteDance are also actively testing and adopting domestic chips, steadily eroding NVIDIA’s market share.
Although the U.S. government has approved H200 exports to China, the chip’s market prospects remain highly uncertain. The New York Times observed that the H20, a deliberately cut-down chip customized for the Chinese market, had previously been the most advanced model Washington approved for export, yet more and more Chinese buyers are unwilling to pay for it [4]. David Sacks, the White House’s AI chief, also told Bloomberg that China has rejected U.S. H200 chips: “Clearly they don’t want these. I think the reason is that they want to achieve semiconductor independence.” [4]
From a commercial perspective, H200 sales also carry a 25% revenue remittance to the U.S. government, which substantially erodes the chip’s cost-performance advantage [1][2]. A supply-chain source told a Caijing News Agency reporter, “The H200’s performance sits exactly at the sweet spot of ‘usable but not the most advanced,’ which is really a continuation of the ‘boiling frog in warm water’ strategy,” an attempt to slow China’s domestic substitution process through dumping [2]. But with domestic chips advancing rapidly and an independent-controllability strategy in force, Chinese customers’ appetite for cut-down overseas chips is clearly waning.
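To make the economics of the remittance concrete, the back-of-the-envelope sketch below shows how a 25% revenue share compresses per-unit economics; the $30,000 selling price and all derived figures are hypothetical assumptions for illustration, not reported H200 pricing.

```python
# Back-of-the-envelope sketch (hypothetical numbers) of how a 25% revenue
# remittance squeezes the H200's effective economics.
ASP = 30_000            # assumed selling price per H200 unit, USD (illustrative only)
REMITTANCE_RATE = 0.25  # 25% of China sales revenue owed to the U.S. government

remittance = ASP * REMITTANCE_RATE
net_revenue = ASP - remittance
print(f"Remitted per unit: ${remittance:,.0f}")
print(f"Retained per unit: ${net_revenue:,.0f}")

# To keep the same net revenue per unit, the list price would have to rise by
# 1 / (1 - 0.25) - 1, roughly 33% -- the cost-performance penalty the article's
# sources argue Chinese buyers are unwilling to absorb.
required_uplift = 1 / (1 - REMITTANCE_RATE) - 1
print(f"Price uplift needed to offset remittance: {required_uplift:.0%}")
```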
Huawei’s Ascend series chips have become the leader in China’s local AI chip market. The Huawei Ascend 910 series is currently the most widely used domestic AI chip, adopted by leading enterprises including the three major telecom operators, ByteDance, Alibaba, Tencent, Baidu, and Ant Group [7]. In the first half of 2025, Huawei used the Ascend 910 series to train the 135-billion-parameter PanGu Ultra and the 718-billion-parameter PanGu Ultra MoE, proving its capability in large model training [7].
To compensate for comparatively weak single-card performance, Huawei has taken a systems-engineering approach, committing an R&D force of more than 10,000 people and mobilizing Huawei Cloud, the Computing Product Line, HiSilicon, the 2012 Laboratories, the Data Communications Product Line, and the Optical Product Line to work on the problem jointly [7]. Huawei’s three-year Ascend roadmap, announced in September 2025, shows the company launching the Ascend 950, 960, and 970 series in succession to keep doubling computing power [5].
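As a rough illustration of this systems-engineering argument, the sketch below shows how aggregate cluster throughput scales with chip count and interconnect efficiency; all per-chip TFLOPS figures, chip counts, and scaling efficiencies are hypothetical assumptions, not Ascend or NVIDIA specifications.

```python
# Aggregate deliverable compute ~= per-chip throughput x chip count x scaling efficiency.
def cluster_compute(per_chip_tflops: float, num_chips: int, scaling_efficiency: float) -> float:
    """Effective aggregate TFLOPS of a training cluster."""
    return per_chip_tflops * num_chips * scaling_efficiency

# Hypothetical comparison: a chip with ~60% of the per-unit performance can still
# match a smaller cluster of stronger chips by fielding more of them, provided the
# interconnect and software stack keep scaling efficiency high.
stronger_fewer = cluster_compute(per_chip_tflops=1000, num_chips=8_000, scaling_efficiency=0.55)
weaker_more = cluster_compute(per_chip_tflops=600, num_chips=16_000, scaling_efficiency=0.50)

print(f"Stronger chips, smaller cluster: {stronger_fewer / 1e3:,.0f} effective PFLOPS")
print(f"Weaker chips, larger cluster:    {weaker_more / 1e3:,.0f} effective PFLOPS")
```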
Of greater strategic significance is Huawei’s “Super Node” cluster architecture. This system-level innovation is aimed squarely at the NVIDIA ecosystem, seeking to build a new paradigm for AI computing infrastructure that does not depend on CUDA [5]. Huawei has also integrated deeply with domestic large models such as DeepSeek, fully opening the Lingqu 2.0 protocol and the Super Node reference architecture and open-sourcing its hardware enablement suite CANN, aiming to drive an ecosystem breakthrough [5]. According to industry analysis firm TrendForce, Huawei’s AI chip market share has reached 40% [6].
Cambricon achieved explosive growth in 2025, briefly becoming the A-share market’s new “stock king.” As of the third quarter of 2025, Cambricon’s revenue had soared nearly 24-fold year-on-year and the company turned profitable for the first time [5][9]. In the capital markets, Cambricon’s share price has risen nearly 30-fold over the past three years, at one point overtaking Moutai and earning the nickname “Han Wang” (a play on Cambricon’s Chinese name, Hanwuji) [9].
Cambricon’s success reflects the capital market’s re-rating of domestic AI chips. As a leading domestic AI chip designer, Cambricon has built a complete product matrix across cloud, edge, and end devices with its Siyuan series [6]. Meanwhile, a December 2025 test by an intelligent-computing engineer at a local state-owned enterprise showed that Baidu’s Kunlun P800 and Alibaba’s PPU deliver better token-throughput efficiency than NVIDIA’s H20 when running adapted and optimized models such as DeepSeek-R1 and Alibaba’s Tongyi Qianwen [7].
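For readers unfamiliar with the metric, the sketch below shows what such a token-throughput comparison measures: generated tokens per wall-clock second. The generate_fn interface and the dummy backend are assumptions for illustration, not the testers’ actual harness or any vendor’s API.

```python
import time

def tokens_per_second(generate_fn, prompt: str, max_new_tokens: int = 256) -> float:
    """Time one generation call and return generated tokens per second."""
    start = time.perf_counter()
    token_ids = generate_fn(prompt, max_new_tokens=max_new_tokens)
    elapsed = time.perf_counter() - start
    return len(token_ids) / elapsed

# Dummy backend so the sketch runs standalone; a real comparison would plug in
# each accelerator's own generation call here.
def dummy_backend(prompt: str, max_new_tokens: int = 256):
    time.sleep(0.01)                    # pretend to do inference work
    return list(range(max_new_tokens))  # pretend token ids

print(f"{tokens_per_second(dummy_backend, 'hello'):,.0f} tokens/s (dummy backend)")
```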
Baidu and Alibaba, as Chinese internet giants, are also actively pushing self-developed AI chips. In February 2025, Baidu brought up a 10,000-card Kunlun P800 cluster and plans a 30,000-card cluster in the future [7]. A senior engineer familiar with the product said that after nearly two years of iteration, training-accuracy alignment on the Kunlun P800 is no longer a major obstacle in typical training scenarios, and the chip also offers a cost-performance advantage when training small- and medium-parameter models [7].
Alibaba continues to invest in its PPU (Processor Unit) accelerators. Test data show that the PPU also performs well in inference scenarios, holding its own against NVIDIA’s H20 [7]. The self-developed chips of these two internet giants not only serve their own business needs but are also being offered to external customers as computing power services, forming a new force competing with NVIDIA.
Beyond Huawei, Cambricon, Baidu, and Alibaba, many other players have emerged in China’s AI chip market, forming a diversified competitive ecosystem. Moore Threads, a pioneer in general-purpose GPU development, ships MTT S-series GPUs that support mainstream AI frameworks and have made it onto the procurement lists of multiple government and enterprise data centers [8]. Hygon Information’s DCU line is pushing into general-purpose computing, gradually breaking NVIDIA’s hold in certain scenarios [6]. H3C, a brand under Unisplendour, ranks second in domestic server market share, and its 800G switch shipments soared 27-fold year-on-year in 2025, supplying core intelligent-computing network equipment to Alibaba Cloud [8].
According to IDC data, domestic AI chips now span more than ten brands, including Huawei Ascend, Baidu Kunlun, Alibaba PPU, and Cambricon [7]. The market is gradually evolving from one leader’s monopoly to a multi-tiered pattern of coexistence: “multiple parallel segmented tracks are gradually taking shape around inference computing power, industry customization, and domestic substitution.” [2]
Although domestic AI chips have made significant progress, a gap in absolute performance remains versus NVIDIA. A senior Huawei technical expert acknowledged that while the Ascend series “still lags behind the U.S. by one generation in terms of single chips,” “through methods such as stacking and clustering, the computing results are comparable to the most advanced level.” [9] Ren Zhengfei expressed a similar view in an interview with People’s Daily, stressing the importance of systems thinking and engineering-capability innovation [9].
At the system-integration level, NVIDIA’s NVLink high-speed interconnect and CUDA software ecosystem still give it a significant advantage. Domestic chips need continued optimization in multi-node, multi-card clustering, software-stack stability, and support for high-intensity parallel training [2]. The emergence of new models such as DeepSeek, however, is changing this picture: through optimizations such as MoE (Mixture of Experts), mixed precision, and efficient interconnect, some domestic chips have demonstrated competitive advantages in inference scenarios.
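As a minimal sketch of one such optimization, the snippet below runs a toy model under PyTorch’s autocast in bfloat16, the kind of mixed-precision execution that cuts memory traffic per activation and makes inference workloads more forgiving of single-chip performance gaps. The toy layers and batch are assumptions for illustration, not any vendor’s deployment stack.

```python
import torch

# Toy stand-in for a transformer block; the layer sizes are arbitrary.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).eval()

x = torch.randn(8, 1024)  # a batch of 8 hypothetical token embeddings (FP32)

# Mixed-precision inference: the matmuls run in bfloat16, halving the bytes
# moved per activation relative to FP32.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```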
The software ecosystem is one of the biggest challenges facing domestic AI chips. NVIDIA’s CUDA ecosystem, built up over many years, has become the default platform for AI developers, locking in a large base of developer expertise and application code. For domestic chips to achieve real substitution, they must establish an independent software ecosystem, which requires long-term investment and cultivation.
Huawei’s strategy is to build that ecosystem through open source and openness: the opening of the Lingqu 2.0 protocol and the Super Node reference architecture and the open-sourcing of CANN, noted above, are intended to lower developers’ migration costs [5]. Even so, a source at an A-share-listed AI chip maker pointed out, “In the past we held a hammer and looked for a nail; we made the chip first and then looked for scenarios. Now the nail determines what the hammer looks like.” [2] This shift means chip design must be deeply coupled with application scenarios; raw hardware performance is no longer the sole deciding factor.
The emergence of DeepSeek has profoundly affected the competitive landscape for AI chips. With computing costs of less than one-twentieth of OpenAI’s, DeepSeek showed that application-level innovation can offset gaps in raw computing power, shaking the capital-intensive cost logic of “brute force works miracles.” [9]
This shift matters especially for domestic chips: inference places relatively modest demands on computing power, opening up room for mid- and low-end domestic AI chips to play a role. As one analysis notes, “it is difficult to use a resource pool that mixes NVIDIA and domestic chips in ‘training’ scenarios, but ‘inference’ scenarios can be deployed flexibly.” [6] As the structure of AI computing demand shifts from training to inference, domestic chips are gaining more market opportunities.
The founder of an AI-infrastructure startup predicts that in 2026 inference computing power will grow significantly faster than training computing power; on that battlefield, whoever offers lower cost and a more stable software stack will win the “big granaries” of government, finance, and industry [2].
China’s AI chip market holds enormous growth potential. Frost & Sullivan forecasts a compound annual growth rate of 53.7% for China’s AI chip market from 2025 to 2029, with market size surging from RMB 142.537 billion in 2024 to RMB 1.34 trillion in 2029 [5]. The growth is driven mainly by the rapid penetration of AI applications across industries and the continued build-out of intelligent computing centers.
Morgan Stanley estimated in a June 2025 report that China’s self-sufficiency rate in AI GPUs will rise from 34% in 2024 to 82% in 2027 [9]. The Institute of Computing Technology of the Chinese Academy of Sciences estimates that domestic chips’ market share will exceed 45% in 2027 [5]. These forecasts suggest domestic substitution will keep accelerating in the coming years, further compressing the China market share of international vendors such as NVIDIA.
Facing this complex situation in the Chinese market, NVIDIA is taking a pragmatic approach. Jensen Huang’s remarks indicate the company is not planning large-scale marketing or public relations pushes in China; it is simply waiting for purchase orders to arrive [3][4]. This stance of “not expecting any press releases or grand ceremonies” is both an adaptation to the current political environment and a recalibration of expectations for the Chinese market.
On product strategy, NVIDIA is maintaining a presence in China with mid-to-high-end products such as the H200 while reserving the most advanced Blackwell and Rubin architectures for other markets [1]. Whether this works remains to be seen: Chinese customers are accelerating their shift to domestic alternatives, and, commercially, “the 25% sales share has greatly reduced the H200’s cost-performance advantage.” [1]
Domestic AI chips are moving from merely “usable” to “user-friendly.” 2025 was not simply a “substitution year” but a “structural shaping year” for the market [2]. The competitive landscape is gradually evolving from one leader’s monopoly to a multi-tiered pattern of coexistence, with vendors at different tiers beginning to establish relatively clear, stable positions in the application scenarios that suit them.
The key to future competition lies in the combined contest of “chip + system + software,” not simply in process technology and peak computing power [2]. Stable supply and delivery of “peripheral” enablers such as HBM (High Bandwidth Memory), copper interconnects, and advanced packaging will turn from bonus items into entry tickets. At the same time, customer demand is shifting from “buying chips” to “buying computing power services,” pushing domestic chip makers to extend into system integration and services.
The evolution of restrictions on H200 sales to China ultimately reflects the complexity and dynamism of Sino-U.S. technological competition. For NVIDIA, the conditional lifting of restrictions on the H200 preserves a certain presence in the Chinese market, but the 25% revenue remittance, the ban on more advanced architectures, and Chinese customers’ declining willingness to buy all pose substantive challenges. NVIDIA’s share of China’s data center market has fallen from 95% to the 50%–60% range, and that trend is expected to continue.
From the Chinese market’s perspective, export controls have objectively accelerated domestic substitution. Domestic players such as Huawei Ascend, Cambricon, and Baidu Kunlun have established complete product matrices and shown competitiveness in inference and in some training scenarios. China’s AI chip market is evolving from one leader’s monopoly toward a multi-tiered competitive ecosystem, and the localization rate of AI chips in national intelligent computing centers and the IT application innovation (xinchuang) sector has exceeded 90%.
Looking ahead, China’s AI chip market will keep growing rapidly, and domestic substitution will continue to deepen, driven by policy support, technological progress, and market demand. For NVIDIA, the Chinese market’s importance is declining, though its influence on the global AI industry will persist. Whether domestic chips can seize this historical window to leap from “usable” to “user-friendly” to “preferred” will shape China’s strategic position in global AI competition.
[1] AJ Securities - Research Report on NVIDIA H200 Chips (https://pdf.dfcfw.com/pdf/H3_AP202512161801681438_1.pdf)
[2] Caijing News Agency - Domestic AI Chips Bid Farewell to the “Wild West Era” (https://news.futunn.com/post/66740907/domestically-produced-ai-chips-bid-farewell-to-the-rough-and)
[3] 36Kr - Jensen Huang Addresses “China’s NVIDIA” (https://m.36kr.com/p/3628833582449925)
[4] Guancha.cn - Jensen Huang: Chinese Companies Are So Strong; We Have to Bring Out the “Real Deal” (https://www.guancha.cn/internation/2026_01_07_802992.shtml)
[5] The Paper - AI Chips in 2025: Giants Fight Fiercely, Power Restructuring (https://m.thepaper.cn/newsDetail_forward_32307594)
[6] The Paper - Three Days After “China’s NVIDIA” Goes Public, NVIDIA H200’s Restrictions Are Lifted (https://m.thepaper.cn/newsDetail_forward_32144468)
[7] Wall Street CN - What Makes China’s Computing Power Strong? (https://wallstreetcn.com/articles/3762510)
[8] Tencent News - Accelerated Domestic Substitution: Inventory of Core Enterprises in the AI Computing Power Industry Chain (https://news.qq.com/rain/a/20260105A03SCK00)
[9] Southern Window - Cambricon’s Explosive Growth: An Era Is Beginning (https://www.nfcmag.com/article/9475.html)