While investor attention has concentrated on US software giants, the so-called Magnificent Seven, a less heralded group of Asian hardware manufacturers is quietly shaping the next wave of artificial intelligence by building the hardware that makes large models possible, with profound implications for global supply chains and geopolitics.
Ninety One argues the hardware cycle will not end in a dramatic bust but in a recalibration that resets unit economics and reprices risk. The essential point is structural: leading frontier models may be trained in the United States, but the equipment they run on is fabricated, packaged and integrated across Taiwan, South Korea and parts of Southeast Asia. That regional ecosystem, anchored by deep engineering know‑how, dense supplier networks and high R&D intensity, remains central to global AI capacity.
At the core are three firms that determine the pace of advancement. TSMC sets the ceiling for compute: as the world’s largest neutral foundry it fabricates the logic dies that sit at the heart of modern accelerators. The company’s neutral foundry model, tight process discipline and an ecosystem linking Japanese materials, Dutch lithography tools and Taiwanese engineering create a manufacturing platform customers trust. Recent developments illustrate both demand and geopolitical complexity: TSMC has expanded manufacturing footprints outside Taiwan, including a high‑profile Arizona facility that produced Nvidia’s first U.S. Blackwell wafer, yet its planned expansion in Japan shows signs of timetable reassessment as the company aligns capacity with customer demand. Industry reporting suggests the Kumamoto site and other overseas projects plug into local ecosystems, but some construction timelines have been adjusted.
Memory sits alongside compute as a limiting factor for AI throughput. SK Hynix and Samsung are among a small number of suppliers able to deliver the high‑bandwidth memory (HBM) modules that sit beside GPU dies and feed them at required speeds. According to industry accounts, cloud projects and strategic supply agreements have already allocated much of next year’s HBM output, and buyers from consumer device makers to cloud providers are feeling the squeeze: DRAM and NAND prices have risen unusually sharply, reflecting infrastructure demand for training and large‑scale data movement.
Beyond chips and memory, system integration and data movement matter. Accton supplies the high‑speed switches that link thousands of accelerators inside hyperscale data centres as networks move from 400‑gigabit toward 800‑gigabit and 1.6‑terabit designs. ASE brings GPUs and HBM together through advanced packaging and co‑design partnerships with major customers, a capability that gives it early visibility on next‑generation architectures. Power and thermal management, the other side of the physics problem, have been fast pivot points: Delta Electronics re‑engineered power systems when server power requirements rose sharply for a leading accelerator, illustrating how quickly suppliers must adapt to new thermal and power envelopes.
The region’s supply chain depth also includes upstream materials and chemicals. Anji Microelectronics, for example, supplies slurries and wet chemicals used in advanced‑node fabrication and is gaining share as China expands domestic capacity, adding strategic resilience to the regional materials base.
Recent reporting underscores how demand is translating into macroeconomic effects and state policy. Taiwan’s export data shows a sharp surge driven by semiconductors and high‑performance computing, with November exports up markedly year‑on‑year and electronics shipments to the United States rising strongly. The island has simultaneously pursued a broader strategic pivot: Reuters reported that on December 12, 2025, Taiwan’s president inaugurated a new 15‑megawatt cloud centre housing a supercomputer equipped with thousands of Nvidia H200 and Blackwell chips as part of a national “Ten Major AI Infrastructure Projects” programme aimed at positioning Taiwan as a leader in AI services as well as manufacturing.
Geopolitics and shifting supply priorities complicate the picture. Nvidia is reported to be weighing increases in H200 output to meet robust demand from Chinese cloud firms, even as export controls and tariffs shape flows. At the same time, governments are mobilising industrial policy: South Korea is planning a state‑backed foundry to strengthen domestic logic capacity and reduce import dependence for critical chips, while the United States and partners pursue onshore production for strategic lines of wafers.
All of which illustrates Ninety One’s central claim: the companies enabling AI are system firms that cross manufacturing, memory, connectivity and thermal management, and they are concentrated in markets that still trade with emerging‑market discounts. Market prices thus may not fully reflect strategic importance. Industry data shows that some of these suppliers sit at the physical limits of the AI cycle, where performance is constrained by fabrication nodes, memory bandwidth and power delivery, yet their valuations lag those of US peers supporting front‑end model development.
That disconnect presents an investment narrative and a strategic vulnerability. Upgrading fabs, adding HBM capacity, converting switches to higher line rates and re‑engineering power systems all require capital and time. TSMC’s choices on fab node upgrades and overseas capacity, including deliberations over whether to move a Japanese fab to more advanced 4nm/5nm processes or to slow construction, will shape how quickly supply can scale. Similarly, memory suppliers face the challenge of ramping very large volumes of HBM without disrupting consumer markets.
For companies building custom accelerators, the dependence on this ecosystem remains acute. Cloud providers may design their own chips, but they still require third‑party foundries, memory, packaging and system integration. The consequence is interdependence: more custom silicon looks likely to amplify demand for the underlying manufacturing and systems firms rather than render them redundant.
The strategic and market implications are twofold. First, the industrial concentration in East Asia confers unrivalled speed and integration today but also concentrates geopolitical and supply risk. Second, the current relative under‑pricing of strategic suppliers, what Ninety One calls the Secret Seven, may offer a window for long‑term investors seeking exposure to the physical backbone of AI. Whether valuations reprice through a modest market correction or a sustained rerating will depend on how quickly these firms can expand capacity, how policy and trade measures reshape flows, and whether memory and packaging bottlenecks are relieved.
In short, the next leg of the AI revolution will be decided as much in cleanrooms, fabs and data‑centre basements as in model labs. Industry reporting and government initiatives suggest demand is only intensifying; how supply scales, where it is located, and who captures the economics will determine whether the Secret Seven remain underappreciated or finally assume centre stage.
Source: Noah Wire Services