
AI Chip Supply Chain


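This supply chain is naturally a directed graph: companies are nodes grouped by category, and each supplier→customer relationship is an edge. A minimal sketch of that model, using only relationships stated on this page (Zeiss→ASML, ASML→TSMC, TSMC→Nvidia, Nvidia→OpenAI); the node and edge data here is a small illustrative subset, not the full chain:

```python
# Model the supply chain as a directed graph of supplier -> customer edges.
from collections import defaultdict

# Category per company (subset of the page's nodes).
nodes = {
    "Zeiss": "Equipment",
    "ASML": "Equipment",
    "TSMC": "Foundry",
    "Nvidia": "Chip Design",
    "OpenAI": "AI Labs",
}

# Directed supplier -> customer edges, as described in the sections below.
edges = [
    ("Zeiss", "ASML"),
    ("ASML", "TSMC"),
    ("TSMC", "Nvidia"),
    ("Nvidia", "OpenAI"),
]

customers = defaultdict(list)
for supplier, customer in edges:
    customers[supplier].append(customer)

def downstream(company):
    """All companies reachable from `company` via supplier -> customer edges."""
    seen, stack = set(), [company]
    while stack:
        for nxt in customers[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Everything downstream of Zeiss's optics monopoly:
print(sorted(downstream("Zeiss")))  # ['ASML', 'Nvidia', 'OpenAI', 'TSMC']
```

A traversal like `downstream` makes the choke-point argument concrete: a disruption at a single upstream node propagates to every reachable downstream company.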

Equipment

ASML

Global monopoly in EUV lithography machines required for advanced chip manufacturing.

  • Only manufacturer of EUV lithography machines
  • Machines cost ~$200 million each
  • EUV machines reportedly include a remote kill-switch capability
  • Critical choke point in the AI chip supply chain
  • Subject to Dutch government export controls

Customers:

  • TSMC: ASML supplies TSMC with essential EUV and High-NA EUV lithography systems, critical for TSMC's production of the world's most advanced semiconductor nodes (7nm, 5nm, 3nm, 2nm). TSMC is a primary customer for ASML's latest technologies.
  • Samsung: ASML provides Samsung Foundry with EUV lithography systems, enabling Samsung to manufacture advanced chips and compete at leading-edge nodes. Samsung is a major ASML customer and collaborates on High-NA EUV adoption.
  • Intel Foundry: ASML supplies Intel Foundry with EUV and is the first commercial supplier of High-NA EUV systems (TWINSCAN EXE:5000/5200). This is crucial for Intel's IDM 2.0 strategy and its goal to achieve process leadership.


Zeiss

Exclusive provider of critical optical systems for ASML's EUV machines.

Customers:

  • ASML: Zeiss is the exclusive global supplier of critical optical systems (mirrors, lenses) for ASML's EUV and High-NA EUV lithography machines, a foundational and monopolistic relationship for advanced chip manufacturing.
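Relationships like Zeiss→ASML are single-sourced: the customer has exactly one supplier. Detecting such sole-supplier choke points is a one-pass scan over the edge list. A sketch using only edges stated on this page (Applied Materials is included as a second TSMC supplier to illustrate a multi-sourced node); the edge set is an illustrative subset:

```python
# Flag customers that depend on exactly one supplier (single points of failure).
from collections import defaultdict

# Directed supplier -> customer edges from this page.
edges = [
    ("Zeiss", "ASML"),
    ("ASML", "TSMC"),
    ("Applied Materials", "TSMC"),  # TSMC is multi-sourced for equipment
    ("TSMC", "Nvidia"),
]

suppliers = defaultdict(set)
for supplier, customer in edges:
    suppliers[customer].add(supplier)

def sole_suppliers():
    """Map each single-sourced customer to its only supplier."""
    return {c: next(iter(s)) for c, s in suppliers.items() if len(s) == 1}

print(sole_suppliers())  # {'ASML': 'Zeiss', 'Nvidia': 'TSMC'}
```

In this subset, ASML's dependence on Zeiss optics and Nvidia's dependence on TSMC manufacturing surface immediately, while multi-sourced TSMC does not.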

SMEE (Shanghai Micro Electronics Equipment)

Chinese lithography equipment manufacturer developing DUV technology.

  • Developing DUV lithography tools (e.g., 28nm capable SSA800/900)
  • Working on more advanced DUV immersion systems
  • Key player in China's chip independence strategy
  • Cannot yet produce EUV machines

Customers:

  • SMIC: SMEE, China's leading domestic lithography tool maker, supplies DUV scanners (e.g., 28nm-capable SSA800) to SMIC, supporting China's semiconductor self-sufficiency efforts, especially for mature nodes and potentially enabling advanced nodes via multi-patterning.

Applied Materials

Largest semiconductor equipment company covering deposition, etch and CMP tools used by every leading-edge fab.

  • Leader in PVD/CVD, etch and CMP equipment
  • Critical for 3D NAND, logic and packaging steps
  • Supplies TSMC, Samsung, Intel and others
  • Subject to US export controls (China restrictions)

Customers:

  • TSMC: Applied Materials supplies critical deposition, etch and CMP tools that enable every advanced TSMC node.
  • Samsung: Samsung Foundry relies on Applied Materials equipment across logic and memory fabs.
  • Intel Foundry: Intel's new Arizona and Ohio fabs deploy Applied Materials platforms for GAAFET manufacturing.

KLA

Dominant player in process-control, wafer inspection and metrology systems.

  • Essential for yield-learning at advanced nodes
  • Sells to all top foundries and memory makers

Customers:

  • TSMC: KLA's inspection and metrology systems are indispensable for TSMC's yield ramp at 3 nm and below.

Tokyo Electron

Top-three global equipment vendor providing etch, deposition and photoresist coat/develop tools.

  • Japanese powerhouse with strong co-development at TSMC & Samsung
  • Key for FinFET/GAAFET patterning and 3D NAND fabrication

Customers:

  • TSMC: Tokyo Electron's coat/develop and etch tools form the backbone of TSMC's patterning flows.

Nikon

Supplier of immersion DUV lithography scanners (No.2 after ASML in lithography).

  • Competitor to ASML at mature nodes
  • Important for 28–90 nm nodes and overlay tools

Foundries

TSMC (Taiwan Semiconductor)

World's largest dedicated semiconductor foundry, specializing in advanced process nodes.

  • Market leader in 3nm and 5nm processes
  • Supplies to NVIDIA, AMD, Apple
  • Located primarily in Taiwan
  • ~54% market share in foundry services

Customers:

  • Nvidia: TSMC is the primary foundry for Nvidia, manufacturing its leading AI GPUs (A100, H100, Blackwell) using its most advanced process nodes (e.g., 4NP for Blackwell). This is critical for Nvidia's market leadership.
  • AMD: TSMC is a key foundry partner for AMD, manufacturing its advanced CPUs, GPUs (including Instinct AI accelerators like MI300), and future 2nm 'Venice' chips, crucial for AMD's competitiveness in HPC and AI.
  • Cerebras: TSMC manufactures Cerebras Systems' unique Wafer-Scale Engines (WSE), with WSE-2 on 7nm and WSE-3 on 5nm. This partnership is essential for producing these massive, specialized AI chips.
  • OpenAI: OpenAI is partnering with TSMC to manufacture its first custom-designed AI chips using TSMC's 3nm process. This is a strategic move for OpenAI to optimize hardware and reduce reliance on external suppliers.
  • Meta AI: Meta is collaborating with TSMC to manufacture its in-house AI chips (MTIA program) on TSMC's 5nm process. This is key to Meta's strategy for custom silicon for its AI workloads.
  • Google: TSMC is set to manufacture Google's next-generation TPU v7 (via design partner MediaTek), continuing its role as a key foundry for Google's custom AI silicon, essential for Google's AI and cloud services.


Samsung Semiconductor

Major player in memory and logic chip manufacturing with advanced facilities.

  • Competes in 3nm and 5nm processes
  • Strong in memory chip production
  • Facilities in Korea and US
  • ~17% foundry market share

Customers:

  • Nvidia: Samsung Foundry has manufactured GPUs for Nvidia (e.g., RTX 30 series on 8nm) and supplies GDDR7 memory. It remains a potential secondary foundry source for Nvidia to diversify its supply chain.
  • AMD: AMD has explored Samsung Foundry for 4nm production (SF4X) as a dual-source, but recent reports suggest a shift to TSMC for these nodes. Samsung could still be a partner for other nodes or components.
  • Groq: Groq has selected Samsung Foundry in Taylor, Texas, to manufacture its next-generation Language Processing Units (LPUs) using Samsung's 4nm (SF4X) process, critical for Groq's product roadmap.


Intel Foundry

Traditional CPU giant expanding into foundry services with IDM 2.0 strategy.

  • Investing heavily in new fabs
  • Developing Intel 4 and 3 processes
  • US-based manufacturing
  • Focus on regaining technology leadership

Customers:

  • Intel AI: Intel Foundry manufactures Intel's own designed chips, including CPUs and AI accelerators like the Gaudi series, fundamental to Intel's IDM (Integrated Device Manufacturer) model.


SMIC (Semiconductor Manufacturing International Corporation)

China's largest chip manufacturer, developing advanced process nodes despite restrictions.

  • Achieved 7nm process using DUV (multi-patterning)
  • Working on 5nm development
  • Subject to US export controls (limited access to EUV)
  • Key to China's domestic supply chain efforts

Customers:

  • Huawei: SMIC is Huawei's primary Chinese foundry partner, manufacturing Kirin SoCs and Ascend AI processors (e.g., on 7nm using DUV). This is vital for Huawei due to US sanctions restricting access to global foundries.

UMC (United Microelectronics)

Taiwan-based pure-play foundry focused on 28 nm and above with selective 14/12 nm capacity.

  • ~6% global foundry market share
  • Key second-source for mature logic and specialty nodes

GlobalFoundries

US-headquartered pure-play foundry specializing in mature and RF/process-optimized nodes.

  • Exited the leading-edge (sub-10 nm) race in 2018
  • Strategic US/EU fabs aligned with CHIPS Act priorities and customers

Chip Designers

NVIDIA

Leader in GPU design and AI accelerator chips.

  • Designs H100, A100 AI chips
  • ~80% market share in AI chips
  • Partners with TSMC for manufacturing
  • Pioneered CUDA ecosystem

Customers:

  • AWS: AWS deploys tens of thousands of Nvidia H100 GPUs in its EC2 UltraClusters.
  • Azure (Microsoft): Microsoft Azure is a launch partner for Nvidia Blackwell HGX racks.
  • ASE Group: Nvidia likely utilizes ASE Group, the world's largest OSAT, for a portion of its packaging needs for GPUs and AI chips, leveraging ASE's scale and broad capabilities for volume production.
  • Amkor: Nvidia partners with Amkor for AI chip packaging and testing, especially for its US-based manufacturing initiatives. Amkor is building a new advanced packaging facility in Arizona.
  • Deepseek: A DeepSeek research paper disclosed that it used about 2,000 Nvidia H800 GPUs, a variant designed to comply with the U.S. export controls issued in 2022.
  • OpenAI: OpenAI is a major consumer of Nvidia's AI GPUs (e.g., H100, Blackwell) for training and running its large-scale AI models, making Nvidia a critical hardware supplier for OpenAI's current operations.
  • Google: Google Cloud offers Nvidia's AI GPUs (A100, H100, Blackwell) for its cloud AI services. Nvidia is an important supplier for Google's cloud AI infrastructure offerings.
  • Meta AI: Meta is a massive consumer of Nvidia's AI GPUs, investing billions to power its AI initiatives for social media, recommendations, and Llama models. Nvidia is a critical supplier for Meta's AI infrastructure.
  • X.ai: X.ai utilizes a large cluster of Nvidia H100 GPUs (reportedly 200,000) to train its Grok LLMs, with plans to adopt H200/Blackwell. Nvidia is an indispensable hardware supplier for X.ai.
  • ASE (Foxconn): Nvidia supplies its AI chips and components to Foxconn (node 'ASE') for assembly into AI servers and supercomputers. Foxconn is a key contract manufacturing partner for Nvidia's AI infrastructure.


AMD

Major chip designer competing in CPU, GPU, and AI accelerator markets.

  • Designs MI300 AI accelerators
  • Uses TSMC manufacturing
  • Growing presence in data centers
  • ROCm software ecosystem

Customers:

  • AWS: AWS offers AMD Instinct MI300X instances for inference-optimized workloads.
  • Azure (Microsoft): Azure is an early customer of AMD's MI300A/X accelerators for its AI superclusters.
  • ASE Group: AMD likely uses ASE Group for some of its packaging requirements, given ASE's market leadership. AMD employs a multi-OSAT strategy for its diverse product portfolio.
  • Amkor: Amkor is a strategic OSAT option for AMD, particularly for US-based supply chains, leveraging Amkor's advanced packaging capabilities and proximity to US fabs like TSMC Arizona.
  • ASE (Foxconn): AMD supplies its AI accelerators and components to Foxconn (node 'ASE') for assembly into AMD-based AI server systems, leveraging Foxconn's large-scale manufacturing capabilities.


Groq

AI chip company focusing on Language Processing Units (LPUs) for ultra-low latency inference.

  • Developed LPU architecture for fast inference
  • Uses Samsung 4nm for next-gen chips
  • Claims superior inference speed and energy efficiency
  • Founded by former Google TPU engineers


Intel AI (Products)

Intel's AI chip products, including Gaudi accelerators.

  • Develops Gaudi AI accelerators
  • Acquired Habana Labs
  • Internal manufacturing capability (via Intel Foundry)
  • OneAPI software platform

Customers:

  • ASE Group: Intel may use ASE Group for certain packaging needs or overflow capacity, supplementing its internal capabilities and strategic OSAT partnerships like with Amkor for EMIB.
  • Amkor: Intel Foundry partners with Amkor to qualify and enable its EMIB advanced packaging at Amkor facilities (Korea, future US), enhancing flexibility for foundry customers.
  • Google: Google Cloud offers Intel's Gaudi AI accelerators, providing customers an alternative for AI workloads. Intel supplies Gaudi chips to Google for its cloud infrastructure.


Cerebras Systems

Developer of Wafer-Scale Engines (WSE), the largest AI chips in the world.

  • WSE-3 is current flagship (5nm, 4 trillion transistors)
  • Specialized for AI training and inference
  • Uses TSMC for manufacturing
  • Offers CS-3 systems built around WSE-3


Huawei (HiSilicon)

Chinese technology company developing AI chips (Ascend) and SoCs (Kirin) through its HiSilicon division.

  • Designs Ascend AI processors and Kirin SoCs
  • Partners with SMIC for manufacturing due to sanctions
  • Subject to US trade restrictions
  • Focusing on domestic supply chain for chips

Customers:

  • Deepseek: Huawei supplies DeepSeek with Ascend 910C chips for inference.


ARM

Licensor of CPU/GPU architectures underlying most mobile and emerging AI accelerators.

  • Neoverse for data-center CPUs
  • Widely licensed by Nvidia, AWS, Google, etc.

Customers:

  • Nvidia: Nvidia licenses ARM CPU cores (Grace) for its Grace Hopper/Blackwell superchips.
  • AWS: AWS Graviton and Trainium chips are built on ARM Neoverse designs.

EDA

Cadence Design Systems

EDA software provider (design, verification, implementation) indispensable to chip designers.

  • Fusion/Innovus, Spectre simulation suites
  • Collaborates with TSMC and Samsung on advanced PDKs

Customers:

  • Nvidia: Nvidia uses Cadence digital and analog toolchains for chip implementation and verification.

Synopsys

Largest EDA vendor and licensor of interface and ARC processor IP.

  • Design Compiler & Fusion Compiler toolchains
  • Owns critical interface IP (PCIe, DDR, HBM)

Customers:

  • AMD: AMD relies on Synopsys EDA tools and interface IP (PCIe, DDR) for its MI300 family.

Memory

Micron Technology

US memory giant producing DRAM, NAND and HBM for AI accelerators.

  • HBM3E supplier for Nvidia Blackwell
  • Boise-based R&D; fabs in US, Taiwan, Japan

Customers:

  • Nvidia: Micron supplies HBM3E stacks for Nvidia Blackwell GPUs, a critical component for memory bandwidth.

SK Hynix

Korean memory leader and primary HBM3 supplier for Nvidia H100/H200.

  • Developed world-first HBM3E 12-high stack
  • Joint ventures with TSMC on advanced packaging

Customers:

  • Nvidia: SK Hynix is the primary HBM3 supplier for Nvidia's Hopper GPUs and early Blackwell shipments.

Assembly & Testing

Foxconn (node 'ASE')

World's largest electronics manufacturer, key assembler of AI servers.

  • Major Apple supplier (general electronics)
  • Key assembler of AI servers for Nvidia, AMD, etc.
  • Facilities across Asia and globally
  • Handles final product assembly for many tech giants

Customers:

  • OpenAI: Foxconn (node 'ASE') assembles AI servers (often using Nvidia/AMD chips) and supplies these complete systems to large-scale AI consumers like OpenAI for their compute infrastructure.
  • Google: Foxconn (node 'ASE') assembles AI server systems which are supplied to Google for its Cloud services and internal AI research, fulfilling Google's need for vast AI server fleets.
  • Meta AI: Foxconn (node 'ASE') manufactures AI server systems procured by Meta for its extensive AI infrastructure, supporting its social media platforms and generative AI model development.
  • X.ai: Foxconn (node 'ASE') assembles high-performance AI server systems (e.g., Nvidia H100 based) acquired by X.ai for building supercomputing clusters to train LLMs like Grok.


ASE Group

World's largest semiconductor packaging and testing (OSAT) provider.

  • Advanced packaging solutions (CoWoS-like, FOCoS, SiP)
  • Tests final chip products
  • Key role in supply chain for fabless & IDMs
  • Facilities in multiple countries

Customers:

  • OpenAI: Indirect relationship: ASE Group packages chips for designers (e.g., Nvidia) whose products OpenAI consumes. For its custom chips, OpenAI partners directly with TSMC and Broadcom for manufacturing and packaging.
  • Meta AI: Indirect relationship: ASE Group packages chips for various vendors whose products Meta consumes. For its custom MTIA chips, Meta partners directly with TSMC for manufacturing and packaging.


Amkor Technology

Major global OSAT provider with advanced packaging capabilities.

  • Provides advanced packaging (e.g., SWIFT, S-SWIFT, HDFO)
  • Key partner for US-based chip initiatives
  • Building new facility in Arizona
  • Serves fabless, IDM, and foundry customers


JCET Group

China's largest OSAT offering flip-chip, SiP and fan-out panel-level packaging.

Siliconware Precision Industries (SPIL)

Taiwanese OSAT subsidiary of ASE focusing on bumping and advanced SiP.

AI Labs & End Users

Amazon Web Services

Largest cloud provider, massive buyer of AI accelerators and developer of Trainium/Inferentia chips.

  • Ongoing purchases of Nvidia H100/H200
  • Designs custom Graviton/Trainium chips (fabricated by TSMC)

Microsoft Azure

Cloud platform powering OpenAI and hosting Nvidia, AMD, and custom Cobalt/Maia AI accelerators.

  • Cobalt/Maia chips fabbed at TSMC
  • Major H100/H200 and MI300 customer

OpenAI

Leading AI research company focused on AGI development.

  • Developed GPT-4, DALL-E, Sora
  • Requires massive compute infrastructure (Nvidia GPUs, custom chips in development)
  • Partnership with Microsoft
  • Focus on AI safety research


Google (DeepMind & Cloud)

Pioneer in AI research and cloud AI services.

  • Developed Gemini, PaLM, TPUs
  • Massive TPU infrastructure for internal use and Cloud
  • Leading AI research lab (DeepMind)
  • Offers Nvidia GPUs and Intel Gaudi on Cloud


Meta AI

Major AI research organization with open source focus and large infrastructure.

  • Developed LLaMA models
  • Building massive GPU clusters (Nvidia)
  • Developing in-house AI chips (MTIA)
  • Focus on generative AI and metaverse applications


X.ai

AI company by Elon Musk developing large language models like Grok.

  • Developing Grok model series
  • Building massive H100 GPU clusters for training
  • Focus on "truthful" and "maximum curiosity" AI
  • Aims for AGI development


DeepSeek

Chinese AI company developing large language models.

  • Rose to prominence with the DeepSeek-R1 reasoning model
  • Focus on open-source AI models

