Digital & Technology Global Critical Significance

Data Centre Supply Chain

The data centre supply chain assembles the physical infrastructure of the digital economy — from semiconductor fabrication through server manufacturing, facility construction, and electrical systems to the cloud services consumed by billions of users and enterprises. Driven by AI workload growth, combined hyperscaler capital investment is approaching $300B+ annually in 2025, creating acute demand constraints across the chain: NVIDIA GPUs, custom silicon, liquid cooling systems, power transformers, and grid connections are all in short supply. The chain is the physical embodiment of the AI infrastructure buildout, and its constraints are setting the pace of AI capability deployment.

6 Chain Steps
3 Chokepoints
4 Supporting Industries
6 Key Themes
Risk Chokepoints

Where This Chain Is Most Vulnerable

Chokepoints are steps where geographic concentration, technical barriers, or long lead times create structural supply risk with limited short-term alternatives.

AI Chip Supply — TSMC CoWoS Bottleneck

Step 1 · ISIC 2610

TSMC's advanced packaging (CoWoS) capacity is the binding constraint on AI chip availability. ASML EUV machine delivery times constrain TSMC's capacity expansion. NVIDIA GPU allocation queues stretch 12+ months for all hyperscalers.

Geopolitical — Competitive Control

Data Centre Construction & Power Connection

Step 3 · ISIC 4100

Facility construction timelines (18-36 months) and grid power connection queues (3-7 years in constrained markets) are setting the pace of AI infrastructure deployment. Power transformer shortages add a secondary constraint.

Operational — Infrastructure

Grid Power Capacity

Step 4 · ISIC 3510

AI-driven data centre power demand is straining grid capacity in key markets. US grid operators warn of 47-84 GW additional demand by 2030. Power transformer lead times of 2-3 years are a material bottleneck for new connections.

Operational — Infrastructure
Step Analysis

Detailed Step Breakdown

Each step's role in the chain, key data points, and chokepoint detail where applicable.

1

Manufacture of Electronic Components and Boards

AI accelerators, CPUs, memory, and custom silicon (ASICs)
Chokepoint Component

The logic and memory silicon that powers data centre workloads: NVIDIA H100/B100 GPU clusters for AI training, AMD EPYC and Intel Xeon CPUs for general compute, and custom ASICs (Google TPU, Amazon Trainium/Inferentia, Microsoft Maia, Meta MTIA). TSMC fabricates virtually all advanced AI chips at its Taiwan facilities. HBM (High Bandwidth Memory) from SK Hynix, Samsung, and Micron is co-packaged with GPUs and is a co-equal bottleneck alongside the GPUs themselves.

Why this is a chokepoint: TSMC's CoWoS advanced packaging capacity is the binding constraint on AI chip production as of 2024-2025. Each H100/B100 GPU is built on a single large CoWoS package that integrates the GPU die with 6-8 HBM stacks; TSMC can produce only a fixed number of such packages per month. NVIDIA's allocation queue stretches 12+ months; Microsoft, Google, and Amazon are buying all available supply. ASML EUV machine delivery times (12-18 months per tool) constrain TSMC's ability to rapidly expand capacity.
  • NVIDIA H100: ~80B transistors on TSMC's 4N (4nm-class) process, GPU die ~814mm², packaged with 6 HBM3/HBM2e stacks
  • HBM3e: SK Hynix dominant supplier; Samsung qualifying; Micron entering the market in 2024
  • CoWoS (Chip-on-Wafer-on-Substrate) packaging: TSMC is the sole production source; capacity roughly doubled in 2024 but remains insufficient
  • NVIDIA revenue: $60B (FY2024), forecast $130B+ (FY2025) — driven almost entirely by AI data centre GPUs
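
The arithmetic behind the packaging bottleneck can be sketched in a few lines. All capacity, packages-per-wafer, and yield figures below are illustrative assumptions, not TSMC disclosures:

```python
# Illustrative sketch: how CoWoS packaging capacity caps monthly GPU output.
# Every number here is an assumption for illustration only.

def gpu_output_per_month(cowos_wafers_per_month: int,
                         packages_per_wafer: int,
                         packaging_yield: float) -> int:
    """Good GPU packages producible per month from CoWoS wafer capacity
    (one large CoWoS package per GPU)."""
    return int(cowos_wafers_per_month * packages_per_wafer * packaging_yield)

# Assumed: 30k CoWoS wafers/month, ~30 large packages per 300mm wafer,
# 90% packaging yield.
print(f"Illustrative monthly GPU output: {gpu_output_per_month(30_000, 30, 0.90):,}")
```

Under these assumed inputs the model shows why demand queues form: output is a hard product of wafer starts, packages per wafer, and yield, and none of the three can be raised quickly.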

View ISIC 2610 industry profile →

2

Manufacture of Computers and Peripheral Equipment

Server manufacturing, GPU cluster systems, and networking hardware
Component

System integrators (Super Micro Computer, Dell, HPE, Lenovo) and ODMs (Foxconn, Quanta, Wiwynn) assemble chips, memory, storage, and networking into server nodes, GPU rack systems, and storage arrays. AI servers (DGX H100, DGX B200) contain 8-16 GPUs per node and require custom power delivery, cooling infrastructure, and networking (InfiniBand or Ethernet at 400-800Gb/s). Supply chain bottlenecks cascade from chips through to servers: a GPU allocation becomes a complete rack allocation.

  • Super Micro: an estimated ~70% of the custom AI server market (NVIDIA DGX-adjacent); manufacturing in the US and Taiwan
  • DGX H100 system: 8× H100 GPUs + NVLink + InfiniBand; list price ~$400,000
  • Networking: Arista, Cisco, and NVIDIA InfiniBand for AI fabric; Broadcom custom ASICs for hyperscaler switching
  • AI cluster scale: Meta Llama training used 49,152 H100s on a single job (2024)
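
To see why a GPU allocation becomes a rack and power allocation, a rough footprint calculation helps. The per-node power draw and PUE below are assumptions for illustration, not Meta or NVIDIA disclosures:

```python
# Rough cluster footprint: nodes, IT load, and facility power for a
# GPU allocation. Per-node power and PUE are assumed figures.

def cluster_footprint(total_gpus: int, gpus_per_node: int,
                      node_kw: float, pue: float):
    """Return (node count, IT load in MW, facility power in MW)."""
    nodes = total_gpus // gpus_per_node
    it_mw = nodes * node_kw / 1000          # IT load alone
    return nodes, it_mw, it_mw * pue        # facility power incl. overhead

# Meta's 49,152-GPU Llama training job, assuming ~10 kW per 8-GPU node
# and a PUE of 1.2 (both assumptions):
nodes, it_mw, facility_mw = cluster_footprint(49_152, 8, 10.0, 1.2)
print(f"{nodes:,} nodes, {it_mw:.1f} MW IT, {facility_mw:.1f} MW facility")
```

Even with conservative assumptions, a single training job of this scale demands tens of megawatts of contiguous facility capacity, which is why chip allocations cascade into construction and power decisions.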

View ISIC 2620 industry profile →

3

Construction of Buildings

Data centre facility design, construction, and fit-out
Chokepoint Infrastructure

Purpose-built data centre facilities require specialised civil works plus mechanical, electrical, and plumbing (MEP) construction: raised floors or concrete slabs for immersion/liquid cooling, N+1 redundant UPS systems, cooling towers or liquid cooling distribution, fire suppression, and security systems. Construction lead times run 18-36 months for greenfield hyperscale campuses. The construction industry's capacity to build data centres fast enough to absorb AI chip production is now the binding constraint in several markets (US, Ireland, Netherlands). Planning consent is a growing bottleneck in Europe.

Why this is a chokepoint: The combination of 18-36 month construction lead times and power connection queues of 3-7 years in dense markets (Northern Virginia, Singapore, Dublin, Amsterdam) means the data centre facility pipeline cannot be rapidly accelerated. Microsoft, Google, and Amazon have committed $300B+ in combined datacenter capex for 2025 — but power connection timelines in PJM (US mid-Atlantic grid) stretch to 2031+. This constrains where and when AI capacity can be deployed.
  • Hyperscaler datacenter capex: Microsoft $80B (FY2025), Google $75B, Amazon $80B+, Meta $37B
  • PJM interconnection queue: 3,400+ GW of generation and 900+ GW of load requests in queue as of 2024
  • Northern Virginia ('Data Center Alley'): an oft-cited (though disputed) claim holds that ~70% of the world's internet traffic passes through Loudoun County, VA
  • Planning rejection: Amsterdam and Singapore imposed moratoria on new data centres (2019-2023)
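
The deployment-pace argument reduces to a critical-path calculation: construction and grid interconnection can proceed in parallel, but the slower track sets the go-live date. A minimal sketch, with illustrative timelines:

```python
# Critical-path sketch for data centre go-live: construction and grid
# interconnection run in parallel; the slower track dominates.
from datetime import date

def earliest_go_live(start: date, construction_months: int,
                     power_queue_months: int) -> date:
    """Earliest go-live given parallel construction and interconnection
    tracks (day-of-month kept as-is; use day 1 to avoid overflow)."""
    months = max(construction_months, power_queue_months)
    years, month_index = divmod(start.month - 1 + months, 12)
    return date(start.year + years, month_index + 1, start.day)

# Illustrative: a 30-month build against a 60-month interconnection queue.
print(earliest_go_live(date(2025, 1, 1), 30, 60))  # queue dominates: 2030-01-01
```

The point of the sketch: shaving a year off construction changes nothing when the interconnection queue is twice as long, which is why power availability now drives site selection.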

View ISIC 4100 industry profile →

4

Electric Power Generation, Transmission and Distribution

Grid power supply, on-site generation, and Power Purchase Agreements
Chokepoint Energy

Data centres are power-intensive: a single hyperscale campus consumes 100-500 MW, comparable to a small city. AI GPU clusters consume 5-10× more power per rack than conventional compute (30-120 kW/rack vs 3-10 kW/rack). Power procurement — utility grid connections, on-site gas turbines, solar/wind PPAs — is the defining site selection factor for new facilities. Nuclear power is attracting serious hyperscaler interest (Microsoft signed a PPA to support the Three Mile Island restart; Google signed the first-ever corporate SMR PPA).

Why this is a chokepoint: Grid power availability is the binding site selection constraint in mature data centre markets. US grid operators are warning that AI-driven demand could require 47-84 GW of new capacity by 2030 (EPRI 2024). Power transformer lead times have extended to 2-3 years due to supply chain constraints, becoming a bottleneck for grid connection and on-site genset deployment. Power Purchase Agreement prices for renewable energy have risen 30-40% due to demand from data centres and electrification.
  • Data centre electricity use: 1-2% of global electricity consumption currently; forecast 3-4% by 2030 (IEA)
  • Power transformer shortage: lead times have extended from ~52 weeks to 3+ years for large transformers (2024)
  • Nuclear PPAs: Microsoft (Three Mile Island), Google (Kairos Power), Amazon (Dominion Energy) — all announced 2024
  • PUE (Power Usage Effectiveness): hyperscalers average ~1.1-1.2; legacy colocation ~1.5-2.0
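
PUE ties facility power to usable IT load: IT load = facility power / PUE. A short sketch of how many AI racks a campus can host, using assumed figures:

```python
# How many AI racks a campus can host: facility power divided by PUE
# gives IT load; IT load divided by per-rack draw gives rack count.

def racks_supported(campus_mw: float, pue: float, rack_kw: float) -> int:
    """Whole racks supportable by a campus at a given PUE and rack draw."""
    it_kw = campus_mw * 1000 / pue   # usable IT load in kW
    return int(it_kw // rack_kw)

# Assumed: a 100 MW campus at PUE 1.2 hosting 100 kW AI racks.
print(racks_supported(100, 1.2, 100))
```

At legacy rack densities (5 kW) the same campus would host twenty times as many racks, which is why AI workloads are forcing both cooling and power architecture redesigns.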

View ISIC 3510 industry profile →

5

Data Processing, Hosting and Related Activities

Cloud infrastructure operations — IaaS, PaaS, GPU-as-a-service
Service

The hyperscale cloud providers (AWS, Microsoft Azure, Google Cloud) and GPU cloud platforms (CoreWeave, Lambda Labs) operate the physical infrastructure to deliver compute, storage, and AI services to enterprise and developer customers. This is where the capital investment converts into recurring revenue: AWS alone earns ~$100B ARR from cloud services. AI services (AWS Bedrock, Azure OpenAI, Google Vertex AI) are now the fastest growing cloud revenue category, commanding premium pricing over commodity compute.

  • Hyperscaler market share: AWS ~31%, Microsoft Azure ~25%, Google Cloud ~11% (2024)
  • CoreWeave valuation: $23B (2024 fundraise) — GPU cloud specialist serving AI companies
  • AI cloud pricing: H100 GPU $3.50-$8/hour on-demand; reserved contracts $2.50-$4.50/hour
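
The rental prices above imply a payback calculation that GPU cloud operators live by. A simplified sketch, with the capex, utilization, and opex figures below all assumptions for illustration:

```python
# Toy GPU-rental payback model. Capex, utilization, and opex figures
# in the example call are assumptions, not disclosed operator economics.

def payback_months(gpu_capex: float, hourly_rate: float,
                   utilization: float, hourly_opex: float) -> float:
    """Months to recover GPU capex from rental revenue net of opex,
    assuming a 720-hour month."""
    net_per_hour = hourly_rate * utilization - hourly_opex
    return gpu_capex / (net_per_hour * 24 * 30)

# Assumed: $30k per GPU (incl. server share), $3.50/hr effective rate,
# 80% utilization, $0.50/hr power and facility opex.
print(f"Payback: {payback_months(30_000, 3.50, 0.80, 0.50):.1f} months")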

View ISIC 6311 industry profile →

6

Wired Telecommunications Activities — cloud services

Fibre backbone, subsea cables, and last-mile connectivity
End Use

Data centres require massive fibre connectivity: backbone networks linking facilities, subsea cables connecting continents, and CDN edge nodes for last-mile content delivery. Hyperscalers now own or co-own significant subsea cable infrastructure (Google Equiano, Meta 2Africa). Fibre optic demand is surging with AI cluster interconnect requirements — AI training clusters need 400/800G optical transceivers at scale.

  • Optical transceiver demand: AI cluster scale-out driving 400G/800G adoption; Coherent (formerly II-VI) and Lumentum are supply constrained
  • Subsea cables: new hyperscaler-owned routes bypassing traditional telecom carriers
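
Transceiver demand scales with fabric tiers, not just GPU count: in a non-blocking Clos network each tier adds a full set of links, and each link terminates in a transceiver at both ends. A rough sketch under an assumed two-tier, all-optical topology (real clusters use copper in-rack and may oversubscribe, so treat this as an upper-bound estimate):

```python
# Rough optics count for an AI cluster fabric. Assumes a non-blocking
# Clos topology with one NIC port per GPU and all links optical.

def transceiver_count(gpus: int, tiers: int = 2,
                      optics_per_link: int = 2) -> int:
    """Each tier contributes ~one link per GPU port (non-blocking),
    and each link needs a transceiver at both ends."""
    return gpus * tiers * optics_per_link

# Assumed: a 16,384-GPU cluster with a two-tier all-optical fabric.
print(transceiver_count(16_384))
```

Four optics per GPU is why transceiver vendors see demand move in lockstep with accelerator shipments.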

View ISIC 6110 industry profile →

6

Web Portals, Search Engines and Related Activities — AI applications

AI applications and cloud-native services consuming data centre compute
End Use

The ultimate consumer of data centre compute: AI assistants (ChatGPT, Copilot, Gemini), search engines, content platforms, SaaS applications, and enterprise AI services. LLM inference demand is growing faster than training as AI adoption scales from developers to general consumers. AI inference requires lower-specification but higher-volume GPU clusters than training, creating additional demand at every capacity tier.

  • ChatGPT: 100M+ weekly active users reported by OpenAI (2024); inference cost estimated at ~$0.001 per query
  • AI inference vs training ratio: inference will consume ~80% of AI compute by 2027 (a16z estimate)
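
The per-query cost figure above translates into fleet-level inference spend. A toy calculation, with the user-count and usage-intensity inputs below assumed for illustration:

```python
# Toy fleet-level inference spend: users x queries x unit cost.
# User count and queries/user are illustrative assumptions; the
# ~$0.001/query figure comes from the estimate quoted above.

def daily_inference_cost(daily_users: int, queries_per_user: float,
                         cost_per_query: float) -> float:
    """Total daily inference compute cost in dollars."""
    return daily_users * queries_per_user * cost_per_query

# Assumed: 100M daily users averaging 10 queries each.
print(f"${daily_inference_cost(100_000_000, 10, 0.001):,.0f}/day")
```

Even at a tenth of a cent per query, consumer-scale usage generates inference bills in the millions of dollars per day, which is what drives the capacity demand described in this step.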

View ISIC 6312 industry profile →

Value Concentration

Where Margin Is Captured

Rough indication of value capture at each step — what creates pricing power and where the chain's economic returns concentrate.

Step · Value Capture · Margin Driver
Step 1
Manufacture of Electronic Components and Boards
Very High

NVIDIA earns ~75% gross margins on H100/B100; TSMC earns ~53% gross margins on advanced nodes. The chip design-fabrication duopoly captures the largest share of AI infrastructure value.

Step 2
Manufacture of Computers and Peripheral Equipment
Medium

Server OEMs earn 5-15% gross margins. Super Micro has earned temporarily higher margins as the AI server leader but faces margin compression as competition intensifies.

Step 3
Construction of Buildings
Low

Construction is a cost-plus business; general contractors earn 3-6% margins. Specialist MEP (mechanical/electrical/plumbing) subcontractors earn somewhat more.

Step 4
Electric Power Generation, Transmission and Distribution
Medium

Utilities earn regulated returns; independent power producers earn project-finance returns. PPAs transfer commodity risk from hyperscalers to renewable developers.

Step 5
Data Processing, Hosting and Related Activities
Very High

AWS, Azure, and Google Cloud earn 25-35% operating margins on cloud infrastructure. AI services command premium pricing: GPT-4o API at $5/million tokens vs commodity compute at $0.10/million equivalent tokens. CoreWeave raised at 20× revenue in 2024.

Step 6 — AI Applications
Web Portals, Search Engines and Related Activities
Very High

AI application layer companies (OpenAI, Anthropic, Midjourney, character.ai) earn subscription and API margins at the top of the value chain, supported by the entire infrastructure stack below.

Supporting Industries

Industries That Enable This Chain

These industries do not transform the primary product but are essential for the chain to function — logistics, finance, professional services, and enabling technology.

Components 2711

Manufacture of Electric Motors, Generators and Transformers

Power transformers, UPS systems, and backup generators are critical facility infrastructure. Large transformer lead times of 2-3 years are a material bottleneck for new facility power connections in 2024-2026. Generator demand from hyperscalers is straining Caterpillar and Cummins production capacity.

Components 2829

Manufacture of Other Special-Purpose Machinery

Cooling systems: air handlers, chillers, computer room air conditioners (CRACs), and liquid cooling distribution units (CDUs) for direct liquid cooling (DLC) and immersion cooling. AI GPU clusters generate 30-120 kW/rack heat loads that traditional air cooling cannot handle — liquid cooling is becoming mandatory for AI-grade infrastructure. Vertiv and Schneider Electric lead this market.
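
The shift to liquid cooling follows directly from heat-transfer arithmetic: the coolant flow needed to remove a rack's heat load at a given temperature rise comes from q = m·cp·ΔT. A sketch with assumed rack figures:

```python
# Coolant flow needed to remove a rack's heat load: q = m_dot * cp * dT.
# Rack power and temperature rise in the example are assumed figures.

def coolant_flow_lpm(heat_kw: float, delta_t_c: float,
                     cp_kj_per_kg_k: float = 4.186,
                     density_kg_per_l: float = 1.0) -> float:
    """Water flow in litres/minute to remove heat_kw at a coolant
    temperature rise of delta_t_c (defaults model plain water)."""
    kg_per_s = heat_kw / (cp_kj_per_kg_k * delta_t_c)
    return kg_per_s / density_kg_per_l * 60

# Assumed: a 100 kW AI rack with a 10 C coolant temperature rise.
print(f"{coolant_flow_lpm(100, 10):.0f} L/min")
```

Moving that much heat with air instead of water would require roughly four thousand times the volumetric flow (water's volumetric heat capacity is ~3,500× air's), which is why 100 kW racks make direct liquid cooling effectively mandatory.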

Infrastructure 4100

Construction of Buildings

General construction contractors and specialist data centre builders (Turner Construction, Holder Construction, Skanska). Labour shortages in electrical and mechanical trades are a secondary constraint on build-out pace in the US and UK.

Professional Services 7490

Other Professional, Scientific and Technical Activities

Site selection consulting, power procurement advisory, ESG reporting (Scope 2 renewable matching for RE100 commitments), and planning/permitting support. Carbon reporting for data centres is under regulatory pressure: EU Corporate Sustainability Reporting Directive (CSRD) requires data centre energy and water intensity disclosures.

Data Sources
  • IEA — Data Centres and Data Transmission Networks (2024)
  • EPRI — Powering Intelligence: Analyzing AI Demand for Electricity (2024)
  • Goldman Sachs — AI Infrastructure Buildout: $1 Trillion Opportunity? (2024)
  • McKinsey — Why data centers are the new critical infrastructure (2024)
  • Synergy Research — Cloud Infrastructure Services (2024)
Last reviewed: 2026-03-10
Review cycle: quarterly