Data Centre Supply Chain
The data centre supply chain assembles the physical infrastructure of the digital economy — from semiconductor fabrication through server manufacturing, facility construction, and electrical systems to the cloud services consumed by billions of users and enterprises. Driven by AI workload growth, hyperscaler investment is approaching $300B+ annually in 2025, creating acute supply constraints across the chain: NVIDIA GPUs, custom silicon, liquid cooling systems, power transformers, and grid connections are all in short supply. The chain is the physical embodiment of the AI infrastructure buildout, and its constraints are setting the pace of AI capability deployment.
Step-by-Step Value Chain
6 steps from upstream extraction to end use. 3 chokepoints where supply disruptions have systemic impact.
Where This Chain Is Most Vulnerable
Chokepoints are steps where geographic concentration, technical barriers, or long lead times create structural supply risk with limited short-term alternatives.
AI Chip Supply — TSMC CoWoS Bottleneck
TSMC's advanced packaging (CoWoS) capacity is the binding constraint on AI chip availability. ASML EUV machine delivery times constrain TSMC's capacity expansion. NVIDIA GPU allocation queues stretch 12+ months for all hyperscalers.
Geopolitical — Competitive Control
Data Centre Construction & Power Connection
Facility construction timelines (18-36 months) and grid power connection queues (3-7 years in constrained markets) are setting the pace of AI infrastructure deployment. Power transformer shortages add a secondary constraint.
Operational — Infrastructure
Grid Power Capacity
AI-driven data centre power demand is straining grid capacity in key markets. US grid operators warn of 47-84 GW additional demand by 2030. Power transformer lead times of 2-3 years are a material bottleneck for new connections.
Operational — Infrastructure
Detailed Step Breakdown
Each step's role in the chain, key data points, and chokepoint detail where applicable.
Manufacture of Electronic Components and Boards
The logic and memory silicon that powers data centre workloads: NVIDIA H100/B100 GPU clusters for AI training, AMD EPYC and Intel Xeon CPUs for general compute, and custom ASICs (Google TPU, Amazon Trainium/Inferentia, Microsoft Maia, Meta MTIA). TSMC fabricates virtually all advanced AI chips at its Taiwan facilities. HBM (High Bandwidth Memory) from SK Hynix, Samsung, and Micron is co-packaged with GPUs and is a co-equal bottleneck alongside the GPUs themselves.
- NVIDIA H100: ~80B transistors on TSMC 4N, co-packaged with six HBM3 stacks — GPU die area ~814mm²
- HBM3e: SK Hynix dominant supplier; Samsung qualifying; Micron entering market 2024
- CoWoS (Chip-on-Wafer-on-Substrate) packaging: TSMC is the sole production source; capacity doubled in 2024 but remains insufficient
- NVIDIA revenue: $60B (FY2024), forecast $130B+ (FY2025) — driven almost entirely by AI data centre GPUs
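The die-size figure above is why H100-class silicon is so scarce: very few candidate dies fit on a wafer. A back-of-envelope sketch using the standard gross-die approximation (this is not TSMC's actual yield model, and it ignores scribe lines and defect yield; the 814mm² figure is from the bullet above, 300mm is the standard wafer diameter):

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Approximate gross die count on a round wafer.

    Common estimate: wafer area / die area, minus an edge-loss term
    proportional to the wafer circumference. Ignores scribe lines,
    reticle limits, and defect yield.
    """
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# An ~814 mm^2 die on a 300 mm wafer: only a few dozen candidates
# per wafer, before any yield loss.
print(gross_dies_per_wafer(814))  # 63
```

Each good die then still needs HBM attach and a CoWoS packaging slot, which is why packaging, not wafer starts, is the quoted bottleneck.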
Manufacture of Computers and Peripheral Equipment
System integrators (Super Micro Computer, Dell, HPE, Lenovo) and ODMs (Foxconn, Quanta, Wiwynn) assemble chips, memory, storage, and networking into server nodes, GPU rack systems, and storage arrays. AI servers (DGX H100, DGX B200) contain 8-16 GPUs per node and require custom power delivery, cooling infrastructure, and networking (InfiniBand or Ethernet at 400-800Gb/s). Supply chain bottlenecks cascade from chips through to servers: a GPU allocation becomes a complete rack allocation.
- Super Micro: ~70% of custom AI server market (NVIDIA DGX-adjacent); manufacturing in US and Taiwan
- DGX H100 system: 8× H100 GPUs + NVLink + InfiniBand; list price ~$400,000
- Networking: Arista, Cisco, and NVIDIA InfiniBand for AI fabric; Broadcom custom ASICs for hyperscaler switching
- AI cluster scale: Meta's Llama 3 training infrastructure comprised two 24,576-GPU H100 clusters — 49,152 GPUs in total (2024)
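The cascade from GPU allocation to rack allocation can be made concrete with rough sizing arithmetic. The 8-GPUs-per-node figure is from the text above; per-node power and rack density are illustrative assumptions of mine, not Meta's actual configuration:

```python
def cluster_footprint(total_gpus: int, gpus_per_node: int = 8,
                      node_power_kw: float = 10.2, nodes_per_rack: int = 4):
    """Rough node, rack, and IT-power footprint of a GPU cluster.

    node_power_kw assumes 8x ~700 W GPUs plus CPU/network/cooling-fan
    overhead; nodes_per_rack assumes ~40 kW air-cooled AI racks.
    """
    nodes = total_gpus // gpus_per_node      # e.g. DGX H100 = 8 GPUs/node
    racks = -(-nodes // nodes_per_rack)      # ceiling division
    it_power_mw = nodes * node_power_kw / 1000
    return nodes, racks, it_power_mw

nodes, racks, mw = cluster_footprint(49_152)
print(nodes, racks, round(mw, 1))  # 6144 nodes, 1536 racks, ~62.7 MW IT load
```

Under these assumptions a 49,152-GPU fleet is thousands of racks and tens of megawatts of IT load — which is why a chip shortage propagates directly into server, rack, and facility shortages.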
Construction of Buildings
Purpose-built data centre facilities require specialised civil, mechanical, and electrical (CME) construction: raised floors or concrete slabs for immersion/liquid cooling, N+1 redundant UPS systems, cooling towers or liquid cooling distribution, fire suppression, and security systems. Construction lead times run 18-36 months for greenfield hyperscale campuses. The construction industry's capacity to build data centres fast enough to absorb AI chip production is now the binding constraint in several markets (US, Ireland, Netherlands). Planning consent is a growing bottleneck in Europe.
- Hyperscaler data centre capex: Microsoft $80B (FY2025), Google $75B, Amazon $80B+, Meta $37B
- PJM interconnection queue: 3,400+ GW of generation and 900+ GW of load requests in queue as of 2024
- Northern Virginia ('Data Center Alley'): Loudoun County, VA reportedly carries ~70% of the world's internet traffic (a widely cited, though contested, figure)
- Planning rejection: Amsterdam and Singapore imposed moratoria on new data centres (2019-2023)
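The binding-constraint logic above reduces to a critical-path comparison: if build and grid connection proceed in parallel, the slower track sets the in-service date. A trivial sketch (the parallel-tracks assumption is mine; in practice permitting, construction, and interconnection partially overlap and interact):

```python
def time_to_service_months(build_months: float, grid_queue_months: float) -> float:
    """In-service date when build and grid connection run in parallel:
    the slower track is the critical path."""
    return max(build_months, grid_queue_months)

# A 30-month build vs a 5-year (60-month) connection queue: the grid,
# not construction, sets the pace in constrained markets.
print(time_to_service_months(30, 60))  # 60
```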
Electric Power Generation, Transmission and Distribution
Data centres are power-intensive: a single hyperscale campus consumes 100-500 MW, comparable to a small city. AI GPU clusters consume 5-10× more power per rack than conventional compute (30-120 kW/rack vs 3-10 kW/rack). Power procurement — utility grid connections, on-site gas turbines, solar/wind PPAs — is the defining site selection factor for new facilities. Nuclear power is attracting serious hyperscaler interest (Microsoft invested in Three Mile Island restart; Google signed first-ever corporate SMR PPA).
- Data centre electricity use: 1-2% of global electricity consumption currently; forecast 3-4% by 2030 (IEA)
- Power transformer shortage: 52-week lead times extended to 3+ years for large transformers (2024)
- Nuclear PPAs: Microsoft (Three Mile Island), Google (Kairos Power), Amazon (Dominion Energy) — all announced 2024
- PUE (Power Usage Effectiveness): hyperscalers average ~1.1-1.2; legacy colocation ~1.5-2.0
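PUE ties the rack-level figures above to the grid connection a site must secure: PUE is total facility power divided by IT equipment power, so the required grid draw is IT load times PUE. A minimal sketch using the rack densities and PUE ranges quoted above (the 1,000-rack campus size is an illustrative assumption):

```python
def grid_draw_mw(racks: int, kw_per_rack: float, pue: float) -> float:
    """Total facility power draw: IT load scaled by PUE.

    PUE = total facility power / IT equipment power, so the grid
    connection must cover IT load * PUE.
    """
    it_load_mw = racks * kw_per_rack / 1000
    return it_load_mw * pue

# Hypothetical 1,000-rack AI hall at 80 kW/rack:
print(round(grid_draw_mw(1000, 80, pue=1.15), 1))  # hyperscale-class: 92.0 MW
print(round(grid_draw_mw(1000, 80, pue=1.8), 1))   # legacy colocation: 144.0 MW
```

The ~50 MW gap between the two PUE regimes on an identical IT load is why hyperscalers treat cooling efficiency as a site-selection and grid-queue issue, not just a cost line.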
Data Processing, Hosting and Related Activities
The hyperscale cloud providers (AWS, Microsoft Azure, Google Cloud) and GPU cloud platforms (CoreWeave, Lambda Labs) operate the physical infrastructure to deliver compute, storage, and AI services to enterprise and developer customers. This is where the capital investment converts into recurring revenue: AWS alone earns ~$100B ARR from cloud services. AI services (AWS Bedrock, Azure OpenAI, Google Vertex AI) are now the fastest growing cloud revenue category, commanding premium pricing over commodity compute.
- Hyperscaler market share: AWS ~31%, Microsoft Azure ~25%, Google Cloud ~11% (2024)
- CoreWeave valuation: $23B (2024 fundraise) — GPU cloud specialist serving AI companies
- AI cloud pricing: H100 GPU $3.50-$8/hour on-demand; reserved contracts $2.50-$4.50/hour
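The hourly pricing spread above compounds quickly at fleet scale. A sketch of annual spend at the ends of the quoted range (the 1,024-GPU fleet size and full utilisation are illustrative assumptions):

```python
def annual_gpu_cost(gpu_count: int, usd_per_gpu_hour: float,
                    utilisation: float = 1.0) -> float:
    """Annual spend for a GPU fleet at a given hourly rate."""
    return gpu_count * usd_per_gpu_hour * 24 * 365 * utilisation

fleet = 1_024  # hypothetical training fleet
on_demand = annual_gpu_cost(fleet, 8.00)  # top of the on-demand range
reserved = annual_gpu_cost(fleet, 2.50)   # bottom of the reserved range
print(f"on-demand: ${on_demand/1e6:.1f}M, reserved: ${reserved/1e6:.1f}M")
# prints "on-demand: $71.8M, reserved: $22.4M"
```

A roughly 3× gap on the same hardware is why reserved capacity contracts, not spot pricing, dominate AI cloud procurement.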
Wired Telecommunications Activities — Connectivity
Data centres require massive fibre connectivity: backbone networks linking facilities, subsea cables connecting continents, and CDN edge nodes for last-mile content delivery. Hyperscalers now own or co-own significant subsea cable infrastructure (Google Equiano, Meta 2Africa). Fibre optic demand is surging with AI cluster interconnect requirements — AI training clusters need 400/800G optical transceivers at scale.
- Optical transceiver demand: AI cluster scale-out driving 400G/800G adoption; Coherent (formerly II-VI) and Lumentum supply constrained
- Subsea cables: new hyperscaler-owned routes bypassing traditional telecom carriers
Web Portals, Search Engines and Related Activities — AI Applications
The ultimate consumer of data centre compute: AI assistants (ChatGPT, Copilot, Gemini), search engines, content platforms, SaaS applications, and enterprise AI services. LLM inference demand is growing faster than training as AI adoption scales from developers to general consumers. AI inference requires lower-specification but higher-volume GPU clusters than training, creating additional demand at every capacity tier.
- ChatGPT: 100M+ weekly active users by mid-2024 (OpenAI); inference cost estimated at ~$0.001 per query
- AI inference vs training ratio: inference will consume ~80% of AI compute by 2027 (a16z estimate)
Where Margin Is Captured
Rough indication of value capture at each step — what creates pricing power and where the chain's economic returns concentrate.
| Step | Value Capture & Margin Driver |
|---|---|
| 1. Manufacture of Electronic Components and Boards | NVIDIA earns ~75% gross margins on H100/B100; TSMC earns ~53% gross margins on advanced nodes. The chip design-fabrication duopoly captures the largest share of AI infrastructure value. |
| 2. Manufacture of Computers and Peripheral Equipment | Server OEMs earn 5-15% gross margins. Super Micro has earned temporarily higher margins as the AI server leader but faces margin compression as competition intensifies. |
| 3. Construction of Buildings | Construction is a cost-plus business; general contractors earn 3-6% margins. Specialist MEP (mechanical/electrical/plumbing) subcontractors earn somewhat more. |
| 4. Electric Power Generation, Transmission and Distribution | Utilities earn regulated returns; independent power producers earn project-finance returns. PPAs transfer commodity risk from hyperscalers to renewable developers. |
| 5. Data Processing, Hosting and Related Activities | AWS, Azure, and Google Cloud earn 25-35% operating margins on cloud infrastructure. AI services command premium pricing: GPT-4o API at $5/million tokens vs commodity compute at ~$0.10/million equivalent tokens. CoreWeave raised at ~20× revenue in 2024. |
| 6. Web Portals, Search Engines and Related Activities (AI Applications) | AI application layer companies (OpenAI, Anthropic, Midjourney, character.ai) earn subscription and API margins at the top of the value chain, supported by the entire infrastructure stack below. |
Industries That Enable This Chain
These industries do not transform the primary product but are essential for the chain to function — logistics, finance, professional services, and enabling technology.
Manufacture of Electric Motors, Generators and Transformers
Power transformers, UPS systems, and backup generators are critical facility infrastructure. Large transformer lead times of 2-3 years are a material bottleneck for new facility power connections in 2024-2026. Generator demand from hyperscalers is straining Caterpillar and Cummins production capacity.
Manufacture of Other Special-Purpose Machinery
Cooling systems: air handlers, chillers, computer room air conditioners (CRACs), and liquid cooling distribution units (CDUs) for direct liquid cooling (DLC) and immersion cooling. AI GPU clusters generate 30-120 kW/rack heat loads that traditional air cooling cannot handle — liquid cooling is becoming mandatory for AI-grade infrastructure. Vertiv and Schneider Electric lead this market.
Construction of Buildings
General construction contractors and specialist data centre builders (Turner Construction, Holder Construction, Skanska). Labour shortages in electrical and mechanical trades are a secondary constraint on build-out pace in the US and UK.
Other Professional, Scientific and Technical Activities
Site selection consulting, power procurement advisory, ESG reporting (Scope 2 renewable matching for RE100 commitments), and planning/permitting support. Carbon reporting for data centres is under regulatory pressure: EU Corporate Sustainability Reporting Directive (CSRD) requires data centre energy and water intensity disclosures.
Trends Shaping This Chain
Forward-looking macro forces creating headwinds or tailwinds across this supply chain. Sorted by intensity — critical pressures first.
AI & Machine Learning
AI training and inference demand is the primary driver of hyperscale data centre buildout.
Data Centre & AI Infrastructure Buildout
The data centre buildout is the defining demand event for the entire data centre supply chain.
Geopolitical Fragmentation & Friend-Shoring
Technology export controls are limiting Chinese data centre operators' access to advanced AI chips.
Net Zero Transition & Decarbonisation
Data centre power consumption is growing 20–40% annually, creating a tension between AI ambition and net-zero commitments.
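The tension in this trend can be quantified as a compound-growth doubling time, using the 20-40% annual growth rates quoted above:

```python
import math

def doubling_time_years(annual_growth: float) -> float:
    """Years for demand to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# At 20-40% annual growth, data centre power demand doubles
# roughly every 2-4 years:
print(round(doubling_time_years(0.20), 1))  # 3.8
print(round(doubling_time_years(0.40), 1))  # 2.1
```

A power base that doubles every few years is very hard to match with new zero-carbon generation, which typically takes longer than that to permit and build.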
IoT & Smart Sensors
Data centre IoT (power monitoring, thermal sensors, vibration analysis) is enabling AI-driven infrastructure management.
Digital Twins
Data centre digital twins enable dynamic cooling and power optimisation, reducing PUE and energy cost.