Data Center Economics—The $3 Trillion Build
Over the next five years, cumulative data center capital spending will climb from roughly $1 trillion to more than $4 trillion.
That's not a typo. McKinsey's 2025 analysis projects $5.2 trillion in capital expenditures for AI-related data centers by 2030 in its base scenario, with scenarios ranging from $3.7 trillion (constrained: 78 GW) to $7.9 trillion (accelerated: 205 GW). The hyperscalers—Microsoft, Google, Amazon, Meta—plus enterprise and sovereign AI initiatives are committing capital at a scale that rivals the buildout of the U.S. interstate highway system or the electrification of America.
But Here's the Twist: They Can Afford It
The major tech companies sit on $500 billion in cash and generate $300 billion in free cash flow annually. This isn't speculative bubble capital or cheap-debt financing. This is retained earnings deployed into margin-accretive infrastructure. They're not borrowing hope—they're compounding profits.
The Unit Economics Are Unforgiving
It costs approximately $35 billion to light up 1 GW of AI compute capacity—fully loaded with chips, cooling, power infrastructure, networking, and facilities. But here's where specificity matters: each component has distinct economics.
- GPUs/accelerators: $13.65 billion per GW (39% of capex). At approximately 70% gross margins, Nvidia's gross profit alone represents roughly 29% of total AI data center spending.
- Mechanical & electrical systems: $11.55 billion per GW (33% of capex). Transformers, switchgear, backup generators—all on allocation with 18-24 month lead times.
- Power infrastructure: $8-12 billion per GW. Grid connections, substations, interconnects to existing capacity.
- Networking & interconnects: $4.55 billion per GW (13% of capex). 800G and 1.6T Ethernet switches, high-speed campus fabrics.
- Cooling systems: $3-5 billion per GW. Specialized chillers, liquid cooling loops—manufacturers scaling capacity but struggling to keep pace.
- Real estate & construction: $3-5 billion per GW. You can't software-engineer this into existence. You need electricians, HVAC specialists, concrete workers—the same labor scarcity pressuring wages elsewhere.
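The percentage-tagged line items above can be sanity-checked against the ~$35 billion-per-GW total. A minimal sketch using only the figures from the text (the three components with stated percentages cover 85% of capex; the remaining dollar-range items make up the rest):

```python
# Sanity check: capex components quoted as percentages of the
# ~$35B-per-GW fully loaded cost (figures from the breakdown above).
TOTAL_CAPEX_PER_GW_B = 35.0  # $ billions per GW

components = {
    "gpus_accelerators": 0.39,      # -> $13.65B per GW
    "mechanical_electrical": 0.33,  # -> $11.55B per GW
    "networking": 0.13,             # -> $4.55B per GW
}

dollars = {name: TOTAL_CAPEX_PER_GW_B * pct for name, pct in components.items()}
covered = sum(components.values())  # share of capex the percentages cover

for name, cost in dollars.items():
    print(f"{name}: ${cost:.2f}B per GW")
print(f"Share covered by percentage-tagged items: {covered:.0%}")
```

The dollar-range items (power, cooling, construction) account for the remaining ~15% and vary by site, which is why they are quoted as ranges rather than fixed percentages.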
For context, 1 GW powers roughly 750,000 homes. But instead of homes, it's running continuous AI inference, training runs, and enterprise workloads at computational densities that were unimaginable five years ago.
And the operational burden is permanent: $1.3 billion per year in electricity costs at $0.15 per kWh for 1 GW of continuous operation. That's not a one-time capex; it's $1.3B in recurring annual operating expense.
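The $1.3 billion-per-year figure falls out of straightforward arithmetic:

```python
# Annual electricity cost for 1 GW of continuous operation at $0.15/kWh.
capacity_kw = 1_000_000   # 1 GW = 1,000,000 kW
hours_per_year = 8_760    # 24 hours * 365 days
price_per_kwh = 0.15      # $ per kWh

annual_kwh = capacity_kw * hours_per_year   # 8.76 billion kWh
annual_cost = annual_kwh * price_per_kwh    # ~$1.31 billion
print(f"Annual electricity opex: ${annual_cost / 1e9:.2f}B")
```

At lower contracted rates (say $0.05-0.08/kWh in favorable markets), the same math scales proportionally, which is one reason locked power contracts matter so much.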
The Revenue Side Changes Everything
A hyperscale data center with 1 GW of AI compute can generate $5-10 billion in annual revenue, depending on utilization rates and pricing. That's a simple payback of roughly 3.5-7 years before operating costs, with IRRs in the 15-20% range for prime locations with locked power contracts.
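A minimal payback sketch under the article's own figures ($35B capex, $5-10B annual revenue, $1.3B annual electricity cost). Note how payback stretches toward the low-revenue end once power costs are netted out:

```python
# Simple payback on a 1 GW AI data center, using the figures above.
capex_b = 35.0           # fully loaded capex, $B per GW
revenue_range_b = (5.0, 10.0)  # annual revenue range, $B
elec_opex_b = 1.3        # annual electricity cost, $B

for rev in revenue_range_b:
    gross = capex_b / rev                # ignores operating costs
    net = capex_b / (rev - elec_opex_b)  # nets out electricity only
    print(f"revenue ${rev:.0f}B: gross payback {gross:.1f}y, "
          f"net-of-power payback {net:.1f}y")
```

Utilization and pricing assumptions dominate here: the 15-20% IRR claim depends on landing toward the high end of that revenue range.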
Why This Matters: Real Estate Comparison
Compare those returns to traditional real estate:
- Office buildings: 6-8% yields
- Industrial real estate: 8-12% yields
- AI data centers with locked power: 15-20% IRRs
Data centers with secured power contracts are some of the highest-return real estate assets on the planet right now. That's not hype. That's spread math. The spread between 15-20% IRRs and 8-12% comparable yields runs from roughly **300 to 1,200 basis points**—enough to attract every major institutional investor watching yields compress elsewhere.
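The spread arithmetic, expressed in basis points (1 bp = 0.01 percentage points):

```python
# Spread between AI data center IRRs and industrial real estate yields,
# in basis points (1 bp = 0.0001). Ranges taken from the comparison above.
dc_irr = (0.15, 0.20)            # data center IRR range
industrial_yield = (0.08, 0.12)  # industrial real estate yield range

narrow = (dc_irr[0] - industrial_yield[1]) * 10_000  # low IRR vs high yield
wide = (dc_irr[1] - industrial_yield[0]) * 10_000    # high IRR vs low yield
print(f"Spread: {narrow:.0f} to {wide:.0f} bps")
```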
How Capital Is Actually Deploying: Real Examples
Oracle's Abilene facility illustrates this commitment at scale: $40 billion committed for 1.2 GW of capacity across eight buildings totaling 4 million square feet, securing approximately 400,000 Nvidia GB200 superchips. Of that, $15 billion in funding is already locked in through JPMorgan loans ($9.6 billion) and equity investors.
The Stargate program is even larger: 7 GW capacity across multiple sites with over $400 billion in investment over three years, targeting 10 GW and $500 billion by end of 2025. These aren't aspirational numbers. These are funded commitments backed by institutional capital.
The Supply Chain Multiplier
If you're spending $35 billion to light up 1 GW, and the hyperscalers are collectively planning 30-50 GW of new capacity by 2030, the direct capital expenditure is $1.05-1.75 trillion. But the multiplier effects ripple through:
- Power generation and grid infrastructure: New generation capacity, transmission upgrades, interconnections
- Cooling technology: Specialized equipment manufacturing, liquid cooling suppliers
- Real estate: Land acquisition, construction labor, site development
- Networking hardware: Switch manufacturers, fiber suppliers, specialized interconnect vendors
- Operational spend: Electricity, maintenance, staffing across 10+ year asset life
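The multiplier this implies can be backed out from the article's own figures. A back-of-envelope sketch (the implied multiplier is inferred from the numbers above, not independently sourced):

```python
# Back-of-envelope: direct capex for planned hyperscaler capacity,
# and the second-order multiplier implied by the $3-4T total.
capex_per_gw_t = 0.035   # $35B per GW, expressed in $ trillions
planned_gw = (30, 50)    # planned new capacity range by 2030

direct = [gw * capex_per_gw_t for gw in planned_gw]  # $1.05T - $1.75T
total_activity_t = (3.0, 4.0)  # claimed total economic activity, $T

print(f"Direct capex: ${direct[0]:.2f}T - ${direct[1]:.2f}T")
print(f"Implied multiplier: {total_activity_t[0] / direct[1]:.1f}x - "
      f"{total_activity_t[1] / direct[0]:.1f}x")
```

In other words, every dollar of direct data center capex pulls roughly another one to three dollars of activity into power, cooling, construction, networking, and operations.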
When you include these second-order effects, total economic activity pushed into this ecosystem reaches $3-4 trillion by 2030. That's on the order of half the annual U.S. federal budget. It's larger than the GDP of most countries. And it's being deployed by a handful of companies with the balance sheets to execute without external financing.
The Constraint Progression Continues
We established that chips were the bottleneck (2022-2024) and power is the bottleneck now (2025-2027). But look at the capex breakdown: cooling represents $3-5B per GW. As compute density increases and power consumption rises, cooling becomes the next critical constraint.
Liquid cooling and immersion cooling aren't future technologies—they're being deployed now. Companies solving this problem at scale in 2024-2025 will own the competitive advantage in 2028-2030. The data center economics reward early movers on each constraint as it emerges.
Why This Is Financially Viable (Not Speculation)
The original objection from Day 1 was: '$560B invested but only $35B in revenue generated.' Now the picture clarifies. That revenue gap represents the infrastructure buildout phase. Revenue is growing because:
- Training demand: Foundation models require continuous training updates, each consuming 1-8 GW for weeks
- Inference revenue: The real money. ChatGPT, Claude, Gemini generate trillions of inferences annually. That's recurring $5-10B annual revenue per GW
- Enterprise workloads: Companies shifting ML operations to cloud, requiring sustained capacity
- Sovereign AI: UAE, Saudi Arabia, Japan building national AI capacity—governments writing long-term purchase agreements
What This Means for Investors
- Data center REITs with locked power contracts and development pipelines become acquisition targets—valuation premiums expand as scarcity becomes obvious
- Utilities in growth regions (Texas, Oklahoma, parts of Midwest) see revenue from data center power purchase agreements—regulated utility growth at 8-12% CAGR
- Power infrastructure plays: Independent power producers, grid equipment manufacturers, transformer suppliers all see sustained demand
- Cooling technology companies transition from niche to critical path—companies solving cooling efficiency capture 20%+ margins
- Networking hardware vendors: 800G and 1.6T switch suppliers see allocation and pricing power
- Geographic winners and losers crystallize: Power availability determines where capital deploys, not tax incentives or fiber proximity
Bottom Line
This isn't a bubble. This is the largest private-sector infrastructure build in modern history, funded by companies with the cash flow to sustain it. The constraint isn't capital; it's physics: power, cooling, and real estate. The winners will be those who secured power contracts in 2023-2025, before everyone realized power was the bottleneck. The losers will be those still optimizing for chip allocation in 2028 while sitting in a seven-year power queue.
Capital flows to where it compounds. Right now, that's the intersection of locked power, proven demand, and 15-20% IRR. The next question: where does all that labor come from to build and operate these facilities? That's Day 4.
Ready for Day 4? We tackle the execution bottleneck: Labor, automation, and the largest occupational transition in modern history.