Four companies are on track to spend about $650 billion in capital expenditures in 2026, and the weird part is not the number. It’s what AI datacenter spending now buys: transformers, switchgear, substations, land, construction crews, and giant financing packages. The story stopped being “look how much Big Tech is spending” a while ago.
Bloomberg’s February reporting says Alphabet, Amazon, Meta, and Microsoft together forecast roughly $650 billion in 2026 capex. That figure is verified as a current hyperscaler capex total. The comparison to the Manhattan Project, Apollo, the ISS, and the Marshall Plan combined is directionally plausible but methodologically weak. Those were public programs with different accounting, time spans, and economic contexts. This is something stranger: a private-sector industrial mobilization.
That distinction matters. If you want to understand what happens next, don’t stare at the headline capex number. Look at the bottlenecks.
The $650 Billion Capex Number Is Real, But It Is Not “AI Only”
The strongest current number here is Bloomberg’s: Alphabet, Amazon, Meta, and Microsoft are expected to spend about $650 billion in 2026 capital expenditures. Bloomberg called it a boom “without a parallel this century” in its February piece and repeated that framing in its April 1 feature on supply-chain constraints.
But wait, does that mean $650 billion of pure AI server spend? No. And this is where a lot of the discourse goes off the rails.
Capital expenditure means long-lived assets: land, buildings, power systems, networking gear, and data center capacity, not just GPUs. Some of that buildout is explicitly for AI. Some supports broader cloud demand. The cleanest factual claim is narrower: the hyperscalers are massively increasing capex in response to the AI race, and a lot of that spend is flowing into AI-oriented infrastructure. That is verified. The exact AI-only slice is not independently broken out in the source set, so any claim that the full $650 billion is “AI chips” would be unverified.
A quick baseline shows how fast this escalated. Bloomberg reported in January 2025 that Microsoft alone planned to spend $80 billion on AI data centers that fiscal year. By August 2025, Bloomberg was writing about a $29 billion Meta financing deal for data center infrastructure. By November 2025, AP reported Anthropic announcing a $50 billion computing infrastructure investment and Microsoft adding another major data center project in Atlanta tied to a “massive supercomputer.” The pace here is the point.
| Figure | What it refers to | Status |
|---|---|---|
| $650B | 2026 capex forecast for Alphabet, Amazon, Meta, Microsoft combined | Verified |
| $80B | Microsoft fiscal 2025 AI data center spending plan | Verified |
| $29B | Meta-related financing deal for data center buildout | Verified |
| $50B | Anthropic computing infrastructure investment announcement | Verified |
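For scale, the verified figures in the table can be lined up in a few lines of Python. This is a toy comparison of headline numbers from the reporting, not an official breakdown, and the labels are shorthand I've chosen, not Bloomberg's:

```python
# Verified headline figures from the article, in billions of dollars.
# Shorthand labels are mine; the numbers come from the cited reporting.
figures_bn = {
    "hyperscaler_2026_capex_forecast": 650,  # Alphabet + Amazon + Meta + Microsoft
    "microsoft_fy2025_ai_dc_plan": 80,       # Microsoft's fiscal 2025 plan
    "meta_financing_deal": 29,               # Meta-related data center financing
    "anthropic_compute_announcement": 50,    # Anthropic's announced investment
}

# The 2026 combined forecast versus Microsoft's fiscal 2025 plan alone,
# a rough measure of how fast the spending curve steepened.
ratio = figures_bn["hyperscaler_2026_capex_forecast"] / figures_bn["microsoft_fy2025_ai_dc_plan"]
print(f"{ratio:.1f}x")  # prints 8.1x
```

The point of the exercise: a figure that was a headline on its own in January 2025 is now roughly an eighth of a single year's combined forecast.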
Why AI Datacenter Spending Is Different From Past Mega Projects
The “bigger than Apollo” framing grabs attention because it compresses the scale into something familiar. Fine. But it also smuggles in bad comparisons.
The Manhattan Project, Apollo, and the Marshall Plan were government programs. They had different goals, labor structures, procurement models, and accounting rules. They also happened in economies of very different sizes. So the viral claim that AI datacenter spending has surpassed them “combined” is not verified by the source material. At best, it is plausible as a rough inflation-adjusted comparison someone else made, but there is no authoritative source here validating that exact stack-ranked chart.
The more useful comparison is structural, not numerical.
Those historical projects reorganized supply chains around a strategic priority. That is what AI datacenter spending is starting to do now. The hyperscalers are not just buying compute. They are pulling power equipment imports, construction timelines, private credit, and regional land markets into their orbit. That looks less like a product cycle and more like an infrastructure regime.
That’s also why the comparison can mislead in another way: these assets produce revenue. A data center is not a one-off moonshot. It is a commercial machine meant to throw off cloud rent for years. So yes, the mega-project analogy is interesting. No, it is not the main thing.
What the Buildout Actually Depends On: Power, Gear, and Land

Bloomberg’s April 1 feature is the part of this story that actually made me stop. According to that reporting, the US AI data center expansion relies heavily on Chinese electrical equipment imports. Not “might someday.” Right now.
That detail changes the whole mental model. You can have money, GPUs, and demand. You still can’t open a giant AI facility without the boring parts:
- Power access
- Transformers and switchgear
- Substation equipment
- Construction capacity
- Permitted land in the right places
This is why the term AI factory is more useful than “data center” for some of these projects. The constraint is not software elegance. It’s whether you can assemble an industrial site fast enough.
And wait, if money is basically unlimited for the hyperscalers, why not just pay more and get the gear? Good question. Some bottlenecks do not clear instantly with price. Lead times for specialized electrical equipment are long. Utility interconnection is slow. Zoning fights happen on local political time, not venture time. Even where money helps, it helps by letting the biggest buyers jump the queue.
That is already feeding backlash. Local communities do not experience this buildout as “AI progress.” They experience it as transmission stress, water worries, and giant anonymous buildings. We’ve already seen the shape of that in the recent data center backlash coverage.
Why the Small Players May Get Squeezed Out
Once the limiting factor shifts from “who wants to build” to “who can secure power gear, financing, and utility relationships,” the winners change.
The obvious beneficiaries are still the hyperscalers. They can commit tens of billions upfront, sign long-term offtake, and finance projects at a scale that turns infrastructure into a moat. Bloomberg’s February piece says each company’s 2026 estimate is expected to be near or above its budget for the prior three years combined. If that holds, the giants are not merely keeping up with AI demand. They are pre-buying the future.
The less obvious winners are suppliers and financiers. Bloomberg’s April reporting points to electrical equipment imports as a choke point. Bloomberg’s August 2025 reporting on the $29 billion Meta deal shows that capital markets are becoming part of the operating stack. Data centers increasingly look like an asset class with AI attached.
That has two implications.
First, smaller cloud and model companies may get boxed out. This is plausible, not fully verified across the whole market, but the mechanism is straightforward: if Amazon, Microsoft, Google, and Meta lock up land, power queues, contractors, and debt capacity, everyone else faces higher prices and longer waits.
Second, states may start treating this buildout as strategic industry policy, even if it remains formally private. That opens the door to fights over subsidies, grid priority, and public financing, the kind of logic you also see in debates over a public wealth fund. Once infrastructure becomes the bottleneck, politics follows the bottleneck.

What the $650 Billion Really Means
So what does AI datacenter spending mean in practical terms? Not “the market believes in AI.” We knew that already.
It means four companies are spending at a level that can distort adjacent industries. It means electrical equipment makers, construction firms, utilities, landowners, and private credit shops are now part of the AI story whether they asked to be or not. It means the hard limit on AI growth may be outside the model lab.
And it means the historical-project memes miss the live wire. The important fact is not that AI capex makes for a dramatic chart. The important fact is that the money is now larger than the supply chain’s ability to absorb it cleanly.
That is when an industry stops behaving like software.
Key Takeaways
- Verified: Alphabet, Amazon, Meta, and Microsoft are projected to spend about $650 billion in 2026 capex combined.
- Verified: That number is not “AI chips only.” It includes broader long-lived infrastructure such as buildings, power systems, and network capacity.
- Unverified: Claims that this definitively exceeds the Manhattan Project, Apollo, ISS, and Marshall Plan combined are catchy but not solidly sourced here.
- Verified: The buildout is running into real bottlenecks in power equipment, imports, land, and construction.
- Plausible: Those bottlenecks favor hyperscalers and may squeeze smaller players out of prime capacity and financing.
Further Reading
- Bloomberg: Big Tech to Spend $650 Billion This Year as AI Race Intensifies. The best current source for the headline hyperscaler capex figure.
- Bloomberg: US AI Data Center Expansion Relies on Chinese Electrical Equipment Imports. The key reporting on supply-chain dependence and electrical equipment bottlenecks.
- AP News: Anthropic, Microsoft announce new AI data center projects. Concrete examples of new infrastructure projects and continued spending.
- Bloomberg: Microsoft to Spend $80 Billion on AI Data Centers This Year. Useful baseline for how quickly the spending curve steepened.
- Bloomberg: How Pimco Outmaneuvered Apollo, KKR to Win $29 Billion Meta Deal. Shows how financing itself has become a central part of the data center race.
The next phase of AI will be shaped less by benchmark jumps than by who can get a transformer, a grid connection, and a financing package before everyone else.