Counting the Atoms

Theo Saville, March 2026

Eight billion people. Nine hundred million of them use ChatGPT in a given week, making it the biggest AI product ever built. Fifty million pay for it. Five million developers build on AI APIs. Fewer than ten thousand are building agentic systems that actually run autonomously.

The AI revolution hasn't stalled. It hasn't even started.

Six hundred billion dollars is being deployed in 2026 to change that, and the number is still climbing. The goal is the construction of the physical substrate for artificial general intelligence: compute at a scale that doesn't exist yet, powered by an energy infrastructure that doesn't exist yet, built by a workforce that is smaller than it was a decade ago. This is not a procurement exercise. It is an industrial mobilization, and the constraints it faces are physical, not financial.

But almost nobody in the scaling conversation is looking at the physical side. The bottlenecks are on factory floors and in machine shops and inside the clean rooms where a single company in Germany polishes mirrors to atomic flatness. Follow the constraints all the way down, past the data centers, past the chips, past the fabs, into the metallurgy and the optics and the workforce, and you find something the scaling discourse has entirely missed: the binding constraints on the AI buildout don't yield to capital alone. They yield to capital plus time. And the time is longer than anyone is pricing in.

So I ran the obvious test. The Manhattan Project cost roughly $1.9 billion in 1945 dollars, about $30 billion adjusted for inflation, and consumed nearly 1% of U.S. GDP. The 2026 AI infrastructure buildout, at $600 billion, represents about 2% of U.S. GDP. In real terms, the AI buildout is spending twenty times what the Manhattan Project spent, at twice the share of national output. What if you applied that level of mobilization, with 125,000 workers, unlimited political priority, and zero regulatory friction, to each physical bottleneck in the AI infrastructure stack?
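
If you want to check that arithmetic, here it is as a few lines of Python. The spend figures are the ones above; the GDP denominators (roughly $228 billion in 1945, roughly $30 trillion in 2026) are my own assumptions, implied by the percentages rather than stated in the text.

```python
# Back-of-envelope comparison of the two mobilizations.
# Spend figures are from the essay; the GDP denominators are
# my assumptions, implied by the quoted percentages.

MANHATTAN_SPEND_1945 = 1.9e9   # dollars, 1945
MANHATTAN_SPEND_REAL = 30e9    # dollars, inflation-adjusted
AI_BUILDOUT_SPEND    = 600e9   # dollars, 2026

US_GDP_1945 = 228e9   # assumption: ~1945 nominal GDP
US_GDP_2026 = 30e12   # assumption: implied by "2% of GDP"

print(f"Real-spend ratio: {AI_BUILDOUT_SPEND / MANHATTAN_SPEND_REAL:.0f}x")
print(f"Manhattan share of GDP:   {MANHATTAN_SPEND_1945 / US_GDP_1945:.1%}")
print(f"AI buildout share of GDP: {AI_BUILDOUT_SPEND / US_GDP_2026:.1%}")
# -> 20x, 0.8%, 2.0%
```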

The Manhattan Project is the right comparison, and not just as a metaphor. Groves and Oppenheimer's teams didn't solve one physics problem. They built gaseous diffusion plants, electromagnetic separation calutrons, plutonium production reactors, and two fundamentally different bomb designs, all in parallel, because they didn't know which approaches would work. They ran three uranium enrichment methods simultaneously at Oak Ridge. They compressed timelines that physicists said couldn't be compressed. And it still took three years, with wartime urgency, zero permitting friction, and a government that could seize private land by telephone. The Manhattan Project had a clear endpoint: a working bomb. The AI buildout has one too: enough physical infrastructure to train and run artificial general intelligence. The difference is that the Manhattan Project had one binding constraint (fissile material). The AI buildout has a dozen, and they're nested.

Apply that same mobilization to the AI infrastructure stack and you get a floor of 3-5 years for most bottlenecks. You can compress them. Run parallel training programs for coil winders, build multiple mirror polishing facilities, fast-track brownfield copper mines, accept insane costs on redundant foundries. Manhattan Project-style parallelization works. But even fully compressed, extreme ultraviolet (EUV) lithography needs 5-7 years to double output, because Zeiss polishes each mirror through an iterative measure-and-correct cycle that takes weeks per pass, and parallel capacity adds throughput without shortening that cycle. Gas turbines need 4-5 years because single-crystal blades solidify at 48-72 hours per casting cycle and new foundries spend 2-3 years just getting their rejection rates down. Transformer capacity needs 3-4 years because grain-oriented electrical steel (GOES) annealing has irreducible chemistry. The industry is already spending Manhattan Project money. The gap is between what the market is pricing, 12 to 18 months, and what the physics allows: 3 to 7 years.
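
The distinction doing the work in that EUV claim is cycle time versus throughput, and it's worth making concrete. A minimal sketch, in which the weeks-per-pass figure comes from the paragraph above but the number of polish-measure-polish iterations per mirror is an illustrative assumption, not a Zeiss figure:

```python
# Why parallel capacity doesn't compress an iterative process.
# weeks_per_pass is from the text; the iteration count is an
# illustrative assumption, not a Zeiss figure.

def mirror_lead_time_weeks(iterations: int, weeks_per_pass: float) -> float:
    """Latency of one mirror: a serial loop that money can't shorten."""
    return iterations * weeks_per_pass

def annual_output(stations: int, lead_time_weeks: float) -> float:
    """Throughput: scales with parallel stations, unlike latency."""
    return stations * 52 / lead_time_weeks

lead = mirror_lead_time_weeks(iterations=10, weeks_per_pass=3)  # assumed
print(f"One mirror: {lead:.0f} weeks, regardless of budget")
for stations in (5, 10):
    print(f"{stations} stations -> {annual_output(stations, lead):.1f} mirrors/yr")
```

Doubling the stations doubles the mirrors per year, but the per-mirror latency never moves, and standing up new stations and the people who run them is what eats the years.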

The Manhattan Project compressed the incompressible, and it still took three years. The AI infrastructure buildout is running a dozen Manhattan Projects simultaneously, each with its own irreducible timeline, and the timelines are nested: you cannot ramp the fab before the EUV tools arrive, or energize the data center before the transformers ship.

Here is what the Manhattan Project test reveals when you apply it bottleneck by bottleneck. Power transformers have 128-160 week lead times today, and even $30 billion of mobilization only compresses that to 3-4 years, because the grain-oriented electrical steel they require takes half a decade to bring online and the coil winding workforce doesn't exist. EUV lithography, the process that prints advanced chips, runs at 48 systems per year from a single company on Earth. Doubling that takes 5-7 years, because the mirrors inside each machine are polished by Zeiss to atomic flatness in a process that physically cannot go faster. Gas turbines carry 7-year backlogs that compress to 4-5 years at best, because each turbine blade is a single crystal of nickel superalloy that solidifies over 48-72 hours and there is no way to rush metallurgy. The skilled workforce, 354,800 machinists in the entire United States with 30% of them over 55, needs 3-5 years for even basic relief, because the rate-limiter is how fast a human nervous system learns to hold a tolerance. Copper faces a 304,000-tonne deficit, addressable in 4-6 years through brownfield mining but 10-15 years for new deposits. Semiconductor fabs cost $15-20 billion each and take 3-4 years to build and ramp, gated by EUV tool availability and the long climb to acceptable yields.
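
Put those compressed timelines side by side and the floor falls out of the arithmetic. A toy aggregation, using only the figures quoted above; the max-style framing is mine, and it understates the problem because the essay's point is that the constraints also nest:

```python
# Compressed timelines quoted above, in years, under full
# Manhattan-style mobilization. Treating them with a simple max is
# my own framing; in reality they also nest (fabs are gated by
# EUV tool availability, and so on).

compressed = {
    "power transformers":  (3, 4),
    "EUV lithography":     (5, 7),
    "gas turbines":        (4, 5),
    "skilled workforce":   (3, 5),
    "copper (brownfield)": (4, 6),
    "semiconductor fabs":  (3, 4),
}

span_lo = min(lo for lo, _ in compressed.values())
span_hi = max(hi for _, hi in compressed.values())
crit_lo = max(lo for lo, _ in compressed.values())
binding = max(compressed, key=lambda k: compressed[k][1])
print(f"Range across bottlenecks: {span_lo}-{span_hi} years")
print(f"If everything is needed, the slowest binds: "
      f"{crit_lo}-{span_hi} years ({binding})")
# -> 3-7 years across the stack; 5-7 years if EUV gates the rest
```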

There's a geopolitical dimension that makes this worse, or more interesting, depending on where you sit. China can brute-force the base layer: transformers, mature-node chips, construction workforce, copper supply chains through Africa. It produces 50% of global transformers by volume, graduates 15 million vocational workers a year, and its state-owned miners already control half of DRC copper production. China has already demonstrated it can dominate entire physical industries when the manufacturing is scalable: solar panels, EV batteries, electric vehicles. But the AI infrastructure stack requires precision manufacturing at the frontier, not commodity production at scale, and at that level China is locked out. Zero domestic EUV capability. Turbine blades a generation behind. Process engineering 5-10 years behind Taiwan Semiconductor Manufacturing Company (TSMC). The US and its allies hold the performance layer: advanced chips, EUV, frontier metallurgy. But they can't scale the base. 354,800 machinists for an entire country. A welder shortage of 82,500 per year.

The result is a strange asymmetry: neither side can build the full stack alone, and the pieces they're missing are precisely the ones that take the longest to develop. That means the AI buildout either requires sustained cooperation between geopolitical rivals, or both sides accept permanently incomplete infrastructure. The historical precedent for that kind of cooperation during a technology race is thin.

This is a four-part series. It follows the bottlenecks all the way down, from the data center floor to the atomic structure of a turbine blade, and asks what happens when the largest capital deployment in history meets constraints that operate on the timescale of physics, not finance.

Theo Saville is a manufacturing, mechanical and robotics engineer, CEO of CloudNC, and Honorary Professor of Engineering at the University of Warwick.