September 13, 2023

# The Limits of Scaling

I’ve seen a lot of people recently post a random graph of FLOPS versus future iterations of GPT-n, assuming we somehow get exponential gains forever. Some napkin math shows that’s pretty clearly not the case. That raises the natural follow-up, “How big *can* we get?”, which is what I’ll try to answer as well.

### Factors at Play

Here’s what will factor into this exploration of scaling limits:

- **Energy:** Limited by solar radiation on Earth and heat dissipation capacity—the critical constraint on compute power.
- **Compute:** Bound by thermodynamics and intrinsically tied to intelligence—this sets a cap on intelligence capacity.
- **Models:** Serve to estimate intelligence output from compute input.

### Energy Constraints

Let’s analyze scenarios across increasing energy scales:

- **$E_{0} = 40\,GW$** ($4 \times 10^{10}\,W$): About the current energy capacity of U.S. data centers.
- **$E_{1} = 10\,TW$** ($1 \times 10^{13}\,W$): Approximately matches global energy production today.
- **$E_{2} = 1\,PW$** ($1 \times 10^{15}\,W$): Achievable by covering a substantial fraction of Earth’s landmass with solar panels.

Nuclear is tempting but faces heat dissipation challenges: it adds energy on top of what the Sun already delivers. Solar, on the other hand, merely redirects energy Earth would have absorbed anyway, so it adds essentially no extra heat burden, making it the more sustainable near-term source. Exceeding $E_{2}$ would prompt catastrophic warming, so we’ll use it as a sensible upper limit.

#### Calculating $E_{2}$

Assumptions:

- 20% solar panel efficiency, with 20% land coverage possible.
- Earth’s total surface area is approximately $5 \times 10^{14}\,m^2$, of which 29% is land, with an average solar absorption of 168 W/m².

Covering land with dark solar panels also reduces Earth’s reflectivity slightly, adding a minor temperature rise of 1–2 K.
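The assumptions above can be multiplied out directly (all constants are the ones stated in the list; nothing else is assumed):

```python
# Napkin math for E_2: cover 20% of land with 20%-efficient solar panels.
EARTH_SURFACE_M2 = 5e14      # total surface area of Earth, m^2
LAND_FRACTION = 0.29         # fraction of surface that is land
ABSORBED_W_PER_M2 = 168      # average solar absorption, W/m^2
PANEL_EFFICIENCY = 0.20
LAND_COVERAGE = 0.20

e2_watts = (EARTH_SURFACE_M2 * LAND_FRACTION * ABSORBED_W_PER_M2
            * PANEL_EFFICIENCY * LAND_COVERAGE)
print(f"E_2 ≈ {e2_watts:.1e} W")  # ≈ 9.7e14 W, i.e. ~1 PW
```

Rounding up, that lands on the $E_{2} = 1\,PW$ figure used above.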

### Compute Limitations

There are many known limits on computation (see Wikipedia’s “Limits of Computation”). Given our Earth-bound setting, Landauer’s Principle is the binding one, since Earth’s temperature is pinned near 300 K by the balance of incoming and outgoing solar irradiance.

#### Landauer’s Bound for Compute

Landauer’s Principle establishes the minimum energy for erasing a bit:

$E_{min} = k_{B} T \ln 2$

At 300 K, this translates to:

$E_{min} = (1.380649 \times 10^{-23}\,J/K) \times 300\,K \times \ln 2 \approx 2.8707 \times 10^{-21}\,J$

This energy must be dissipated per bit operation. Assuming equilibrium (conservation of energy), the rate of bit operations per watt becomes:

$N = \frac{1}{E_{min}} \approx 3.483 \times 10^{20}\,\text{ops/s/W}$

For floating-point operations, at roughly 70 bit operations per FLOP, we get an upper bound of:

$\approx 5 \times 10^{18}\,\text{FLOPS/W}$

This efficiency would be like powering a current top-tier data center on a single watt.
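A quick sanity check of these numbers (the bit-ops-per-FLOP conversion is a rough assumption; ~70 reproduces the $5 \times 10^{18}$ figure):

```python
import math

# Landauer's bound at Earth-ambient temperature.
K_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # temperature, K
BIT_OPS_PER_FLOP = 70    # assumed conversion factor (napkin-math estimate)

e_min = K_B * T * math.log(2)       # minimum energy per bit operation, J
bit_ops_per_watt = 1.0 / e_min      # bit ops/s per watt at equilibrium
flops_per_watt = bit_ops_per_watt / BIT_OPS_PER_FLOP

print(f"E_min         ≈ {e_min:.4e} J")        # ≈ 2.8707e-21 J
print(f"bit ops/s/W   ≈ {bit_ops_per_watt:.3e}")  # ≈ 3.483e20
print(f"FLOPS/W bound ≈ {flops_per_watt:.0e}")    # ≈ 5e18
```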

#### Scenarios

- **$\eta_{0} = 10^{13}$ FLOPS/W**: Best available compute efficiency today.
- **$\eta_{1} = 5 \times 10^{16}$ FLOPS/W**: Achievable at 1% of Landauer’s limit.
- **$\eta_{2} = 5 \times 10^{18}$ FLOPS/W**: Landauer’s theoretical maximum.

These cover realistic to boundary scenarios.

### Models and Training Limits

For this analysis, let’s consider the current paradigm for LLM training—expensive and lengthy training with low-cost inference. GPT-4’s training compute estimate sits at about $10^{25}$ FLOP, making $10^{27}$ FLOP a plausible next step.

Given an $E_{0}, \eta_{0}$ scenario at 20% compute utilization for six months ($\approx 1.6 \times 10^{7}$ s):

$E_{0} \times \eta_{0} \times 0.2 \times 1.6 \times 10^{7}\,s \approx 1.3 \times 10^{30}\,FLOP$

This aligns with projections for 2030-era models, like a hypothetical GPT-6.

#### The “Runaway Improvement” Scenario ($E_{1},η_{1}$):

Assuming intelligence-limited tasks are solvable:

$E_{1} \times \eta_{1} \times 0.2 \times 1.6 \times 10^{7}\,s \approx 1.6 \times 10^{36}\,FLOP$

This would equate to something around GPT-9’s intelligence, potentially matching millions of human scientists in capability.

#### World-Covering Scenario ($E_{2},η_{2}$):

With massive global reallocation of resources:

$E_{2} \times \eta_{2} \times 0.2 \times 1.6 \times 10^{7}\,s \approx 1.6 \times 10^{40}\,FLOP$

GPT-11 training would approach this level.
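All three scenarios are the same calculation—power × efficiency × utilization × seconds—so they can be checked with one short script (scenario values are the ones defined earlier in the post):

```python
# Six-month training-run FLOP budgets, at 20% compute utilization.
SECONDS = 0.5 * 365.25 * 24 * 3600   # six months ≈ 1.58e7 s
UTILIZATION = 0.20

scenarios = {
    "E0 * eta0 (today)":          (4e10, 1e13),
    "E1 * eta1 (runaway)":        (1e13, 5e16),
    "E2 * eta2 (world-covering)": (1e15, 5e18),
}

totals = {}
for name, (watts, flops_per_watt) in scenarios.items():
    totals[name] = watts * flops_per_watt * UTILIZATION * SECONDS
    print(f"{name}: {totals[name]:.1e} FLOP")
# → 1.3e+30, 1.6e+36, 1.6e+40 respectively
```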

### Nuclear Energy and Earth’s Heat Dissipation

By the Stefan–Boltzmann Law, $P = CT^{4}$, Earth’s equilibrium temperature rises with any power dissipated on top of absorbed sunlight—which is exactly what scaled-up nuclear would do. Dissipating nuclear power at roughly 50× $E_{2}$ would push Earth from ~300 K to around 325 K, risking severe climatic and ecological impacts.
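A rough check of that temperature figure, inverting the Stefan–Boltzmann relation (the absorbed-solar-power and baseline-temperature values are my own round-number assumptions):

```python
# Equilibrium temperature after adding nuclear power on top of sunlight.
# From P = C * T^4:  T_new = T_0 * ((P_solar + P_extra) / P_solar)^(1/4).
P_SOLAR = 1.2e17       # W, approx. solar power absorbed by Earth (assumed)
T0 = 300.0             # K, rough baseline equilibrium temperature (assumed)
P_NUCLEAR = 50 * 1e15  # W, nuclear output at 50x the E_2 scenario

t_new = T0 * ((P_SOLAR + P_NUCLEAR) / P_SOLAR) ** 0.25
print(f"T ≈ {t_new:.0f} K")  # ≈ 327 K, in line with the ~325 K claim
```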

### Alternative Tech: Quantum and Reversible Computing

Quantum and reversible computing might eventually break through these limits. Quantum computers excel at narrow task classes, with uncertain advantages for general-purpose scaling. Reversible computing could bypass Landauer’s limit entirely—it avoids erasing bits—but demands wholly new algorithms and hardware.

#### Revisiting the Margolus–Levitin Bound

This quantum limit offers roughly $10^{33}$ FLOPS/W, theoretically achievable only with something like a planetary-scale dilution refrigerator.
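The Margolus–Levitin theorem bounds a system of energy $E$ to at most $2E / (\pi \hbar)$ elementary operations per second; per watt, that works out as follows (the gap down to ~$10^{33}$ FLOPS/W is the elementary-op-to-FLOP conversion):

```python
import math

# Margolus-Levitin bound: at most 2E / (pi * hbar) ops/s for energy E,
# i.e. 2 / (pi * hbar) elementary ops per second per watt.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

ops_per_watt = 2 / (math.pi * HBAR)
print(f"ops/s/W ≈ {ops_per_watt:.1e}")  # ≈ 6.0e33
```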

### Conclusion

GPT-11-scale training (at ~$10^{40}$ FLOP) represents the scaling ceiling for terrestrial compute, bound by solar energy, thermodynamics, and Earth’s heat dissipation. Without massive scientific leaps, GPT-9’s level (~$10^{36}$ FLOP) is the most we can attain without destabilizing our planet. The final word on scaling? Time to start building a Dyson Sphere.