In recent months, Silicon Valley has become enamored with a 160-year-old economic theory from the age of railroads and coal: the Jevons Paradox.
Originally applied to steam engines and energy consumption, the concept holds that efficiency gains don’t reduce resource use. Instead, they increase it, by making that resource more broadly usable. In the age of railroads, the dynamic looked like this: Engines became more energy-efficient, requiring less coal per unit of work, yet coal demand still exploded. Why? Because the efficiency gains drove wider adoption of those engines.
In our own age, the Jevons Paradox gets applied to AI, which, like railroads and steam engines, requires enormous amounts of energy and therefore hinges on energy supply. Boosters like Microsoft CEO Satya Nadella, however, tend to elide the energy side of the equation, invoking the Jevons Paradox to suggest more simply that, as AI gets cheaper and more efficient, demand for it will explode, not contract – driving profits higher.
But one industry researcher argues that this vision, while seductive, is also misleading. In an influential Substack post, independent analyst Dave Friedman argues that the Jevons Paradox isn’t a footnote in the AI economy but the entire plot, one that points to a serious and growing constraint on AI companies’ profitability.
His argument? Yes, AI may be gradually becoming more efficient. At the same time, ballooning demand is devouring any efficiency gains. And short of a massive, game-changing breakthrough in the energy and compute (that is, the operational and hardware costs) required to run AI, many AI companies won’t turn out to be as profitable as venture capitalists and Wall Street firms are hoping.
The problem is most visible at the small end of the market-cap spectrum, among startups. As Friedman points out, AI startups are portraying themselves to investors as software-like services, known as SaaS (software-as-a-service), a famously high-margin business and a gold rush that’s a recent-enough memory to get VC hearts racing.
With the SaaS business model, it doesn’t cost companies much more to service additional customers, so each additional customer’s subscription fee falls more or less straight to the bottom line. The best SaaS plays may have margins as high as 80 or 90%, putting them among the best businesses in the world in that regard and attracting capital like flies to honey.
But AI startups only superficially resemble software businesses, Friedman argues, and that’s because of the Jevons Paradox. With AI companies – unlike SaaS companies – energy costs and related usage costs aren’t fixed. So serving additional customers and seeing wider adoption doesn’t make AI cheaper. Expenses scale with use.
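The difference is easy to see in a back-of-the-envelope calculation. The sketch below compares a business with mostly fixed serving costs against one whose compute bill grows with every customer; all figures (the $30 subscription price, the $50,000 fixed base, the per-customer cost rates) are invented for illustration and aren’t drawn from Friedman’s post or any real company.

```python
def gross_margin(revenue, cost):
    """Gross margin as a fraction of revenue."""
    return (revenue - cost) / revenue

PRICE = 30.0  # hypothetical monthly subscription fee per customer

def saas_costs(customers):
    # SaaS-style economics: a fixed infrastructure base plus a
    # negligible per-customer serving cost.
    return 50_000 + 0.50 * customers

def ai_costs(customers):
    # AI-style economics: the same fixed base, plus inference compute
    # that scales with usage (here, $24 of compute per $30 customer).
    return 50_000 + 24.0 * customers

for n in (5_000, 50_000, 500_000):
    revenue = PRICE * n
    print(f"{n:>7} customers: "
          f"SaaS margin {gross_margin(revenue, saas_costs(n)):.0%}, "
          f"AI margin {gross_margin(revenue, ai_costs(n)):.0%}")
```

As the customer count grows, the fixed-cost business’s margin climbs toward the 90%-plus range VCs expect from SaaS, while the usage-priced business plateaus around 20%: no amount of scale can push its margin past what the per-use compute bill allows.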
“This dynamic explains why so many AI startups look great on paper but may never reach healthy profit margins,” Friedman, a startup adviser and former analyst at Citigroup and Bloomberg, told Quartz. “If your core product gets more expensive to deliver the more it’s used, that’s a structural problem, not a temporary glitch.”
In Friedman’s view, Silicon Valley is mispricing the physics. “People are pretending inference costs are op-ex,” he wrote in his post, referring to the way that some companies portray AI’s ongoing computing needs as fixed (or at least semi-fixed) expenses under GAAP accounting. In reality, he says, such costs rise with usage and so would be better understood as COGS, or the cost of goods sold, which more accurately reflects per-use costs.
In fact, other energy-intensive industries already count such costs as COGS. The energy used to smelt one ton of aluminum, for instance, is directly attributable to that ton and so included in COGS. The same is true of some tech businesses: AWS counts server electricity and cooling as COGS when they’re tied to delivering customer workloads.
It might seem like a minor accounting nuance, but in practice it can be the difference between an accurate picture of the underlying business economics and a misleading one.
Worse, Friedman said, the misconception is widespread, in part because of a distorting factor: compute credits. Many LLM startups are running on free or heavily discounted infrastructure from hyperscalers like AWS, Google, and Microsoft. That can mask how much it actually costs to serve customers and makes P&Ls look artificially healthy.
Historically, this isn’t new — from telecom to cloud to energy, investors have repeatedly overestimated early margins in infrastructure-heavy buildouts. What makes AI different, Friedman says, is the velocity. “One breakthrough model can obsolete your entire stack overnight,” he told Quartz. “And the open-source community is churning out those breakthroughs faster than ever.”
The dynamic is especially punishing for startups. Giants like Google and Microsoft own their data centers, negotiate electricity at industrial rates, and have spent decades building the infrastructure that powers AI. Startups, by contrast, are stuck renting such resources, often from the very same incumbents they hope to disrupt. That makes them hypersensitive to cost structure.
“And on top of that, they don’t have the financial cushion to absorb margin hits the way Big Tech can,” Friedman said. “So even if two companies are running the same model, the startup is paying far more, and feels it more acutely.”
That doesn’t mean we’re set to see mass failure of AI and LLM startups. But it could mean a reset. In Friedman’s view, many AI startups will be forced into uncomfortable decisions: trim the most compute-hungry features, hike prices, or settle into permanently low-margin operations.
“They’ll survive,” Friedman said, “but they won’t look like classic SaaS businesses with 80-90% margins. They’ll look more like infrastructure companies, where profits are thinner and growth takes longer.”
In other words, the Jevons Paradox in our own age could mean more efficiency and more demand – but for startups, at least, not necessarily more money.