Dr. Elisha Rosensweig, a researcher at Dicta and one of Israel's most incisive voices on AI, offers a market read you may not have seen in the headlines: LLM token prices are heading up, and the reasons run deeper than a simple supply story. In a recent post, he makes a counterintuitively optimistic argument about what this shift could mean for the workforce.

Why Token Prices Are Heading Up

Demand for large language model usage is growing faster than supply can scale to meet it. Rosensweig explains that companies like Anthropic and OpenAI have been pricing their services below cost, subsidized by investor capital, as a deliberate strategy to build market share. With IPO preparations underway, that subsidy period is coming to a close.

Early signals are already visible: Anthropic has begun restricting third-party tools from accessing its APIs at scale, and prices are trending upward. Structural supply constraints compound the issue: a global chip shortage, rising RAM costs, and a 3-4 year lead time for building the gigawatt-scale data centers that current demand requires.

The Hidden Economics of AI Pricing

Like many high-growth startups, AI companies deliberately underpriced their products to generate demand. The entity absorbing the cost gap was venture capital, not the end customer. Meanwhile, building data centers at this scale takes 3-4 years from initiation to operation, and most facilities serving AI traffic today were not originally built for that purpose.

The wave of optimism around new data center announcements that characterized recent years is already cooling. Supply-side growth will not keep pace with demand in the near term, which is a fundamental driver of the price shift ahead.

What Happens When Tokens Become a Scarce Resource

When token costs rise, organizations will need to make prioritization decisions they have largely avoided until now. Rosensweig argues that not every task justifies the cost of an AI call, so companies will direct AI toward use cases where it has a genuine structural advantage over human labor. For everything else, developers will return to writing the code themselves.

This is not necessarily a setback. When AI usage carries a real cost, the discipline of ROI-driven adoption replaces the assumption that more AI is always better. The quality of decisions about what to automate tends to improve when the price of being wrong is visible.
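The ROI discipline described above can be made concrete with a toy break-even calculation. This is a minimal sketch, not anything from Rosensweig's post: the function name `should_use_llm` and all token prices and labor rates are illustrative assumptions, chosen only to show how rising token prices flip the decision for small, bounded tasks.

```python
# Toy model of ROI-driven AI adoption: route a task to an LLM only when
# the expected token cost undercuts the human alternative. All figures
# here are hypothetical, not real vendor pricing.

def llm_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of a call that consumes `tokens` tokens."""
    return tokens / 1_000_000 * price_per_million

def should_use_llm(tokens: int, price_per_million: float,
                   human_hours: float, hourly_rate: float) -> bool:
    """True when the AI call is cheaper than the human baseline."""
    return llm_cost(tokens, price_per_million) < human_hours * hourly_rate

# A small, bounded bug fix (the kind a junior developer could handle):
# at a subsidized token price, the AI call looks like a bargain...
print(should_use_llm(tokens=50_000, price_per_million=15.0,
                     human_hours=0.25, hourly_rate=60.0))   # True

# ...but at a much higher post-subsidy price, the same task no longer
# justifies the call, and the work flows back to a person.
print(should_use_llm(tokens=50_000, price_per_million=400.0,
                     human_hours=0.25, hourly_rate=60.0))   # False
```

The model is deliberately crude (it ignores quality, latency, and retry loops), but it captures the core point: once the price of being wrong is visible on an invoice, "more AI" stops being the default answer.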

What This Means for Junior Developers

One of the most significant implications Rosensweig highlights is the reopening of entry points for early-career developers. When tokens are expensive, it stops making economic sense to assign an AI agent to small, bounded tasks that a junior developer could handle. The reflexive preference for AI over human talent will need to be weighed against actual cost.

Organizations that have abandoned code reading and writing entirely in favor of AI-generated output will also encounter maintenance challenges that are difficult to navigate without foundational knowledge. The skills built through manual coding retain their value even in an agentic development environment.

Rosensweig's Optimistic Case

Rosensweig acknowledges he would have preferred the market to self-correct because of a deeper appreciation for coding as a discipline, not because of pricing economics. But he welcomes the economic version of the correction. A temporary slowdown in the adoption of fully agentic coding creates space for the dust to settle, for organizations to build better knowledge practices, and for the next generation of developers to establish a foothold in the industry.