An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to eke massive performance gains out of reasoning AI models for much longer. According to the report's findings, progress from reasoning models could slow down as soon as within a year.
Reasoning models such as OpenAI's o3 have led to substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. The models can apply more computing to problems, which can improve their performance; the downside is that they take longer than conventional models to complete tasks.
Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model “feedback” on its solutions to difficult problems.
So far, frontier AI labs like OpenAI haven’t applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch.
That’s changing. OpenAI has said that it applied around 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. And OpenAI researcher Dan Roberts recently revealed that the company plans to prioritize reinforcement learning going forward, applying far more computing power to it than to initial model training.
But there’s still an upper bound to how much computing can be applied to reinforcement learning, per Epoch.

Josh You, an analyst at Epoch and the author of the analysis, explains that compute for standard AI model training is currently quadrupling every year, while compute applied to reinforcement learning is growing tenfold every 3-5 months. The growth of reasoning training will “probably converge with the overall frontier by 2026,” he writes.
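As a rough illustration of why that convergence happens, the sketch below uses assumed numbers (not figures from Epoch's report, beyond the two growth rates cited above): a slice of the compute budget growing tenfold every few months catches up to the total budget within a couple of years, after which it can only grow as fast as the frontier itself.

```python
# Back-of-the-envelope illustration (assumed numbers, not from Epoch's report).
# Reinforcement-learning compute growing 10x every ~4 months is roughly
# 10**3 = 1,000x per year, versus ~4x per year for overall frontier training
# compute. A small share of the budget growing that fast soon hits the ceiling
# of total available training compute.

rl_share = 1e-3                        # hypothetical: RL is 1/1,000th of the training budget today
rl_growth_per_year = 10 ** (12 / 4)    # 10x every ~4 months ≈ 1,000x per year
frontier_growth_per_year = 4           # ~4x per year for total training compute

years = 0
while rl_share < 1.0:
    # RL compute gains on the total budget by the ratio of the two growth rates each year
    rl_share *= rl_growth_per_year / frontier_growth_per_year
    years += 1

print(f"RL compute reaches the full training budget in about {years} year(s)")
# Past that point, RL compute can scale no faster than the overall frontier,
# which is the convergence Epoch projects around 2026.
```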
Epoch’s analysis makes a number of assumptions, and draws in part on public comments from AI company executives. But it also makes the case that scaling reasoning models may prove to be challenging for reasons besides computing, including high overhead costs for research.
“If there’s a persistent overhead cost required for research, reasoning models might not scale as far as expected,” writes You. “Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it’s worth tracking this closely.”
Any indication that reasoning models may reach some sort of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, like a tendency to hallucinate more than certain conventional models.