Getting large language models (LLMs) to reason better is one thing. Getting them to do it without burning through absurd amounts of compute is another. A new research paper from TU Darmstadt, UCLA, Google DeepMind, and Mila digs deep into this trade-off — and might just change how AI developers think about scaling reasoning at test time.
The core tension? Whether LLMs should spend their compute generating more answers (what’s known as Self-Consistency, or SC), or verifying a few promising answers using Generative Reward Models (GenRMs). Turns out, choosing wrong can make your model waste up to 128 times more compute — for a barely noticeable performance bump.
The new math of reasoning at scale
LLMs like GPT-4, Llama, or Qwen have gotten shockingly good at solving math and science problems by generating multiple chains of thought (CoTs) and picking the most common result. That's the idea behind SC: brute-force wisdom of the crowd. But researchers have also been excited by GenRMs, a newer approach that lets LLMs act as their own judge by verifying candidate answers through further chain-of-thought reasoning.
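To make the distinction concrete, here is a minimal sketch of both strategies in Python. The `generate` and `verify` callables are hypothetical stand-ins for sampling a chain-of-thought answer and producing a verification score; the paper's actual prompting and aggregation details differ.

```python
from collections import Counter

def self_consistency(generate, problem, num_solutions):
    """SC: sample many chains of thought and majority-vote on the final answer."""
    answers = [generate(problem) for _ in range(num_solutions)]
    return Counter(answers).most_common(1)[0][0]

def genrm_best_of_n(generate, verify, problem, num_solutions, num_verifications):
    """GenRM (sketch): sample a few candidates, spend extra compute on
    chain-of-thought verifications of each one, and return the top-scored answer."""
    candidates = [generate(problem) for _ in range(num_solutions)]
    scores = []
    for answer in candidates:
        # Average several independent verification judgments (each in [0, 1]).
        judgments = [verify(problem, answer) for _ in range(num_verifications)]
        scores.append(sum(judgments) / num_verifications)
    return max(zip(candidates, scores), key=lambda pair: pair[1])[0]
```

The trade-off the paper studies is exactly the one visible here: every verification call consumes budget that SC would have spent on another candidate solution.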
Previous comparisons made GenRM look wildly efficient: matching SC’s accuracy with 4× fewer solutions. But this paper calls that framing out — hard. Why? Because nobody was counting the true compute cost of all those verification steps.
Compute budgets change everything
This study introduces a clean framework for measuring the real cost of SC and GenRM approaches under a fixed compute budget. It works like this: you can either spend compute generating more answers (SC), or split that budget between a few answers and many verifications (GenRM). Their model for total inference compute is refreshingly straightforward: C(S, V) = S(1 + λV), where S is the number of solutions, V is the number of verifications per solution, and λ is the average length of a verification relative to a solution.
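In code, the cost model is a one-liner. The helper below and the λ = 1 example (verifications as long as solutions) are illustrative assumptions for comparing configurations, not values taken from the paper.

```python
def inference_compute(num_solutions, num_verifications, lam=1.0):
    """Total inference compute C(S, V) = S * (1 + λ·V).

    num_solutions      -- S, chains of thought sampled per problem
    num_verifications  -- V, GenRM verifications per solution (0 for pure SC)
    lam                -- λ, average verification length relative to a solution
    """
    return num_solutions * (1 + lam * num_verifications)

# With λ = 1, pure SC with 32 solutions costs exactly as much as GenRM
# with 8 solutions and 3 verifications each:
assert inference_compute(32, 0) == 32
assert inference_compute(8, 3) == 32
```

Framing both methods against the same C is what lets the paper compare them fairly: the question is never "how many solutions does GenRM save?" but "what does the same budget buy under each strategy?"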
The brutal result: SC is still king (unless you're rich)
The experiments left little doubt. Across Llama and Qwen models, from 7B to 70B parameters, and across math and science reasoning tasks, the story repeated: SC outperformed GenRM at lower compute budgets. Only when compute scaled past 8× did GenRM catch up. And getting a modest 3.8% performance boost over SC required an eye-watering 128× more compute.
That result held up even for advanced “thinking models” like QwQ-32B, and on hard math datasets like AIME24. SC wins when compute is tight. GenRM only makes sense when compute is practically free — or when the problems are so difficult that verification pays off dramatically.
The smart way to use GenRM (if you must)
Still, the study doesn't dismiss GenRM entirely. In fact, it derives inference scaling laws for GenRM, a blueprint for compute-optimal problem solving. The key finding: when scaling GenRM, allocate compute toward generating solutions faster than verifications, roughly 1.5 to 2 times faster. In numbers, the optimal number of solutions scales with the compute budget as S ∝ C^0.57, while the optimal number of verifications scales as V ∝ C^0.39.
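One way to apply those power laws in practice is to anchor them to a known-good operating point and rescale as the budget grows. The reference point (4.0, 4, 2) and the rounding below are illustrative assumptions for the sketch, not calibration values from the paper.

```python
def compute_optimal_allocation(budget, reference=(4.0, 4, 2),
                               s_exp=0.57, v_exp=0.39):
    """Rescale a reference operating point (C0, S0, V0) to a new compute
    budget using the paper's scaling laws: S ∝ C^0.57, V ∝ C^0.39."""
    c0, s0, v0 = reference
    ratio = budget / c0
    solutions = max(1, round(s0 * ratio ** s_exp))
    verifications = max(1, round(v0 * ratio ** v_exp))
    return solutions, verifications

# Doubling the budget grows solutions faster than verifications,
# matching the exponent ratio 0.57 / 0.39 ≈ 1.46.
print(compute_optimal_allocation(8.0))  # -> (6, 3) from the (4.0, 4, 2) reference
```

The asymmetry in the exponents is the whole point: as budgets grow, most of the extra compute should go into drafting new solutions, with verification scaled up more slowly.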
This research leaves practitioners with a very practical guide: if compute is limited, trust SC and spend it on generating more solutions. If compute is abundant, and especially if you’re dealing with harder reasoning tasks, using GenRM with the right scaling balance might be worth it — but only with serious optimization.
For AI developers facing real-world constraints, the takeaway is almost comically simple: more thinking beats more verifying, unless you have near-infinite resources. And even then, verifying needs to be smart, efficient, and minimal.
The full paper, “When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning,” is available on arXiv, and the accompanying codebase is available on GitHub.