Cost-of-Pass: An Economic Framework for Evaluating Language Models
Abstract
The widespread adoption of AI systems in the economy hinges on their ability to generate economic value that outweighs their inference costs. Evaluating this tradeoff requires metrics that account for both performance and costs. We propose a framework grounded in production theory for evaluating language models by combining accuracy and inference cost. We introduce "cost-of-pass", the expected monetary cost of generating a correct solution. We then define the "frontier cost-of-pass" as the minimum cost-of-pass achievable across available models or a human expert, using the approximate cost of hiring an expert. Our analysis reveals distinct economic insights. First, lightweight models are most cost-effective for basic quantitative tasks, large models for knowledge-intensive ones, and reasoning models for complex quantitative problems, despite higher per-token costs. Second, tracking this frontier cost-of-pass over the past year reveals significant progress, particularly for complex quantitative tasks where the cost has roughly halved every few months. Third, to trace key innovations driving this progress, we examine counterfactual frontiers: estimates of cost-efficiency without specific model classes. We find that innovations in lightweight, large, and reasoning models have been essential for pushing the frontier in basic quantitative, knowledge-intensive, and complex quantitative tasks, respectively. Finally, we assess the cost reductions afforded by common inference-time techniques like majority voting and self-refinement, finding that their marginal accuracy gains rarely justify their costs. Our findings underscore that complementary model-level innovations are the primary drivers of cost-efficiency, and our economic framework provides a principled tool for measuring this progress and guiding deployment.
Community
This work proposes an economic framework, grounded in production theory, for evaluating language models by combining accuracy and inference cost under a single measure. It introduces Cost-of-Pass: the expected monetary cost of generating a correct solution for a problem, and then defines Frontier Cost-of-Pass: the minimum Cost-of-Pass achievable across available models or a human expert baseline.
With this framework, we quantify the economic benefit that language models provide over a human expert baseline, track the evolution of cost-efficiency over the past year across different task types, evaluate how essential various model innovations have been, and assess the economic value of common inference-time techniques.
Our findings indicate clear trends in cost-efficiency across model classes and task types, reflecting the broader dynamics of innovation in the field. These patterns, and the shifts we have observed over time, offer a window into how economic value is increasingly shaped by model-level advances rather than surface-level improvements.
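The two core quantities can be sketched in a few lines. This is a minimal illustration, not code from the paper's repository: it assumes a model is summarized by its average cost per attempt and its pass rate on a task, treats attempts as independent (so the expected number of attempts to obtain a correct solution is 1 / pass rate), and uses illustrative function names.

```python
def cost_of_pass(cost_per_attempt: float, pass_rate: float) -> float:
    """Expected monetary cost of generating a correct solution.

    With independent attempts, the expected number of attempts until
    the first success is 1 / pass_rate, so the expected cost is
    cost_per_attempt / pass_rate. A model that never succeeds has
    infinite cost-of-pass.
    """
    if pass_rate <= 0.0:
        return float("inf")
    return cost_per_attempt / pass_rate


def frontier_cost_of_pass(models: list[tuple[float, float]],
                          human_expert_cost: float) -> float:
    """Minimum cost-of-pass across available models or the human expert.

    `models` is a list of (cost_per_attempt, pass_rate) pairs; the
    human expert baseline is treated as always correct at a fixed cost.
    """
    candidates = [human_expert_cost]
    candidates += [cost_of_pass(c, r) for c, r in models]
    return min(candidates)


# Illustrative numbers: a cheap model that succeeds half the time beats
# both a pricier always-correct model and a $50 human expert baseline.
models = [(0.01, 0.5), (0.10, 1.0)]
print(frontier_cost_of_pass(models, human_expert_cost=50.0))
```

Under this sketch, the first model's cost-of-pass is 0.01 / 0.5 = 0.02, so the frontier value is 0.02. Counterfactual frontiers, as described above, would simply recompute the minimum after excluding a model class from the list.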
Paper: https://arxiv.org/abs/2504.13359
Repository: https://github.com/mhamzaerol/Cost-of-Pass
Benchmark: https://huggingface.co/CostOfPass
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Inference-Time Scaling for Complex Tasks: Where We Stand and What Lies Ahead (2025)
- Harnessing the Reasoning Economy: A Survey of Efficient Reasoning for Large Language Models (2025)
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond (2025)
- Scalable Best-of-N Selection for Large Language Models via Self-Certainty (2025)
- Self-Training Elicits Concise Reasoning in Large Language Models (2025)
- Efficient Inference for Large Reasoning Models: A Survey (2025)
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models (2025)