Samsung researcher Alexia Jolicoeur-Martineau has created the Tiny Recursion Model, or TRM, VentureBeat reports. This artificial intelligence (AI) model has only 7 million parameters, making it roughly 10,000 times smaller than large models like OpenAI's o3-mini or Google's Gemini 2.5 Pro. Yet it competes with, or even beats, them on hard reasoning tests.
The key is recursion, a method in which the model repeatedly refines its own answers, like thinking step by step to fix mistakes. "With recursive reasoning, it turns out that 'less is more'," says Jolicoeur-Martineau.
TRM simplifies an earlier approach called the Hierarchical Reasoning Model, which used two networks but was more complex. Instead, TRM uses one simple two-layer network that starts with a question, makes an initial guess, and updates both its hidden thinking space and the answer over loops until it stabilizes.
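The loop described above can be sketched in a few lines. This is a minimal, illustrative NumPy sketch of TRM-style recursive refinement, not the actual TRM code: the network shape, dimensions, loop counts, and function names (`tiny_net`, `trm_refine`) are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # embedding size (illustrative choice, not from the paper)

# One tiny two-layer network, shared across every refinement step.
W1 = rng.normal(0, 0.1, (3 * D, D))
W2 = rng.normal(0, 0.1, (D, D))

def tiny_net(x, y, z):
    """Two-layer MLP over the question x, current answer y, and latent z."""
    h = np.tanh(np.concatenate([x, y, z]) @ W1)
    return h @ W2

def trm_refine(x, n_outer=3, n_inner=6):
    """Refine a hidden 'thinking' state z, then the answer y, over loops."""
    y = np.zeros(D)  # initial answer guess
    z = np.zeros(D)  # hidden thinking space
    for _ in range(n_outer):
        for _ in range(n_inner):
            z = z + tiny_net(x, y, z)  # update the hidden reasoning state
        y = y + tiny_net(x, y, z)      # update the answer from that state
    return y

x = rng.normal(size=D)     # stand-in for an embedded puzzle
answer = trm_refine(x)
print(answer.shape)        # the refined answer embedding, shape (16,)
```

The point of the sketch is that a single small network, applied repeatedly, stands in for the depth that large models get from stacking many layers; the real model trains this loop end to end rather than using random weights as here.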
The model focuses on grid-based puzzles: Sudoku, where numbers must fill a grid without repeats; maze solving, where a path must be found through barriers; and the ARC-AGI benchmark, a set of visual tasks testing abstract reasoning that are easy for humans but hard for AI.
Performance and implications
Tests show TRM achieves 87 percent accuracy on extreme Sudoku, 85 percent on hard mazes, 45 percent on ARC-AGI-1, and 8 percent on the tougher ARC-AGI-2. These scores often top those of giant models while using far less computing power and data: only about 1,000 training examples per task. It trains affordably, even on modest hardware, unlike big models that demand huge amounts of energy and costly graphics processors. The code is open on GitHub under a free license, allowing anyone to use or modify it for research or business.
Reactions online praise TRM for proving small models can excel through clever loops, not size, but note it's specialized for puzzles, not general chat or text. Some see it as a step toward greener AI, while others say it highlights ways to add reasoning to larger systems. Overall, TRM suggests AI progress can come from minimalism, opening doors to cheaper, smarter tools for specific challenges.