The AI that learnt from life

2026-04-22
3 min read.
A Spanish researcher points to Artificial Life as the missing piece in the contemporary AI debate.
(Credit: Tesfu Assefa).

Almost nobody in the AI debate asks why biological evolution solved problems of a complexity that today's artificial systems barely approach, and did so without runaway computational cost. A living organism reads its environment, adjusts its behaviour, learns from failure, anticipates, and acts with autonomy. It does all this in real time, with limited resources, and without anyone handing it a hundred billion parameters. That is why Spanish researcher Luis Martín points to Artificial Life as the missing piece in the contemporary AI debate.

Artificial Life is a discipline that studies how living systems emerge, adapt and evolve, and has spent decades building computational tools inspired by that logic. Martín positions it as the underlying infrastructure of his complex reasoning systems, because it offers solutions free from the prohibitive computational cost and the structural limitations that current AI models carry as standard.

Three principles

Three principles shape this approach.

The first is life at the edge of chaos, articulated by Langton and Packard: biological systems did not emerge in the rigid order of crystals, nor in the turbulent chaos of gases, but in the narrow margin between the two. Adaptation and evolution happen there, in that zone of productive tension.

The second is Chaitin's principle of algorithmic complexity: the computational representation of a system should be the optimal one, not the largest or the most elaborate. Efficiency as a design criterion, not scale.

The third is the most counter-intuitive for an industry fixated on raw power: complex biological systems are governed by simple and harmonious rules (simple, that is, not simplistic). Simple rules, complex behaviours. Complexity is not built by stacking layers. It emerges from below.
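To make "simple rules, complex behaviours" concrete, here is a minimal sketch in Python (an illustration of ours, not Martín's code) of an elementary cellular automaton. Rule 110 is a textbook edge-of-chaos rule: each cell consults only itself and its two neighbours, yet the global pattern it produces is rich enough to be Turing-complete.

```python
# Elementary cellular automaton (Rule 110): a minimal sketch.
# Each cell's next state depends only on its three-cell neighbourhood;
# the bits of the rule number serve as the lookup table.

RULE = 110  # output table for the 8 possible neighbourhoods, packed into one byte

def step(cells):
    """Update every cell from its three-cell neighbourhood (wrap-around)."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations = 64, 32
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Printed as text, the rows trace the characteristic Rule 110 triangles: neither frozen nor random, exactly the regime Langton described.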

The tools

On these principles, Martín's research works with three families of architectures that the mainstream debate barely recognises. Cellular automata generate patterns of extraordinary complexity from a handful of local rules, with no central controller and no conductor. Lindenmayer systems, originally developed to model plant growth, show that the most sophisticated morphology can emerge from minimal grammars. Genetic systems apply the logic of selection and variation to find solutions no human designer would have anticipated. Three distinct families, one shared principle: let complexity emerge from well-chosen rules.
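Two equally small sketches, with the caveat that these are generic textbook examples rather than anything from Martín's research. First, a Lindenmayer system: every symbol is rewritten in parallel each generation, and two productions (Lindenmayer's original algae model) already yield strings whose lengths follow the Fibonacci sequence.

```python
# L-system: parallel rewriting of every symbol, generation by generation.

RULES = {"A": "AB", "B": "A"}  # Lindenmayer's algae model

def generation(s):
    """Apply every production rule simultaneously across the string."""
    return "".join(RULES.get(ch, ch) for ch in s)

s = "A"  # axiom
for n in range(8):
    print(f"n={n} length={len(s)}: {s}")
    s = generation(s)
```

Second, a genetic system in miniature: truncation selection plus random mutation over a population of bit strings. The all-ones target is an arbitrary toy fitness, chosen only to show the loop; nothing about the solution is hand-designed.

```python
# Toy genetic algorithm: selection and mutation climbing a fitness function.

import random

TARGET = [1] * 20  # arbitrary toy goal: a string of all ones

def fitness(genome):
    """Number of positions where the genome matches the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # perfect match found
    survivors = population[: len(population) // 2]  # truncation selection
    children = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"generation {gen}: best fitness {fitness(best)}/{len(TARGET)}")
```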

Why now

The timeliness of this approach is practical. The dominant language models are extraordinarily expensive in energy, data and compute. That makes them dependent on massive infrastructure, hard to deploy where resources are constrained, and virtually impossible to audit in their internal reasoning. Artificial Life systems, by their nature, operate from efficiency: less reliance on vast datasets, greater capacity for autonomous adaptation, and a logic of operation that can be exposed and justified.

Transparency

This last dimension — observability, the capacity of a system to show how it reached a conclusion — is the minimum requirement for deploying AI in any context where decisions carry real consequences. And it is precisely what the bioneurocognitive architecture proposed by Martín holds as a design requirement from the outset.

#ArtificialLife

#ArtificialLifeResearch


