Imagine microscopic, self-assembled robots crafted from human cells, meticulously engineered to navigate the body and repair tissues at a cellular level. This visionary concept is now a reality with Anthrobots—dynamic biological machines constructed from adult human lung cells. These biobots demonstrate the incredible plasticity of human cells, reshaping our understanding of cellular function. By leveraging advanced bioengineering techniques, scientists have created structures capable of self-assembly, motility, and even tissue repair, offering transformative possibilities in medicine. From targeted therapies to regenerative breakthroughs, Anthrobots could revolutionize healthcare.
From Airway Cells to Self-Propelled Spheres
The journey of Anthrobots begins with adult normal human bronchial epithelial (NHBE) cells—cells that traditionally form flat airway linings. By culturing these cells in three-dimensional environments and releasing them into a low-viscosity medium, researchers induced a remarkable transformation. These cells self-assembled into ciliated spheroids, flipping their polarity to reorient cilia outward. This reconfiguration enabled the spheroids to propel themselves autonomously, an extraordinary demonstration of cellular plasticity. Unlike traditional bioengineering, this process required no genetic modification or external scaffolding, showcasing the latent potential of human cells to construct functional, motile structures.
A Spectrum of Movement and Morphology
Anthrobots are not uniform in their behavior. They exhibit distinct motility patterns—some moving in tight circles, others along straight paths, and a third group adopting curvilinear trajectories that blend circular and linear motion. This behavioral diversity corresponds to specific morphological traits. Smaller, spherical Anthrobots predominantly exhibit circular motion, while larger, irregularly shaped bots are inclined to move linearly. Curvilinear Anthrobots combine traits of both. These insights, derived from extensive timelapse video analyses, reveal that Anthrobots are not random assemblages but intricately organized systems with clear correlations between their structure and function.
Bilateral Symmetry: A Universal Blueprint
Remarkably, Anthrobots display bilateral symmetry, a characteristic integral to many living organisms. Linearly moving Anthrobots, in particular, exhibit pronounced symmetry along their movement axis. This finding highlights parallels between these synthetic lifeforms and natural developmental principles shaping animal bodies. Detailed analyses of cilia distribution patterns confirmed this symmetry, further emphasizing the intricate self-organization of Anthrobots. Such observations suggest that these biobots may offer insights into fundamental biological processes while serving as a platform for advanced bioengineering.
Facilitating Repair in Damaged Neuronal Tissue
Beyond their inherent motility, Anthrobots have demonstrated remarkable therapeutic potential. When introduced to neuronal cell sheets with induced scratches, Anthrobots actively traversed these gaps, promoting the regrowth of neurons. This healing capability was absent in control setups with passive materials, underscoring the active role of Anthrobots in tissue repair.
Researchers took this a step further by creating “superbots” through the fusion of multiple Anthrobots. These larger constructs acted as living bridges over neuronal gaps, accelerating robust tissue regeneration beneath them. The implications are profound, offering a glimpse into future applications of Anthrobots as vehicles for delivering therapeutic agents or stimulating localized regeneration in damaged tissues.
Expanding the Frontiers of Biomedicine
The study of Anthrobots raises critical questions about the potential of adult human cells to self-organize into functional, motile systems. This work represents a significant step in understanding the breadth of morphological and behavioral possibilities achievable with wild-type genomes, free from genetic manipulation or external fabrication.
The capacity of Anthrobots to traverse and repair damaged tissues, combined with their ability to self-assemble and exhibit diverse motility patterns, positions them as powerful tools for regenerative medicine. Future research could explore enhancing their functionality through synthetic biology, potentially creating biobots tailored for specific therapeutic purposes.
Conclusion
Anthrobots exemplify the untapped potential of human cells to self-construct into sophisticated, functional entities. From their beginnings as airway epithelial cells to their transformation into motile, tissue-repairing biobots, they represent a groundbreaking intersection of biology and engineering.
This research not only advances our understanding of cellular plasticity but also opens doors to new medical paradigms. With continued exploration, Anthrobots could lead to innovations in bio-robotics, regenerative therapies, and beyond, paving the way for a future where our cells serve as autonomous agents of healing and discovery.
Reference
Gumuskaya, Gizem, Pranjal Srivastava, Ben G. Cooper, Hannah Lesser, Ben Semegran, Simon Garnier, and Michael Levin. “Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells.” Advanced Science 11, no. 11 (2024): 2303575. https://doi.org/10.1002/advs.202303575.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
It’s December 2024, and all hell has broken loose in crypto markets: everything from Bitcoin to meme coins to NFTs to AI agent coins to last-cycle “dino coins” like XRP is ripping to the upside following Donald Trump’s election win, which in 2025 will usher in the most crypto-friendly U.S. administration in history.
With Bitcoin kissing $100k, the crypto market has entered what veteran traders call ‘the Banana Zone’, and everyone is losing their minds. Bitcoin keeps smashing through all-time highs. It had its biggest-ever one-month jump in November 2024 (a $26,000 jump), billionaires are scrambling to buy whatever Bitcoin they can find, and investment firms are desperately telling their clients to buy Bitcoin “urgently.” Even Jim Cramer has flipped pro-Bitcoin again. What can go wrong? Welcome to the wild end of 2024.
Remember though – this isn’t financial advice of any kind. When the Fear And Greed Index sits on 90 (‘Extreme Greed’), markets are overheated. Anything can happen, particularly in the Banana Zone. The market can stay irrational longer than you can stay solvent. Keep your head straight, don’t bet the farm, and enjoy watching history unfold.
What Is This ‘Banana Zone’ Thing?
Macro investor Raoul Pal came up with this term to describe those insane periods when Bitcoin’s price chart starts looking like a banana – it goes up almost vertically. But this isn’t just about the price going crazy.
If you haven’t read his explanation of it, take ten minutes and read this post twice. And then once again.
The Banana Zone is a cyclical market pattern observed since 2008, when global interest rates were reset to zero and debt maturities standardized to 3-4 years. This created a predictable macro cycle driven by global liquidity, visible in ISM data (published by the Institute for Supply Management). The cycle involves currency debasement through liquidity increases, which helps service debt rollovers and causes asset prices to rise.
Growth assets, particularly tech and crypto, perform best during ‘Macro Summer and Fall’ periods. Crypto has outperformed tech significantly. It has grown at twice the internet’s historical rate, following Metcalfe’s Law adoption patterns.
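Metcalfe’s Law models a network’s value as proportional to the square of its user count, which is why adoption-driven assets can grow so explosively. A minimal sketch of the idea (the constant `k` and the user counts below are purely illustrative, not figures from Pal’s analysis):

```python
def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Network value under Metcalfe's Law: V = k * n^2."""
    return k * users ** 2

# Doubling the user base quadruples the modeled value.
v1 = metcalfe_value(1_000_000)
v2 = metcalfe_value(2_000_000)
print(v2 / v1)  # -> 4.0
```

The quadratic relationship is the point: linear growth in adoption implies superlinear growth in modeled value, which is one way to rationalize why crypto has outpaced the internet’s historical adoption curve.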
While the cycle is generally predictable due to the debt structure, current factors like elections and China’s foreign relations may influence the outcome. The final leg’s exact structure remains uncertain, though the pattern is expected to continue.
Banana Zone in Practice
Just look at what’s happening: MicroStrategy just dropped another $2 billion to buy 27,200 Bitcoin. The over-the-counter trading desks, where the big players usually buy their Bitcoin, are running dry. Some traders are reporting there might be as little as 20,000-40,000 Bitcoin left available for large purchases. When you consider that Bitcoin ETFs have been buying roughly 100,000 Bitcoin in just the past few weeks, you start to understand why prices keep shooting up.
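A rough back-of-envelope calculation makes the squeeze concrete. Assuming, purely for illustration, that the roughly 100,000 BTC of ETF buying was spread over four weeks (the article says “the past few weeks”), the reported OTC inventory doesn’t last long:

```python
# All numbers are rough estimates quoted in the article, not live market data.
otc_supply_low, otc_supply_high = 20_000, 40_000  # BTC reportedly left on OTC desks
etf_weekly_demand = 100_000 / 4                   # ~100k BTC assumed over four weeks

weeks_until_dry_low = otc_supply_low / etf_weekly_demand
weeks_until_dry_high = otc_supply_high / etf_weekly_demand
print(f"OTC desks could run dry in roughly "
      f"{weeks_until_dry_low:.1f}-{weeks_until_dry_high:.1f} weeks")
```

At that pace of demand, reported OTC inventory covers only one to two weeks of ETF buying, which is why every marginal buyer pushes the price.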
A Perfect Storm Has Hit
Several major factors have created the perfect conditions for this Banana Zone.
The Bitcoin halving earlier this year cut the new supply in half, right when demand started going through the roof. China has started pumping trillions into their markets to fight off recession. Central banks worldwide are beginning to lower interest rates again. Everyone is looking for somewhere to put their money that might actually keep up with inflation.
Investment firms are waking up to Bitcoin in a big way. Bernstein, a major investment firm, literally told their clients to “urgently” get Bitcoin exposure. That’s not the kind of language Wall Street typically uses. When Wall Street starts sounding like crypto Twitter, you know something unusual is happening.
This Time Hits Different
Previous Bitcoin bull markets were driven mainly by retail investors and pure speculation. This time around, the grown-ups have given their blessing: BlackRock’s ETF announcement last year gave the thumbs-up to TradFi to get behind BTC.
Major Wall Street firms are buying Bitcoin. Investment banks are creating crypto trading desks. Corporate treasuries are allocating serious money to Bitcoin. Even traditional banks are starting to offer crypto services to wealthy clients.
The supply squeeze is also more intense than ever before. Miners are holding onto their coins as their operations become more profitable with every price increase. Long-term holders aren’t selling. ETFs need to keep buying to meet demand. The available supply of Bitcoin is getting squeezed from all sides.
Warning Signs to Watch
The Banana Zone typically ends when the market reaches peak euphoria. Here are some classic warning signs to watch for as we move into 2025: your taxi driver starts giving you Bitcoin price predictions, people start quitting their jobs to become full-time crypto traders, everyone starts saying “this time is different.” When you see these signs, it might be time to start paying extra attention. Already seeing these signs? Well, it might be time to de-risk a little. After all, the key to riches is not just to make it, but to keep it.
But before that happens, we could very well see some mind-bending price action. The combination of limited supply and increasing institutional demand could drive prices to levels that seem impossible right now.
Don’t Slip in The Banana Zone
While watching the Banana Zone unfold is exciting, markets don’t go up forever. Every previous crypto bull market has ended with a significant pullback. The difference this time might be in how high we go before that happens, and how far we fall when it does.
The current Banana Zone could extend well into 2025, especially with all the institutional money flowing in and the supply getting more constrained by the day. But markets are unpredictable beasts. The best approach is to understand what’s happening and why, rather than trying to predict exactly how it will play out.
Pal says that 20 to 30% pullbacks are normal and to be expected, so it’s wise to be well prepared for these drawdowns and the pain that they’ll bring, in particular to altcoins.
One thing’s for certain: we’re living through a historic moment in financial markets. The Banana Zone of 2024-2025 is showing us what happens when an emerging asset class like Bitcoin starts to get mainstream acceptance right when its supply is getting squeezed. Whether you’re participating or just watching from the sidelines, this is going to be one hell of a show next year. If you hear the word WAGMI even in jest, run for the hills.
The potential benefits of artificial intelligence are huge, as are the dangers.
—Dave Waters
Introduction
Artificial intelligence (AI) is becoming deeply embedded in modern life, reshaping industries, communication, and even culture. Yet, beneath the surface of this technological marvel lies a concern that often goes unnoticed: the linguistic biases of multilingual large language models (LLMs) like Llama-2. These biases, especially their reliance on English as a latent processing language, could subtly erode linguistic diversity, marginalize non-English cultures, and inadvertently contribute to global cultural homogenization. This article highlights the mechanisms that lead to these biases and explores their far-reaching implications.
The English-Centric Lens
Multilingual LLMs such as Llama-2 are predominantly trained on datasets dominated by English, often comprising a vast majority of their training corpora. Despite this imbalance, they perform impressively across multiple languages. However, a closer look at their internal processing reveals an intriguing yet concerning mechanism: the use of English as a latent “concept space.”
Recent research tracking Llama-2’s embeddings through its layers demonstrates a three-phase progression. Initially, input data resides far from output embeddings in the model’s high-dimensional space. In the middle layers, the embeddings transition to an abstract conceptual representation that aligns more closely with English than with the input language. Finally, the embeddings adjust to output the appropriate language-specific tokens.
This mechanism is akin to a mental translator: an input sentence in Japanese might be internally represented as an English abstraction before being processed and rendered back into Japanese. While this process explains Llama-2’s robust multilingual capabilities, it reinforces an English-dominant perspective, skewing its linguistic neutrality and exacerbating cultural disparities.
Lost in Translation: Bias in Translation and Representation
One practical implication of this bias lies in translation. For example, when encountering an idiomatic phrase such as the Spanish “dar en el clavo” (literally, “to hit on the nail”), Llama-2 might default to a generic English-centric equivalent like “to get it right” instead of a translation that preserves the cultural context and imagery of the original Spanish. This often results in translations that lose cultural nuance, oversimplifying or misrepresenting the richness of the original expression.
Such distortions extend beyond semantics. When AI operates predominantly through an English lens, it risks diluting the cultural essence embedded in language, particularly in contexts such as literature or oral traditions where word choice and phrasing hold deep cultural significance.
Cultural Implications: “Winners Write History”
Linguistic bias in LLMs also has profound ethical implications. The phrase “winners write history” captures the historical tendency for dominant groups to shape narratives according to their perspectives. LLMs trained on English-dominated datasets may unknowingly perpetuate such biases, privileging dominant cultural viewpoints while marginalizing others.
Consider the role of multilingual AI in generating or summarizing historical content. The inherent reliance on English representations risks introducing subtle shifts in how historical events are framed, aligning them with English-speaking cultural narratives. Over time, such biases could influence collective memory, perpetuating inequalities in cultural representation and equity.
Ethical Considerations and the Path Forward
The findings from the Llama-2 study underscore the urgency of addressing linguistic biases in AI development. While the use of English as a latent processing language enhances generalization and cross-lingual tasks, it inadvertently prioritizes English-centric perspectives over others. This raises critical questions: How can we ensure AI systems are linguistically and culturally inclusive? And what steps can be taken to mitigate the risks of cultural homogenization?
Efforts must begin with the training datasets themselves. Diversifying training corpora to include more balanced representations of non-English languages is a crucial first step. Additionally, rethinking architectural designs to minimize reliance on a single dominant language during intermediate processing could help preserve linguistic diversity.
Conclusion
Generative AI, and LLMs in particular, hold unparalleled potential to bridge linguistic divides and democratize knowledge. Yet, as the case of Llama-2 reveals, they also risk perpetuating biases that reinforce cultural hierarchies and suppress diversity. As we continue to develop and deploy multilingual LLMs, the onus is on researchers, developers, and policymakers to ensure these systems promote inclusivity rather than cultural assimilation. Understanding the mechanics of linguistic biases is not just an academic exercise—it is essential to building an AI-powered future that respects and celebrates the world’s cultural diversity.
Reference
Wendler, Chris, Veniamin Veselovsky, Giovanni Monea, and Robert West. “Do Llamas Work in English? On the Latent Language of Multilingual Transformers.” arXiv.org, February 16, 2024. https://arxiv.org/abs/2402.10588.
Notes: For those interested in reading the full paper, we have summarized it below as a starter.
Core Question: The study investigates if multilingual models like Llama-2, trained predominantly on English data, exhibit an English-centric bias in their internal computations, even when handling non-English languages. This concern is significant for understanding inherent linguistic biases in these models.
Findings:
The research identified three distinct phases in how Llama-2 processes language inputs: (a) Initial input embedding, (b) transition through an abstract “concept space” closer to English, and (c) final token prediction specific to the input language.
The “concept space” aligns more closely with English representations, indicating a potential English-centric intermediary processing stage.
Methodology:
The researchers used logit lens analysis to interpret the latent embeddings at various layers of the model. This method decodes token distributions at intermediate layers to understand language representation.
To ensure clear results, they designed prompts with unambiguous continuations in multiple languages.
Implications: The English-centric processing may influence multilingual performance and highlight potential biases that could affect applications in low-resource languages.
Joe Rogan has been all over the news headlines recently. He played a role in Donald Trump’s victory in the recent U.S. presidential election with his interviews of Trump, J.D. Vance, and Elon Musk, and his perfectly timed last-minute endorsement.
Many people love Rogan for this, and many hate him. But this is nothing new; Joe Rogan has had his share of controversy over the years. Here I’ll focus on his interviews with scientists and technologists, which should be less controversial than politics. Should be.
One of Joe Rogan’s podcast guests, Roger Penrose, is as respectable as scientists get. He won the Nobel Prize in Physics 2020 “for the discovery that black hole formation is a robust prediction of the general theory of relativity.”
I’ve been reading a new biography about him titled The Impossible Man: Roger Penrose and the Cost of Genius by Patchen Barss (November 2024). Penrose appeared on Joe Rogan’s podcast in 2018. “I had this interview when I was in the States with this chap called Joe Rogan,” says Penrose as reported in the book.
I recently watched the interview again; Penrose is always worth listening to, and Rogan asks interesting questions.
Patchen Barss accuses Rogan in the biography of “providing a high-profile platform to pseudoscientists, conspiracy theorists, and other perpetrators of misinformation. Rogan mixes credible scientists in with crackpots, making it difficult for people to know which is which.” Barss sources this claim from an article published by the Office for Science and Society at McGill University titled “Science vs. Joe Rogan”.
The scientists and technologists Rogan interviews have at least three things in common: first, they are all reputable scientists or technologists. Second, they are all interested in the Big Questions to which we all would like to hear answers. Third, at one time or another, they have all expressed ideas that go against the scientific (or cultural and political) consensus.
Sure, Rogan has interviewed people of less firm reputation as well, including people that some like to dismiss as crackpots or pseudoscientists.
Some people who call themselves scientists have accused Joe Rogan of promoting misinformation on COVID-19 and vaccines, platforming fringe theories and giving equal footing to pseudoscience alongside genuine science, giving airtime to conspiracy theories, and conducting interviews with scientists and technologists in a casual manner without pushback or fact-checking.
They have expressed concern about how Rogan could shape public opinion, especially among younger listeners or those who might take what they hear on his show as authoritative. They fear that this could lead to a general distrust in science or skepticism towards mainstream scientific consensus.
Some of them find Rogan’s approach to be anti-science, particularly when he mocks or dismisses scientific consensus or when he engages in or encourages scientific debates on fringe theories.
No, it is scientific dogma that has a problem
I think science itself has no problem with Joe Rogan. It is scientific dogmatists that have a problem. By scientific dogmatists I mean the zealots who want to protect the scientific establishment from the disruptive spirit of inquiry.
I have a problem with the dogmatists who have a problem with Joe Rogan. Open inquiry must be defended against scientific dogmatism. The soul of science is freedom to question theories and assumptions, and this must be protected against censorship and excessive backlash.
‘Misinformation’ and ‘pseudoscience’ are in the eye of the beholder. Often, ‘misinformation’ is information that the authorities don’t want the people to know. Often ‘pseudoscience’ is science that contradicts the scientific establishment and its paradigms.
I said often, which doesn’t mean always. But ‘often’ is enough to give Joe Rogan the benefit of doubt, and to praise his excellent work to bring science and technology closer to the little people like us.
Rogan brings a broad spectrum of views to the table, including those from scientists with different opinions, which can lead to a better understanding of complex issues. This can be beneficial in fields where there’s active debate or where the science is evolving.
By discussing science in a casual, accessible manner, Rogan makes science more approachable for the average person. This can demystify science, making it less intimidating and more integrated into everyday conversation.
Rogan’s style encourages listeners to question information, which can be a double-edged sword, but does promote skepticism and critical analysis. This can be useful in encouraging people to look into scientific claims independently, fostering a culture of inquiry.
Even more important is Rogan’s willingness to tackle controversial topics, and give a platform to new science or technology that the public is not yet aware of. Even if not all the information presented is correct, his ability to influence culture can stimulate public enthusiasm and support for science and emerging technologies.
In June 1967, the Beatles premiered a glorious new song: “All You Need Is Love”. The performance was the UK’s contribution to the world’s first global satellite television broadcast, watched simultaneously by over 400 million people in 25 different countries. The broadcast occurred during what became known as the Summer of Love, and the song became a powerful anthem of flower power.
The Beatles’ manager Brian Epstein had described the performance as the band’s finest moment, but it turned out that singing “all you need is love” wasn’t quite enough to bring about a world of peace and harmony.
Almost exactly 50 years later, a group of eight researchers at Google were searching for a title for an article they were about to publish. They settled on “Attention is all you need” – the title being the brainchild of the only Briton on the team, Llion Jones, who had grown up in north Wales, not far from Liverpool, the home of the Beatles. The article has attained legendary status within the global AI community, for its introduction of the transformer technology that underpins breakthrough AI initiatives such as ChatGPT.
Despite omitting architectural features that were previously thought to be essential for many text-based processing tasks, transformers excelled in these same tasks. The key innovation, which was to pay special attention to whichever parts of the input appeared most salient, turned out to give these AI systems a strong competitive advantage. The “Attention is all you need” paper correctly predicted that transformers could handle not just text but also other kinds of data, including pictures and sounds.
How far might transformers take AI? A third claim has increasingly been heard: “Scale is all you need”. Feed transformer systems ever larger amounts of data, and provide them with ever more powerful computer chips to crunch that data into models with ever greater numbers of parameters, and there are no limits to the degree of intelligence that can result. The “scale is all you need” hypothesis anticipates AIs with fully general reasoning capabilities emerging simply from doing more of the same.
In this context, I want to examine yet another “all you need” hypothesis. It’s a hypothesis that is already changing investment decisions and personal career trajectories. It’s the hypothesis that, whatever major problem you hope to solve, the best way to solve it is to start by creating general intelligence.
In this way of thinking, AI is all you need. An AI with god-like abilities will be able to race ahead of slow-witted humans to solve all the fundamental problems of science, medicine, and human existence.
Dario Amodei, CEO of Anthropic, states it as follows: “I think that most people are underestimating just how radical the upside of AI could be… My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years”.
Amodei gives some examples of the discoveries that AI-enabled science could make:
“Design of better computational tools like AlphaFold and AlphaProteo”
“More efficient and selective CRISPR” (for gene-editing)
“More advanced cell therapies”
“Materials science and miniaturization breakthroughs leading to better implanted devices”
“Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue”
“Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases”.
Who wouldn’t like such a vision?
According to this logic, spending effort in the next few years to create AI with these capabilities is a better investment than spending the same effort to improve biology and medicine here and now.
Funding is what marshals effort, and any funds at someone’s disposal should, it appears, be directed towards improving AI, rather than towards companies or foundations that are seeking to improve biology or medicine. Right?
A two-step mission statement
Back in February 2015, Demis Hassabis was relatively unknown. There had been a bit of press about the purchase of his company, DeepMind, by Google, for £400 million, but most people had little conception of what the company would accomplish in the following years.
DeepMind’s mission statement at the time was a two-step plan: solve intelligence, and then use intelligence to solve everything else. On a Reddit page from nearly ten years ago, you can read some fascinatingly scathing comments about that mission statement:
“Ridiculous and poorly-defined goals”
“FFS [what] a mission statement [for] a company”
“‘Fundamentally solve intelligence’ in the linked screenshot above is a whole load of nonsense”
“I don’t even think we have a working definition for ‘intelligence’ yet. We don’t even know how it works in humans… How can we hope to recreate it before knowing what it is?”
Once intelligence has been “fundamentally solved”, it should be relatively straightforward to solve climate change, economic distribution, cancer, dementia, and aging, right?
After all, given an AI model that can correctly predict how a long string of amino acids will fold up as a protein in three dimensions, won’t a scaled-up version of that model be able to predict other interactions between biochemical molecules – and, indeed, to predict how biological cells will respond to all kinds of proposed interventions?
The data bottleneck
One person striking a note of caution against exuberant forecasts of rapid progress in AI for medicine was David Baker of the University of Washington, who shared the Nobel Prize with Demis Hassabis.
In an article published in MIT Technology Review shortly after the Nobel Prize, Baker pointed out that “AI needs masses of high-quality data to be useful for science, and databases containing that sort of data are rare”.
Indeed, the stunning success of DeepMind’s AlphaFold AI was fundamentally dependent on prior decades of painstaking work by numerous scientists to assemble what is known as PDB – the protein data bank.
The third of the joint winners, John Jumper of DeepMind, acknowledged this dependency in a press conference after the prize was announced. Jumper said, “I also want to really thank the giants on whose shoulders we stand, I think the entire experimental community, the people that developed the ability to measure protein structures, especially to Helen Berman and other pioneers of the Protein Data Bank, the PDB, who had the foresight to put these data together to make it available”.
Helen Berman had pioneered the PDB from 1971. As she graciously commented in a recent interview, “I am a very lucky person to have had an idea as a student, pursued that idea for more than 50 years, and then seen brand new science emerge for which three people have won this year’s Nobel Prize. It is really gratifying”.
Remarkably, Berman’s interest in protein folding predates even the Beatles song. In an online living history memoir written in 2012, Berman notes “In 1966 …I became fascinated by the world of protein folding. As part of my Ph.D. qualifier, … I proposed to perform structure-based sequence comparisons of known proteins…”.
Progress in determining protein structures was slow for a long time before becoming faster. This slide from a 2009 presentation by Berman, graphing the growth in the total number of proteins documented in the PDB, will look familiar to anyone acquainted with singularitarian ideas:
In the MIT Technology Review article, ‘A data bottleneck is holding AI science back’, David Baker pointed out that “If the data that is fed into AI models is not good, the outcomes won’t be dazzling either. Garbage in, garbage out”.
The subtitle of that article says it straightforwardly: “AI’s usefulness for scientific discovery will be stunted without high-quality data”.
So, we can forget “AI is all we need”. Before we can develop an AI that can solve aging for us, we will need to obtain suitable data on which that AI can be trained. We’ll need the equivalent of PDB for all the interventions that might remove or repair the low-level biological damage that we call aging.
Unless, that is, the AI has a very special kind of superintelligence, which allows it to reach conclusions even in the absence of adequate data. Let’s turn to that option next.
AI Zero?
AlphaGo, the AI which achieved worldwide renown in March 2016 by defeating human Go superstar Lee Sedol, gained that ability by studying around 160,000 games played between expert-level human Go players. The design of that version of the AI utterly depended on learning which moves tended to be selected by the best human players in a wide variety of situations.
AlphaGo’s success against Lee Sedol was rightly celebrated, but what happened in the following year was arguably even more startling. As reported in an article in Nature in October 2017, a new version of the AI, dubbed “AlphaGo Zero”, was given no data from human games; nor did it receive any human feedback on moves it suggested. Instead, it started tabula rasa, knowing only the rules of the game, before proceeding to play itself 4.9 million times in just three days.
AlphaGo Zero’s new self-play algorithms proved sufficient to reach higher levels than the earlier version (sometimes called “AlphaGo Lee”) that played Lee Sedol. When AlphaGo Zero played 100 games against AlphaGo Lee, it won every single game.
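The principle behind that result – strong play emerging from the rules alone, via self-play – can be illustrated at toy scale. The sketch below is my own illustration, not DeepMind’s method (AlphaGo Zero combined self-play with deep neural networks and Monte Carlo tree search): a tabular agent learns the simple game of Nim, where players alternately take one or two stones and whoever takes the last stone wins. Starting with nothing but the rules, it rediscovers the classic winning strategy of leaving its opponent a multiple of three stones.

```python
import random

def legal_moves(stones):
    # the rules of the toy game: take 1 or 2 stones;
    # whoever takes the last stone wins
    return [m for m in (1, 2) if m <= stones]

def train_self_play(episodes=50000, alpha=0.1, eps=0.2, start=9, seed=0):
    """Tabula-rasa self-play: the agent plays both sides, explores
    occasionally, and updates move values from final game outcomes."""
    random.seed(seed)
    Q = {}  # Q[(stones, move)] = estimated value for the player about to move
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            moves = legal_moves(stones)
            if random.random() < eps:          # explore a random move
                move = random.choice(moves)
            else:                              # exploit current knowledge
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        reward = 1.0  # the player who took the last stone has just won
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward  # zero-sum game: flip perspective each ply

    return Q

def best_move(Q, stones):
    return max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))

Q = train_self_play()
```

After training, `best_move(Q, 4)` returns 1 and `best_move(Q, 5)` returns 2 – in each case leaving the opponent three stones, the losing position. No human game records were consulted, which is the essence of the “Zero” approach.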
A similar pattern can be observed in the progress of AIs that process text. The trend is to require less and less explicit human guidance.
Consider AIs that translate between two languages. From the 1950s onward, designers of these systems provided ever-larger numbers of rules about grammar and sentence structure – including information about exceptions to the rules. Later systems depended on AIs observing, by themselves, statistical connections in various matching sets of text – such as the official translations of materials from the European Parliament, the Canadian Parliament, and the United Nations.
Managers noticed that the statisticians tended to produce better results than linguists who toiled to document every jot and tittle of grammatical variations. Infamously, Frederick Jelinek, a lead researcher at IBM, remarked that “Every time I fire a linguist, the performance of the speech recognizer goes up”. Performance jumped up again with the adoption of deep neural networks from 2012 onward, with the translations now being accurate not only at the word-for-word level but also at the sentence-for-sentence level.
A final significant jump came when transformer-based AIs were adopted. (The word “transformer” had been chosen to reflect the ability of these systems to transform text from one language into another.) As mentioned earlier, transformers are powerful because their algorithms can work out the strengths of connections between different parts of text input by themselves; they don’t need these connections to be pointed out by humans.
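That self-discovery of connection strengths can be sketched in a few lines. Below is a minimal NumPy implementation of scaled dot-product attention, the core transformer operation: each token’s output is a weighted mix of all tokens, with the weights computed from the input itself rather than supplied by human annotators. (An illustrative sketch only; production transformers add learned projection matrices, multiple attention heads, and much else.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # pairwise affinities between tokens, computed from the input itself
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # softmax each row into a probability distribution over tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # each output row is a weighted mix of the value vectors
    return weights @ V, weights

# toy self-attention: 3 "tokens", each a 4-dimensional embedding
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
```

The matrix `w` holds the strengths of the connections between every pair of tokens – the quantity that, in earlier systems, humans tried to encode by hand as grammatical rules.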
Could something similar happen with medical AIs of the future? Could such an AI find sufficient reliable information in an ocean of less reliable data, and therefore propose what steps should be taken to solve aging?
AI omniscience?
To recap: AlphaGo Lee needed detailed guidance from humans before it could improve itself to superhuman level; but its successor, AlphaGo Zero, attained that level (and exceeded it) simply by the power of its own vast intelligence.
Might it be similar with medical AI? Today’s AI medical systems are constrained by the extent of data, but might a future AI be able to work out all the principles of biology (including biology in which there is no aging) by starting tabula rasa (with a blank slate)?
The Beatles’ “All You Need Is Love” claimed that “there’s nothing you can know that isn’t known”; the ‘all you need is AI’ approach would mean there’s nothing that can be known that the AI doesn’t know. Effectively, the AI would be omniscient.
Well, count me sceptical. It’s my view that some things need to be discovered, rather than simply deduced.
For example, why are there eight planets in our solar system, rather than thirteen? No principles of astronomy, by themselves, could determine that answer. Instead, the configuration of our solar system depends on some brute facts about the initial conditions under which the solar system formed. The only way to know the number of planets is to count them.
Again, why has life on our planet adopted a particular coding scheme, in which specific triplets of the nucleotides A, T, C, and G result in specific amino acids being formed? Why did Homo sapiens lose the ability to synthesize vitamin C, along with other genetic features that would have been useful to us? Why are particular genes found on specific chromosomes? The only way to know which genes are located where is to look and see. No “AI Zero” is going to discover the answer by meditating in a void.
Therefore, I do not accept that “AI is all you need”. Data is also needed – specifically, critical data.
This need is correctly recognized in the article Machines of Loving Grace by Dario Amodei, which I’ve already quoted. Amodei includes in the article “a list of factors that limit or are complementary to intelligence”. One of these items is “Need for data”.
Amodei comments: “Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent – other than perhaps by speeding up the construction of a bigger accelerator.”
AI as Principal Investigator?
Amodei offers a bold solution to this lack of data: “The right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on.”
Amodei adds: “It is by speeding up the whole research process that AI can truly accelerate biology.”
He continues: “I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. …I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.”
Amodei highlights the power of intelligence to transcend the limitations of its data: “You might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems implausible to me – I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world”. Replace the “large group of really smart people” by an artificial superintelligence, and Amodei expects progress in science to rocket forward.
It’s an attractive vision, and I urge everyone to read Amodei’s entire essay carefully. (It covers many more topics than I can address in this article.)
But in case anyone is inclined to deprioritize existing research into promising lines of rejuvenation biotechnology, I have four remaining points: three concerns, and one huge opportunity.
Three concerns and a huge opportunity
My first concern is that the pace of progress in AI capabilities will significantly slow down. For example, the data scaling laws may hit an impasse, so that applying more data to train new AI systems will fail to create the kind of superintelligence expected.
Personally I think that such a “wall” is unlikely, especially since AI developers have many other ideas in mind for how AI could be improved. But the possibility needs to be considered.
Second, it’s possible that AI capabilities will continue to surge ahead, but the resulting AI systems get involved in catastrophic harm against human wellbeing. In this scenario, rather than the AI curing you and me of a fatal condition – aging – it will cause us to die as a side-effect of a bad configuration, bad connectivity to fragile global infrastructure, an alien-like bug in its deep thinking processes, or simple misuse by bad actors (or naïve actors).
The leaders of the corporations which are trying to create artificial superintelligence – people like Demis Hassabis, Dario Amodei, Sam Altman, Elon Musk, Ben Goertzel, and a number of Chinese counterparts – say they are well aware of these dangers, and are taking due care to follow appropriate safety processes. But creating artificial superintelligence is an intensely competitive race, and that risks corners being cut.
My third concern is that, alarmed by such dangers, the world’s governments will agree to pause or tightly restrict the development of frontier AI. This agreement, with appropriate monitoring and enforcement mechanisms, would have the same effect as in the first concern above: AI progress hits a wall. But this time, it will be a wall imposed by regulations, rather than one intrinsic to the engineering of AI.
Some critics have responded that the chances are very slim for such an agreement to be reached and adopted. However, I disagree. That’s on account of both a stick and a carrot.
The stick is the growing public awareness of the catastrophic risks that new generations of AI bring. (That awareness is still on the slow part of the exponential growth curve, but may well accelerate, especially if there is a scandalous disaster from existing AI systems, something like an AI Chernobyl.)
The carrot is a clearer understanding that all the benefits we want from artificial superintelligence can also be obtained from an AI with humbler powers – an AI that:
Is only modestly more capable than today’s best AIs
Lacks any possibility to develop autonomy, sentience, or independent volition
Will remain a passive, safe, but incredibly useful tool.
In a moment, I’ll say more about this huge opportunity. But first, let me interject an analogy about the choices facing humanity, as we contemplate how we might manage AI.
Peaceful progress or violent overthrow?
“Tear down the barricades!”
“Expropriate the expropriators!”
“Lock up the élites!”
“String up the capitalists!”
“Overthrow the ruling class!”
Such are the calls of revolutionaries in a hurry. However, the lesson of history is that violent revolutions tend to end up “devouring their own children” – to quote a phrase spoken by Jacques Mallet du Pan (referring to the French Revolution sending its original leaders to the guillotine) and also by former Hitler loyalist Ernst Röhm.
Similar remarks could have been uttered by many of the one-time supporters of Vladimir Lenin or Joseph Stalin, who subsequently found themselves denounced and subject to show trials.
However, the saying is not entirely correct. Some revolutions avoid subsequent internal bloodbaths: consider the American War of Independence, and the Glorious Revolution of 1688 in England.
When revolutionaries uphold principle ahead of power-seeking, maintain a clear grip on reality (rather than becoming lost in self-deception), and continue to respect wise process (rather than allowing dictatorial leaders to do whatever they please), a revolution can lead to sustained progress with increased human flourishing.
Now consider the difference between what can be called “democratic socialists” and “Marxist-Leninists”. The former highlight ways in which the plight of the working class can be alleviated, stage by stage, through gradual societal reform. The latter lose patience with such a painstaking approach, and unleash a host of furies.
In case it’s not clear, I’m on the side of the democratic socialists, rather than the would-be revolutionaries who make themselves into gods and absolute arbiters.
For how humanity chooses to develop and deploy AI, I see the same choice between “harness accelerationists” and “absolute accelerationists”.
Harness accelerationists wish to apply steering and brakes, as well as pressing firmly on the throttle when needed.
Absolute accelerationists are happy to take their chances with whatever kind of AI emerges from a fast and furious development process. Indeed, the absolute accelerationists want to tear down regulation, lock up safety activists, and overthrow what they see as the mediocrity of existing international institutions.
Once again, in case it’s not clear, I’m on the side of harnessing acceleration. (Anyone still on X aka Twitter can see the “h/acc” label in my name on that platform.)
Harnessing requires more skill – more finesse – than keeping your foot pressed hard to the floor. I understand why absolute accelerationists find their approach psychologically comforting. It’s the same appeal as the Marxist promise that the victory of the working class is inevitable. But I see such choices as being paths toward humanitarian catastrophe.
Instead, we can proceed quickly to solving aging, without awaiting the emergence of a hopefully benevolent god-like AI.
Solving aging – without superintelligence
Above, I promised three concerns and one huge opportunity. The opportunity is that it’s pretty straightforward to solve aging, without waiting for a potentially catastrophically dangerous artificial superintelligence. There are low-hanging fruits which aren’t being picked – in part because funding for such projects is being diverted instead to AI startups.
Aging occurs because the body’s damage-repair mechanisms stop working. Our metabolism runs through countless biochemical interactions, and low-level biological damage arises as a natural consequence – due to injuries inflicted by the environment, bad lifestyle choices, the inevitable side-effects even of good lifestyle choices, or (perhaps) because of programmed obsolescence. When we are young, lots of that damage is routinely repaired or replaced soon after it occurs, but these replacement and repair mechanisms lose their effectiveness over time. The consequence is that our bodies become more prone to all sorts of disease and infirmity. That’s aging.
The most promising path to solving aging is to comprehensively reinforce or complement these damage-repair mechanisms. The low-hanging fruit is that we have a long list of ways this might be achieved:
By taking inspiration from various animal species in which at least some of the damage-repair mechanisms are better than in humans
By understanding what’s different about the damage-repair mechanisms in ‘human superagers’
By designing and applying new interventions at the biotech or nanotech levels.
To be clear, this does not mean that we have to understand all of human biological metabolism. That’s horrendously complicated, with numerous side-effects. Nor do we even need to understand all the mechanisms whereby damage accumulates. Instead, we just need to observe, as engineers, what happens when new damage-repair mechanisms are applied in various animals.
These mechanisms include senolytics that clean up senescent cells (sometimes called “zombie cells”), therapies that extend the telomeres at the ends of chromosomes, interventions that reverse some of the epigenetic alterations that accumulate on our DNA, specially programmed new stem cells, nanoparticles that break up accumulated plaques and tangles, treatments that re-energise the mitochondria within our cells – and much more.
In each case, some useful research is being done on the viability of introducing these repair mechanisms. But nothing like enough.
Above all, we need tests of the long-term effects of damage-repair mechanisms when applied in combination. These tests can determine something that even an artificial superintelligence would find difficult to predict by meditating in a void: which damage-repair interventions will positively synergize with each other, and which ones have antagonistic effects.
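To make the synergy question concrete, here is one simple heuristic borrowed (loosely) from drug-combination studies: Bliss independence, which predicts the combined effect of two interventions acting independently, and flags observed results that beat or fall short of that prediction. All the effect sizes below are invented for illustration; analyzing real lifespan data, as in the RMR projects, is far more involved.

```python
def bliss_expected(effect_a, effect_b):
    # expected combined effect if A and B act independently,
    # for effects expressed as fractions in [0, 1]
    return effect_a + effect_b - effect_a * effect_b

def classify_combination(effect_a, effect_b, observed_ab, tol=0.02):
    expected = bliss_expected(effect_a, effect_b)
    if observed_ab > expected + tol:
        return "synergistic"
    if observed_ab < expected - tol:
        return "antagonistic"
    return "roughly additive"

# invented numbers: fraction of some damage marker removed by each intervention
print(classify_combination(0.20, 0.15, 0.40))  # observed beats expected 0.32 → synergistic
print(classify_combination(0.20, 0.15, 0.25))  # observed falls short → antagonistic
```

The point of the sketch is the one made above: the inputs to this arithmetic – the observed combined effects – can only come from running the experiments, not from deduction.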
These are the kind of tests being pursued by one organisation where I need to declare an interest: the Longevity Escape Velocity Foundation (LEVF), where I have a role on the leadership team, and whose underlying ideas I have supported for nearly 20 years since first coming across them in meetings of what was the forerunner of London Futurists.
LEVF is carrying out a number of extended projects on large numbers of mice, involving combining treatments that have already been proven to individually extend the lifespan of mice treated from middle age. Interim results of the first such project, RMR1, can be reviewed here (RMR = Robust Mouse Rejuvenation), and plans for the second one, RMR2, have been posted here.
Rather cheekily, may I suggest that the 1967 slogan of the Beatles, All you need is love, got two letters wrong in the final word?
Two scenarios for trying to solve aging
To conclude, I envision two competing scenarios ahead, for how aging should be solved:
An “AI first” strategy, in which important research into rejuvenation biotechnology is starved of funding, with money being preferentially allocated to general AI initiatives whose outcomes remain deeply uncertain.
A “damage-repair research now” strategy, in which projects such as RMR2 receive ample funding to proceed at pace (and, even better, in multiple different versions in parallel, including in animals larger than mice), with the data produced by such experiments then being available to train AIs which can complement the ingenuity of pioneering human researchers.
What’s your pick?
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Why do some memories linger for a lifetime while others vanish soon after forming? Neuroscientists have long grappled with this puzzle, especially given the brain’s constant renewal of its molecular components. A recent breakthrough study has illuminated a key aspect of this phenomenon: the interaction between two proteins, KIBRA and PKMζ, provides a scaffolding that stabilizes memory. This remarkable discovery not only explains how long-term memory persists but also opens potential avenues for treating memory-related disorders like Alzheimer’s disease.
The Challenge of Long-Term Memory Retention
Memory is stored in the brain through the strengthening of synaptic connections, a process known as long-term potentiation (LTP). However, the machinery that sustains these connections, from proteins to receptors, is continually replaced. This turnover creates a paradox: how can memories persist for decades if the molecular components of the brain are so short-lived? Decades ago, Francis Crick suggested that stable interactions between molecules might hold the key to this mystery. Building on this idea, researchers focused on KIBRA, a scaffolding protein tied to memory performance, and PKMζ, a kinase known for its role in sustaining synaptic changes.
KIBRA and PKMζ: The Anchors of Memory
KIBRA and PKMζ work together to maintain the stability of synaptic connections crucial for memory. PKMζ, an atypical kinase, is persistently active, bypassing the need for external activation, and plays a vital role in reinforcing the connections at memory-encoded synapses. KIBRA serves as a docking station for PKMζ, anchoring it at synapses to ensure that the structural changes necessary for memory endure. Without KIBRA’s anchoring action, PKMζ cannot localize effectively, causing the connections to weaken and memories to fade.
Experiments highlighted how synaptic activity intensifies the interaction between KIBRA and PKMζ. Using cutting-edge proximity ligation assays, researchers observed that strong neuronal stimulation led to the accumulation of KIBRA-PKMζ complexes at active synapses. These complexes were particularly concentrated in regions of the brain linked to learning and memory, such as the hippocampus.
Testing the Limits of the KIBRA-PKMζ Partnership
The study probed deeper by introducing antagonists that blocked the interaction between KIBRA and PKMζ. The consequences were striking: blocking this interaction disrupted established LTP while leaving basal synaptic transmission unaffected. This finding underlined the selective importance of the KIBRA-PKMζ complex in maintaining memory-associated synaptic changes.
To explore how this disruption impacted behavior, the researchers tested mice trained in spatial memory tasks. Mice conditioned to avoid a specific area of their environment lost this memory after the introduction of an inhibitor that blocked KIBRA-PKMζ coupling. However, their ability to learn and form new memories remained intact. This specificity highlights the unique role of KIBRA and PKMζ in stabilizing existing memories, distinguishing it from the processes involved in learning.
A Molecular Mechanism That Defies Turnover
One of the study’s most intriguing findings was how the KIBRA-PKMζ interaction enables memory to persist despite the turnover of individual molecules. The research confirmed that while the proteins themselves are replaced within hours or days, the interaction between them is continuously re-established at memory-relevant synapses. This dynamic process ensures that the “scaffold” supporting memory remains intact even as its components are renewed.
The researchers proposed a model of “persistent synaptic tagging,” where KIBRA functions as a tag that attracts newly synthesized PKMζ molecules to the right locations. This process sustains the structural integrity of memory-encoded synapses, making it possible for memories to outlast the lifespan of their molecular building blocks.
Implications for Memory Disorders and Treatment
The discovery of the KIBRA-PKMζ interaction has profound implications for understanding and treating memory-related conditions. If this partnership is essential for maintaining memory, then therapies aimed at enhancing or mimicking this interaction could potentially combat age-related memory decline or neurodegenerative diseases like Alzheimer’s.
One promising avenue involves developing drugs that stabilize or amplify the KIBRA-PKMζ connection, strengthening synaptic changes and improving memory retention. Conversely, selectively disrupting this interaction could help erase traumatic memories, offering hope for conditions such as post-traumatic stress disorder (PTSD). The study also raises the possibility of targeted therapies that activate PKMζ in specific brain regions, enhancing memory resilience in the early stages of dementia.
A Glimpse Into the Future of Memory Research
The partnership between KIBRA and PKMζ represents a significant leap forward in our understanding of how the brain preserves memories over time. By acting as a molecular scaffold, these proteins ensure the stability of synaptic changes that underpin long-term memory. This discovery not only sheds light on the biological basis of memory but also offers a blueprint for future research and therapeutic innovation.
As scientists continue to unravel the intricacies of this molecular duo, the potential for groundbreaking treatments becomes clearer. From enhancing memory in aging populations to mitigating the effects of neurodegenerative diseases, the KIBRA-PKMζ connection offers a promising pathway for addressing some of the most pressing challenges in neuroscience.
Conclusion
Memory, once thought to be an ephemeral and mysterious phenomenon, is now being understood at the molecular level. The discovery of the KIBRA-PKMζ partnership provides a compelling explanation for how the brain defies the constraints of molecular turnover to preserve memory. This research not only advances our understanding of the brain’s inner workings but also points toward a future where memory loss is no longer an inevitable consequence of aging or disease.
Through continued exploration of this groundbreaking mechanism, we inch closer to unlocking the full potential of memory science and delivering tangible benefits to those affected by its decline. The intricate dance between KIBRA and PKMζ reminds us that even the smallest molecular interactions can have profound implications for the human experience.
Reference
Tsokas, Panayiotis, Changchi Hsieh, Rafael E. Flores-Obando, Matteo Bernabo, Andrew Tcherepanov, A. Iván Hernández, Christian Thomas, et al. “KIBRA Anchoring the Action of PKMζ Maintains the Persistence of Memory.” Science Advances 10, no. 26 (June 26, 2024). https://doi.org/10.1126/sciadv.adl0030.
It is now possible to create a virtual replica of you that, “is able to embody your values and preferences with stunning accuracy,” MIT Technology Review reports (unpaywalled copy).
This is the conclusion of a study by a team including researchers from Stanford and Google DeepMind, posted on arXiv.
Imagine if you could sit down with an Artificial Intelligence (AI) for a chat that lasts two hours, where you talk about everything from your childhood memories to your views on big issues like immigration. After your conversation, this AI could create an avatar that acts, thinks, and responds just like you.
This study involved over 1,000 people from diverse backgrounds who were interviewed for two hours each. The goal was to create ‘simulation agents’, which are AI models designed to reproduce the behavior of a specific person with high accuracy. These agents then went through the same set of tests as the humans to see how well they could replicate their behaviors, achieving an impressive 85% similarity.
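A note on that 85% figure: reports of the study indicate it is a normalized accuracy – the agent’s agreement with a participant’s answers, divided by how consistently the participant answered the same questions when retested weeks later (people do not perfectly replicate even themselves). A toy version of that calculation, with invented survey answers, might look like this:

```python
def match_rate(answers_a, answers_b):
    # fraction of questions on which two sets of answers agree
    assert len(answers_a) == len(answers_b)
    return sum(a == b for a, b in zip(answers_a, answers_b)) / len(answers_a)

def normalized_accuracy(human_first, human_retest, agent):
    # agent-vs-human agreement, normalized by the human's own
    # test-retest consistency
    raw = match_rate(human_first, agent)
    consistency = match_rate(human_first, human_retest)
    return raw / consistency

# invented toy answers to a 10-item survey (e.g. 5-point scale responses)
human_first  = [3, 5, 1, 2, 4, 4, 3, 5, 2, 1]
human_retest = [3, 5, 2, 2, 4, 4, 3, 5, 2, 1]   # one answer drifted on retest
agent        = [3, 5, 1, 2, 4, 3, 3, 5, 2, 2]
print(round(normalized_accuracy(human_first, human_retest, agent), 2))  # → 0.89
```

Normalizing this way credits the agent only for the portion of a person’s answers that the person themselves would reproduce.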
Simulation agents
Simulation agents act as digital stand-ins for real people, allowing researchers to simulate social and behavioral scenarios. This can be useful for studying things like the spread of misinformation or how people might react in certain social situations.
These agents differ from the more common tool-based agents, which perform specific tasks like booking appointments or retrieving information rather than simulating human interactions or personalities.
John Horton, an associate professor from MIT, highlights that this approach represents a hybrid model where real human data feeds into AI personae that researchers use in simulations, opening new channels of research that would be too costly or ethically challenging with real people.
To make these AI agents, the researchers used qualitative interviews, where they talked to participants to gather rich, detailed information about their lives and views. This method proved more effective than traditional surveys for capturing the unique traits of individuals.
Joon Sung Park, who led the study, noted that interviews can reveal profound details about a person – like surviving a serious illness – which might not come out in typical surveys. This approach is also more time-efficient for creating a detailed profile.
This new study suggests that with just a couple of in-depth conversations, it could be possible to create a fairly accurate digital twin. This could spark interest in companies like Tavus, which might now look into using similar techniques for their services.
Deepfake AI persons?
Advancements in AI open up discussions about privacy, consent, and the ethical implications of digital replication.
This technology isn’t without its risks. Just like AI can create deepfakes – fake media that looks real – this could lead to unauthorized digital impersonations. The methods used to assess the accuracy of these AI replicas were basic; they included standard social surveys and personality assessments, which might not capture all the nuances of human individuality. For instance, AI agents struggled more with behavioral tests that measure fairness and altruism.
Personality capture and preservation
However, there are other methods to create digital twins, where AI models learn from a person’s emails or other digital footprints, which yield a lot more data.
This is akin to the methods of personality capture and preservation proposed by William Sims Bainbridge and Martine Rothblatt: collecting and preserving digital reflections of an individual’s personality, essentially creating a digital version of a person. Their ideas sound like science fiction that could, one day, become science fact.
Bainbridge and Rothblatt propose to capture one’s personality by accumulating data including answers to structured questionnaires, but also emails, social media posts, blogs, photos, and online activities.
This collection forms a ‘mindfile’, which Rothblatt describes as a sum of saved digital reflections about an individual. The idea extends to using this data with future AI technologies to emulate or upload a person’s consciousness into a digital or robotic form, thereby achieving a form of digital immortality.
Bainbridge and Rothblatt believe that digital technology could capture human identity and personality. With sufficient technological advancement, digital data and AI could recreate or preserve an individual’s essence after their physical death.
I think answers to questionnaires, emails, social media posts, blogs, photos, and online records collected with current means could not achieve the rich texture needed to emulate a real human personality.
But I don’t rule out the possibility that new brain-machine interfaces like Neuralink could make the process viable.
Of course this would be light years ahead of the preliminary experiments of the Stanford and Google DeepMind researchers. But we can think of these preliminary experiments as encouraging baby steps toward the distant goal of creating a digital replica of a person. Technology could eventually allow us to endow the replica with consciousness.
Popular culture is warming up to the idea
Greg Gutfeld and his guests discussed the MIT Technology Review article in Gutfeld’s very popular show.
Of course the discussion was fun, as appropriate to a comedy show. But the very fact that the host of a popular show discussed these arcane and weird ideas shows that popular culture is warming up to the possibility of creating digital persons.
The crypto gaming industry in 2024 is evolving into a new form, in a transformation far from complete. That’s the message of a comprehensive new Q3 2024 report on GameFi.
Game7’s report shows Web3 gaming is reshaping the global gaming landscape. Telegram is emerging as a launchpad for new titles, and Web2 gaming powerhouses are cautiously testing these waters. This shift puts pressures on traditional mobile platforms to adapt, while they grapple with restrictive policies and regulatory uncertainty.
Here’s how the Web3 gaming ecosystem is growing in 2024, and how it is segueing into a potentially bumper 2025.
Web3 Gaming Ecosystem
The total crypto gaming market cap peaked in early Q2 this year at around $40 billion, and has since halved, as hotter narratives like Bitcoin, AI, and meme coins ate its share.
This doesn’t mean Game Over for the crypto gaming sector – the report’s data shows positive signs.
Blockchain ecosystems like Immutable and Arbitrum are thriving, driving rapid growth with innovative offerings, and attracting developers from rival platforms. While game announcements dipped 36% from 2023, fewer projects abandoned development, showing a maturing ecosystem.
Indie creators dominated: they drove over 90% of releases, favoring the RPG, casual, and action genres. Meanwhile, platforms like Epic Games Store embraced Web3, hinting at gaming’s bold new chapter amid ongoing challenges with integration and adoption.
Geographically, the Asia-Pacific region leads Web3 game development, accounting for nearly 40% of new titles, followed by North America at 35%. The USA contributes 27% of teams, with South Korea and Singapore following.
Genre trends remained stable: RPGs, casual, and action games top the charts as developers refine Web3 integration within the old familiars before any creative daring.
Telegram Reshapes Web3 Gaming
In addition to the rise of crypto trading bots on Telegram, the messaging app emerged as a surprising force in Web3 gaming in 2024. Telegram has carved a niche as a preferred platform for launching Web3 games, with 21% of new titles choosing it as their home. Distribution trends for new Web3 games largely followed patterns from 2023, but Telegram’s rise is a new turning point, as this graph shows –
What makes Telegram so appealing? Simplicity. Its massive user base (950 million monthly active users) and frictionless in-app experience sidestep the challenges of traditional app stores like Apple’s and Google’s.
To illustrate the discontent game developers have with Big Tech’s app stores: Epic Games – the maker of Fortnite – sued Google and Samsung for blocking competition. For developers like these, Telegram answers the call for a way to distribute games without high store fees or restrictive NFT policies.
Even so, many game publishers are crafting Telegram strategies – not for full-fledged titles, but as a complementary user-acquisition stream.
Telegram’s success could also disrupt the status quo in mobile distribution. Its integration of blockchain gaming within a messaging app shows how traditional channels might adapt, or risk being left behind.
Traditional Gaming Giants Enter Web3
Traditional gaming platforms are beginning to tiptoe into Web3 territory, blending innovation with caution. The Epic Games Store (EGS) has taken a leading role, adding 85 new Web3 titles in the past year. While it lacks dedicated Web3 tools, its open-minded content policies have made it a trusted partner for blockchain projects.
EGS is also expanding its global presence, launching on Android and EU iOS in 2024. Though its mobile stores don’t yet welcome third-party Web3 games, its pro-Web3 stance on PC keeps it a platform to watch. Meanwhile, Steam holds firm on its controversial Web3 restrictions. Still, savvy developers are finding workarounds by releasing modified versions of their games to leverage Steam’s massive market reach.
Consoles are also testing the Web3 waters. Off the Grid by Gunzilla Games made waves as the first Web3 game to debut on consoles. By separating blockchain features from core gameplay, it hints at a future where platforms like Microsoft and Sony may cautiously embrace this new frontier. As policies evolve, Web3 teams could gain access to the vast console audience, reshaping gaming as we know it.
Key Blockchain Stats
The blockchain gaming ecosystem grew rapidly in 2024, with 105 gaming-capable blockchain networks announced – a 20% increase from the previous year. Of these, 64% were designed specifically for gaming, reflecting rising developer interest in tailored blockchain solutions.
A major shift is happening toward Layer-2 and Layer-3 networks. L3 solutions, which only gained traction in 2023, accounted for 43% of new gaming blockchains this year, thanks to their customizable environments. L2s followed closely, making up 34% of the new entries. Though Layer-1 (L1) networks still host 66% of active games, most 2024 launches embraced L2 or L3 solutions for better scalability and gaming-specific features.
Ethereum Virtual Machine (EVM) continues to dominate as the go-to smart contract environment. Despite rising competition from ecosystems like Solana and Sui, developers are hesitant to leave the EVM due to its mature tools and lower switching costs. For now, EVM remains the backbone of blockchain gaming innovation.
Funding and Tokens
Web3 gaming funding in 2024 held steady, matching 2023 levels. Investors focused on content over infrastructure. Here again the same genres account for most of the market share: action, RPG, and casual games. Gaming chains and engagement platforms also attracted attention, but cautious optimism prevailed as investors awaited the next Axie Infinity moment.
Meanwhile, gaming token launches skyrocketed, up 200% from the previous year. By Q3, 74% of these launches were tied to game titles, reflecting a clear shift toward monetizing gameplay. The industry remains conservative, but the surge in tokens shows a drive for in-game economies and blockchain-powered experiences.
Web3 gaming projects face a complex regulatory landscape in the USA, dominated by the SEC and CFTC. With the election of crypto-friendly Donald Trump, these regulations might finally be clarified.
Shifts in the U.S. political climate suggest the possibility of a more collaborative regulatory approach. Many projects are adapting by designing tokens with utility-focused mechanics, aligning more closely with traditional gaming models. While some delays in token launches persist, developers are finding ways to integrate blockchain elements into gameplay, balancing innovation with compliance.
Conclusion
Web3 gaming in 2024 is evolving, driven by innovation, indie creativity, and platforms like Telegram. Developers are refining blockchain integration while navigating regulatory and technical hurdles. Traditional platforms are slowly adapting, hinting at a broader shift to Web3. Challenges remain, but the foundation for growth is strong.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Donald Trump will be back in the White House in 2025, possibly bringing with him a more business-friendly approach to new digital technology sectors like AI and crypto. His ally Elon Musk has repeatedly warned that AI overreach could end mankind, but will Trump heed these warnings and rein in AI? Or follow the ideology that deregulation leads to economic growth? Only time will tell.
Trump is pro-business, and that provides good negotiating ground with AI giants like OpenAI, Google, and Microsoft as they try to find the right regulatory frameworks to contain the exponentially evolving machine-learning sector.
Trump’s victory in the 2024 presidential election is sure to have significant implications for the development and regulation of artificial intelligence (AI) and cryptocurrency-related AI projects. While this could enhance the competitive position of U.S. tech firms, it also raises significant concerns about ethical standards, international cooperation, and the trajectory of technological growth.
The implications of these changes will be felt globally, as nations navigate the complexities of AI governance – and AI competition – in an increasingly interconnected world.
Here’s an analysis of the biggest potential impacts on AI that we might see in 2025:
The tech industry, particularly AI companies and startups, may welcome this less restrictive environment. It could potentially accelerate the development and deployment of AI technologies without the constraints of extensive governmental oversight. This approach might foster rapid innovation and allow U.S. companies to maintain a competitive edge in the global AI race.
Trump’s approach is likely to be welcomed by major tech companies that advocate for a lighter regulatory touch. Many in the industry argue that stringent regulations stifle innovation and competitiveness. However, companies can’t be relied on to self-regulate: they will surely prioritize profit over ethical considerations.
This is part of Trump’s broad economic agenda to reduce regulations on all businesses – not just digital. It could potentially lead to rapid advancements in AI capabilities – or to rapid AI catastrophe!
Potential Risks of Deregulation
The deregulatory approach raises concerns about safety, ethics, and accountability in AI development. Experts warn that without adequate regulatory frameworks, the risks associated with AI – such as bias, misinformation, and privacy violations – could escalate. The lack of robust oversight may hinder efforts to establish ethical standards and best practices in AI development, which are crucial for addressing the technology’s societal impacts.
Advocates of deregulation say it will spur innovation; however, only regulation can establish the ethical standards and best practices needed to address AI’s societal impacts.
Influence over Global Standards
Trump’s deregulatory policies – and his poor reputation internationally – do not bode well for the USA’s ability to influence the international norms and regulations governing AI. Historically, the USA shaped global AI standards simply because it is home to the leading tech firms and research institutions: Google, Microsoft, MIT, etc. As countries like China continue to advance their AI capabilities, a weak regulatory landscape in the USA might hinder its competitive edge.
Geopolitical Tensions
Trump’s administration has signalled its intention to continue and expand Biden’s protectionist measures. The USA could attempt tighter export controls on AI technologies, particularly to China. Such actions could create barriers to global cooperation in AI development and governance, exacerbating geopolitical tensions, and limiting the USA’s ability to lead global discussions on responsible AI use.
Cryptocurrency and AI Projects
Trump’s election victory is seen as potentially beneficial for cryptocurrency-related AI projects. His administration is likely to foster an environment that encourages innovation and investment in blockchain technologies and digital currencies. This could be advantageous for startups looking to develop new crypto solutions without heavy regulatory scrutiny.
The cryptocurrency market has already shown a positive response to Trump’s victory, with Bitcoin reaching an all-time high. This surge reflects the market’s anticipation of a more crypto-friendly regulatory environment under Trump’s leadership.
However, the lack of regulation could also lead to increased volatility in the cryptocurrency markets. A deregulated environment may attract speculative investments, but could also expose investors to higher risks associated with fraud and market manipulation.
Uncertainty for Innovators
The anticipated changes in regulation could introduce uncertainty for businesses involved in AI development. Companies might face challenges maintaining compliance with rapidly shifting regulations, and this could impact investment decisions and strategic planning. The lack of clear guidelines may deter some innovators from pursuing ambitious projects due to fears of potential backlash or future regulatory changes.
National Security and Defense AI
In the realm of defense and national security, Trump’s administration might pursue a less stringent regulatory environment for AI development related to military technologies. This could lead to closer relationships between the government and private organizations involved in developing defense-related AI. However, this approach might also diminish prospects for international cooperation on defense-related AI governance, potentially exacerbating global tensions in military technology development.
Economic Impact
Trump’s policies are expected to prioritize private sector gains in AI development. This approach might help companies move fast without adequate safeguards! While this could spur economic growth and innovation in the short term, it raises concerns about long-term consequences regarding consumer safety and privacy – as well as catastrophic AI risk.
Workforce and Education
The rapid advancement of AI under a deregulated environment could have significant implications for the workforce. It might create new job opportunities in the tech sector, but accelerate job displacement in other industries. Trump’s administration will need to address these challenges, potentially through workforce retraining programs and education initiatives focused on AI and related technologies.
Global Competitiveness
Trump’s approach aims to keep the USA ahead of other regions, particularly Europe and China, in AI development. The administration hopes that reducing regulatory barriers will spur domestic innovation and give the country a competitive edge over more regulated countries. However, this strategy also risks widening the gap between the USA and other nations in terms of AI governance and ethical standards.
Conclusion
Donald Trump’s victory signals a significant shift in the U.S. approach to AI and crypto-related AI projects. The anticipated deregulation is likely to spur rapid innovation and investment in these sectors. However, this may come at the cost of safety, ethics, and long-term societal impacts.
The success of this approach will depend on how well the administration can balance the drive for innovation with necessary safeguards. It will also require careful navigation of international relations, particularly in managing technology transfers and global AI governance.
As the AI landscape mutates under Trump’s leadership, the tech industry, policymakers, and the public will need to remain vigilant. They must work to ensure that the benefits of AI advancement are realized, while mitigating potential risks and ethical concerns. The coming years will be crucial in shaping the future of AI and its impact on society, both in the USA and globally.
In this article, we explore the growing influence of AI-generated content, focusing on deepfake technology and its impact on media and public trust. Deepfakes have made it harder to tell real from fake, challenging media literacy and raising concerns about misinformation, ethical issues, and societal risks. In a paper published in July 2024 in Devotion, Journal of Community Service, Riski Septiawan examines the technology behind deepfakes, their potential for misuse, and the steps needed to tackle the challenges they create. Here, we reflect on the key points of this study.
Introduction
AI-generated media is changing how people consume and evaluate information. Deepfakes, in particular, produce realistic but fabricated content that can manipulate public opinion and spread falsehoods. Septiawan’s research highlights how this technology can undermine trust and emphasizes the importance of media literacy to help people identify and critically analyze manipulated content. The study also discusses the need for ethical guidelines and regulations to address these challenges and ensure AI is used responsibly.
What is Deepfake Technology?
Deepfake technology uses advanced AI methods like generative adversarial networks (GANs) and convolutional neural networks (CNNs) to create lifelike but artificial images, videos, or audio. Here’s how it works:
Face Swap Deepfakes: AI replaces one person’s face with another in an image or video.
Lip Sync Deepfakes: AI matches lip movements to audio using deep learning models, creating videos that look natural.
Audio Deepfakes: Neural networks replicate a person’s voice, allowing someone to appear to say things they never did or even sing in another person’s voice.
A detailed look at the process behind deepfakes:
Data Collection: It starts with gathering extensive images or videos of the target face. The more diverse and high-quality the data, the better the AI model captures the nuances of the subject’s expressions, angles, and lighting.
Facial Recognition: Algorithms analyze the collected data to identify unique facial landmarks, such as the shape of the eyes, nose, and mouth. This data forms the foundation for creating realistic facial movements and expressions.
Face Replacement: Using GANs, the system replaces the original face with the target face. The generator creates synthetic images, while the discriminator critiques them, refining the output until it mimics reality.
Synchronization: Deep learning models align the replaced face’s lip and eye movements with the original speech or actions. LSTM and transformer models ensure temporal coherence, making the output seamless.
Editing and Finalization: The final step blends the manipulated face into the original media, adjusting details like skin tone, lighting, and shadows to produce content that withstands scrutiny.
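The generator–discriminator interplay at the heart of step 3 can be made concrete with a toy example. The sketch below is a minimal NumPy generative adversarial network trained on one-dimensional data rather than faces: the generator maps random noise to samples, the discriminator scores samples as real or fake, and the two update in opposition. All network sizes, learning rates, and the target distribution are illustrative choices, not anything from Septiawan’s study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def real_batch(n):
    return rng.normal(loc=4.0, scale=1.25, size=(n, 1))

# Generator: an affine map from noise z to a sample (learns mean and spread).
g_w, g_b = np.array([[1.0]]), np.array([0.0])

# Discriminator: a one-hidden-layer MLP with a sigmoid "real" probability.
d_w1 = rng.normal(size=(1, 16)) * 0.5
d_b1 = np.zeros(16)
d_w2 = rng.normal(size=(16, 1)) * 0.5
d_b2 = np.zeros(1)

def disc_forward(x):
    h = np.tanh(x @ d_w1 + d_b1)
    return sigmoid(h @ d_w2 + d_b2), h

lr = 0.05
for step in range(2000):
    # --- Discriminator update: push real scores toward 1, fake toward 0 ---
    z = rng.normal(size=(32, 1))
    fake = z @ g_w + g_b
    for x, target in ((real_batch(32), 1.0), (fake, 0.0)):
        p, h = disc_forward(x)
        grad_logit = (p - target) / len(x)        # d(BCE loss)/d(logit)
        grad_h = grad_logit @ d_w2.T * (1 - h**2)  # backprop through tanh
        d_w2 -= lr * h.T @ grad_logit
        d_b2 -= lr * grad_logit.sum(0)
        d_w1 -= lr * x.T @ grad_h
        d_b1 -= lr * grad_h.sum(0)
    # --- Generator update: push the discriminator's fake scores toward 1 ---
    z = rng.normal(size=(32, 1))
    fake = z @ g_w + g_b
    p, h = disc_forward(fake)
    grad_logit = (p - 1.0) / len(fake)
    grad_h = grad_logit @ d_w2.T * (1 - h**2)
    grad_x = grad_h @ d_w1.T                       # gradient w.r.t. the sample
    g_w -= lr * z.T @ grad_x
    g_b -= lr * grad_x.sum(0)

# After training, the generator turns fresh noise into Gaussian-like samples.
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
```

Real deepfake systems apply the same adversarial loop to high-dimensional image data with deep convolutional networks, but the refinement dynamic – generator proposes, discriminator critiques – is the one shown here.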
Beyond face swaps, deepfake technology enables lip-syncing and audio deepfakes. Lip-sync deepfakes use AI to synchronize lip movements with new audio inputs, allowing actors’ dialogue to be dubbed into multiple languages without losing natural synchronization. Meanwhile, audio deepfakes mimic voices using neural networks, enabling applications such as AI-generated voiceovers and voice modifications.
Positive Applications: Harnessing the Power of Deepfakes
Despite their risks, deepfakes hold immense potential for positive applications:
Entertainment and Creativity: Filmmakers can recreate historical figures, enhance special effects, and localize content for global audiences, pushing the boundaries of storytelling.
Education: Deepfakes enable immersive learning experiences, such as simulating historical events or creating realistic scenarios for medical training.
Healthcare: In therapy, virtual recreations of lost loved ones are being explored as tools for grief counseling. Deepfake avatars also simplify patient-doctor communication by presenting complex medical procedures in relatable formats.
These examples demonstrate that deepfake technology, when used responsibly, can amplify human creativity and problem-solving.
Hyper-Personalization Meets Manipulation
Deepfake technology is revolutionizing personalized marketing. Imagine advertisements tailored to an individual’s preferences—adjusting the celebrity’s voice, accent, or setting to suit a viewer’s cultural context. While this enhances consumer engagement, it also opens the door to darker applications.
How Deepfakes are Shaping Celebrity Realities
Many well-known deepfake examples feature public figures and celebrities, often demonstrating the technology in a seemingly harmless way. For example, a video of soccer star David Beckham speaking nine different languages showcases the impressive capabilities of deepfake technology, while comedian Jordan Peele created a “public service announcement” by overlaying his mouth and jaw onto former president Barack Obama using accessible apps. Additionally, several deepfake videos on TikTok convincingly portray actor Tom Cruise. The technology itself is legal, but the content it generates can cross legal boundaries.
Although these examples may seem harmless or entertaining, they show how easy it is to use this technology. When applied without consent, deepfakes can invade privacy and sometimes violate laws related to data protection or personal rights.
Ethical Dilemmas: Creativity or Exploitation?
The duality of deepfake technology becomes evident when its ethical challenges are scrutinized. While it empowers filmmakers, educators, and marketers, it also provides tools for exploitation. The same tools used to create relatable content can fabricate fake, harmful media that damages reputations or spreads misinformation, raising questions about consent and accountability. One of the gravest concerns is the creation of non-consensual explicit content, disproportionately targeting women. Such violations not only harm individuals but also highlight the lack of safeguards to prevent misuse.
Deepfakes also threaten the integrity of information ecosystems. In an age of fake news, deepfakes add a potent weapon to the arsenal of disinformation campaigns. Videos of political figures making inflammatory statements or fabricated footage of global events can manipulate public opinion, incite unrest, and undermine trust in institutions.
Beyond the media sphere, disturbingly, scammers have found deepfake technology to be a potent tool for targeting vulnerable individuals, particularly the elderly, by creating hyper-realistic audio or video manipulations that mimic the voices and appearances of trusted family members or friends. Using deepfake audio, criminals can place convincing phone calls, imitating a loved one’s voice, and fabricate urgent scenarios – such as requesting financial help in an emergency or posing as a grandchild in distress. The same technology can generate fake video calls, further solidifying the illusion and exploiting emotional trust. This tactic not only makes it easier to deceive the victim but also bypasses traditional verification methods, as the impersonation appears authentic. Such scams highlight the urgent need for public awareness and technological safeguards against deepfake-enabled fraud.
The question of accountability looms large. Should the creators of deepfake tools bear responsibility for their misuse? While some advocate for strict regulations, others argue for self-regulation within the industry. Septiawan’s research emphasizes the urgent need for a balanced approach that protects individuals without stifling innovation.
Societal Implications: The Erosion of Trust in Media
Deepfake technology has profound implications for society, particularly in its ability to erode trust. The mere existence of deepfakes fuels skepticism, giving rise to the “liar’s dividend”—the phenomenon where authentic content can be dismissed as fake. This undermines evidence-based discourse, making it easier for bad actors to deny accountability.
The burden of verifying authenticity falls heavily on journalists, fact-checkers, and media platforms, straining resources and complicating the dissemination of reliable information. Furthermore, constant exposure to fabricated realities affects public perception, fostering cynicism and confusion.
Septiawan’s research highlights the psychological toll of deepfakes on individuals and communities. As people struggle to discern fact from fiction, the resulting mistrust can divide societies and exacerbate conflicts.
Legal and Regulatory Responses
The legal landscape surrounding deepfakes remains fragmented. While some countries have criminalized malicious deepfakes, enforcement remains a challenge, especially in cross-border contexts. Laws often lag behind the rapid evolution of AI, leaving gaps that bad actors exploit.
Tech companies are stepping in to address these gaps. Platforms like Facebook and Twitter have implemented policies to identify and remove harmful content, while companies like Microsoft are developing tools to detect AI-generated manipulations. Yet, these efforts are reactive, highlighting the need for proactive solutions like embedding digital watermarks in authentic media and educating users about deepfake detection.
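One proactive idea mentioned above is embedding digital watermarks in authentic media. The simplest classroom version is least-significant-bit (LSB) embedding, sketched below: a short message is hidden in the lowest bit of each pixel, changing the image imperceptibly. This is purely illustrative – LSB marks are trivially destroyed by re-encoding, which is why production provenance efforts favor signed metadata and more robust watermarking – and the function names and sample message are invented for this sketch.

```python
import numpy as np

def embed_watermark(image, message):
    """Hide a UTF-8 message in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = image.flatten().copy()
    if len(bits) > len(flat):
        raise ValueError("image too small for message")
    # Clear each target pixel's lowest bit, then write one message bit into it.
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image, length):
    """Recover a message of `length` bytes from the pixels' lowest bits."""
    bits = image.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes().decode()

# Usage: watermark a random 8-bit grayscale "image", then read the mark back.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(img, "made-with-AI")
print(extract_watermark(marked, len("made-with-AI")))  # made-with-AI
```

Because only the lowest bit of each pixel changes, no pixel value moves by more than 1, so the marked image is visually identical to the original while still carrying a machine-readable provenance tag.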
Why Media Literacy Matters
As deepfakes become more common, the ability to recognize manipulated media is critical. Septiawan stresses that media literacy is essential for helping people evaluate what they see and hear. For instance, Meta has introduced “Made with AI” tags to label AI-generated content, aiming to improve transparency. However, such measures need broader education efforts to be effective.
Data literacy is another important area. Understanding how AI works and how data is used can help hold creators accountable and prevent the misuse of this technology.
Conclusion
The rapid growth of deepfake technology highlights the need for stronger media and data literacy, clear ethical standards, and regulations to manage its risks. Deepfakes have the power to influence opinions, erode trust, and harm individuals, making it essential to act quickly to address these challenges. By promoting awareness and accountability, society can better manage the impacts of AI-generated media while also benefiting from its positive applications.
Reference
Septiawan, Riski. “Critical Analysis of AI-Produced Media: A Study of the Implications of Deepfake Technology.” Devotion Journal of Community Service 5, no. 7 (July 2024): 735–741. https://doi.org/10.59188/devotion.v5i7.747.