Joe Rogan has been all over the news headlines recently. He played a role in securing the victory of Donald Trump in the recent U.S. presidential election with his interviews with Donald Trump, J.D. Vance and Elon Musk, and his last-minute perfectly timed endorsement of Trump.
Many people love Rogan for this, and many hate him. But this is nothing new; Joe Rogan has had his share of controversy over the years. Here I’ll focus on his interviews with scientists and technologists, which should be less controversial than politics. Should be.
One of Joe Rogan’s podcast guests, Roger Penrose, is as respectable as scientists get. He won the Nobel Prize in Physics 2020 “for the discovery that black hole formation is a robust prediction of the general theory of relativity.”
I’ve been reading a new biography about him titled The Impossible Man: Roger Penrose and the Cost of Genius by Patchen Barss (November 2024). Penrose appeared on Joe Rogan’s podcast in 2018. “I had this interview when I was in the States with this chap called Joe Rogan,” says Penrose as reported in the book.
I recently watched the interview again; Penrose is always worth listening to, and Rogan asks interesting questions.
Patchen Barss accuses Rogan in the biography of “providing a high-profile platform to pseudoscientists, conspiracy theorists, and other perpetrators of misinformation. Rogan mixes credible scientists in with crackpots, making it difficult for people to know which is which.” Barss sources this claim from an article published by the Office for Science and Society at McGill University titled Science vs. Joe Rogan.
These people have at least three things in common: first, they are all reputable scientists or technologists. Second, they are all interested in the Big Questions to which we all would like to hear answers. Third, at one or another time, they have all expressed ideas that go against the scientific (or cultural and political) consensus.
Sure, Rogan has interviewed people with less firmly established reputations as well, including some whom critics like to dismiss as crackpots or pseudoscientists.
Some people who call themselves scientists have accused Joe Rogan of promoting misinformation on COVID-19 and vaccines, platforming fringe theories and giving equal footing to pseudoscience alongside genuine science, giving airtime to conspiracy theories, and conducting interviews with scientists and technologists in a casual manner without pushback or fact-checking.
They have expressed concern about how Rogan could shape public opinion, especially among younger listeners or those who might take what they hear on his show as authoritative. They fear that this could lead to a general distrust in science or skepticism towards mainstream scientific consensus.
Some of them find Rogan’s approach to be anti-science, particularly when he mocks or dismisses scientific consensus or when he engages in or encourages scientific debates on fringe theories.
No, it is scientific dogma that has a problem
I think science itself has no problem with Joe Rogan. It is scientific dogmatists that have a problem. By scientific dogmatists I mean the zealots who want to protect the scientific establishment from the disruptive spirit of inquiry.
I have a problem with the dogmatists who have a problem with Joe Rogan. Open inquiry must be defended against scientific dogmatism. The soul of science is freedom to question theories and assumptions, and this must be protected against censorship and excessive backlash.
‘Misinformation’ and ‘pseudoscience’ are in the eye of the beholder. Often, ‘misinformation’ is information that the authorities don’t want the people to know. Often ‘pseudoscience’ is science that contradicts the scientific establishment and its paradigms.
I said often, which doesn’t mean always. But ‘often’ is enough to give Joe Rogan the benefit of the doubt, and to praise his excellent work bringing science and technology closer to the little people like us.
Rogan brings a broad spectrum of views to the table, including those from scientists with different opinions, which can lead to a better understanding of complex issues. This can be beneficial in fields where there’s active debate or where the science is evolving.
By discussing science in a casual, accessible manner, Rogan makes science more approachable for the average person. This can demystify science, making it less intimidating and more integrated into everyday conversation.
Rogan’s style encourages listeners to question information, which can be a double-edged sword, but does promote skepticism and critical analysis. This can be useful in encouraging people to look into scientific claims independently, fostering a culture of inquiry.
Even more important is Rogan’s willingness to tackle controversial topics, and give a platform to new science or technology that the public is not yet aware of. Even if not all the information presented is correct, his ability to influence culture can stimulate public enthusiasm and support for science and emerging technologies.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
In June 1967, the Beatles premiered a glorious new song: All you need is love. The performance was the UK’s contribution to what was the world’s first global satellite television broadcast, and was simultaneously watched by over 400 million people in 25 different countries. The broadcast occurred during what became known as the Summer of Love, and the song became a powerful anthem of flower power.
The Beatles’ manager Brian Epstein had described the performance as the band’s finest moment, but it turned out that singing “all you need is love” wasn’t quite enough to bring about a world of peace and harmony.
Almost exactly 50 years later, a group of eight researchers at Google were searching for a title for an article they were about to publish. They settled on “Attention is all you need” – the title being the brainchild of the only Briton on the team, Llion Jones, who had grown up in north Wales, not far from Liverpool, the home of the Beatles. The article has attained legendary status within the global AI community, for its introduction of the transformer technology that underpins breakthrough AI initiatives such as ChatGPT.
Despite omitting architectural features that were previously thought to be essential for many text-based processing tasks, transformers excelled in these same tasks. The key innovation, which was to pay special attention to whichever parts of the input appeared most salient, turned out to give these AI systems a strong competitive advantage. The Attention is all you need paper correctly predicted that transformers could handle not just text but also other kinds of data, including pictures and sounds.
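The salience-weighting mechanism at the heart of the paper can be sketched in a few lines of NumPy. This is a minimal illustration of scaled dot-product attention only, not the full transformer architecture (which adds multiple attention heads, learned projection matrices, positional encodings, and more):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from 'Attention Is All You Need': each query
    scores every key, and the scores (after a softmax) weight the
    values, so the model 'attends' most to the most salient inputs."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of queries to keys
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

# Toy example: 3 input positions, 4-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(X, X, X)  # self-attention
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

In a real transformer, Q, K and V are produced by learned linear projections of the input, and many such attention operations run in parallel.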
How far might transformers take AI? A third claim has increasingly been heard: “Scale is all you need”. Feed transformer systems ever larger amounts of data, and provide them with ever more powerful computer chips to crunch all that data into models with ever greater numbers of parameters, and there are no limits to the degree of intelligence that can result. The “scale is all you need” hypothesis expects that doing more of the same will yield AIs with fully general reasoning capabilities.
In this context, I want to examine yet another “all you need” hypothesis. It’s a hypothesis that is already changing investment decisions and personal career trajectories. It’s the hypothesis that, whatever major problem you hope to solve, the best way to solve it is to start by creating general intelligence.
In this way of thinking, AI is all you need. An AI with god-like abilities will be able to race ahead of slow-witted humans to solve all the fundamental problems of science, medicine, and human existence.
Dario Amodei, CEO of Anthropic, states it as follows: “I think that most people are underestimating just how radical the upside of AI could be… My basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years”.
Amodei gives some examples of the discoveries that AI-enabled science could make:
“Design of better computational tools like AlphaFold and AlphaProteo”
“More efficient and selective CRISPR” (for gene-editing)
“More advanced cell therapies”
“Materials science and miniaturization breakthroughs leading to better implanted devices”
“Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue”
“Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases”.
Who wouldn’t like such a vision?
According to this logic, spending effort in the next few years to create AI with these capabilities is a better investment than spending the same effort to improve biology and medicine here and now.
Funding is what marshals effort, and any funds at someone’s disposal should, it appears, be directed towards improving AI, rather than towards companies or foundations that are seeking to improve biology or medicine. Right?
A two-step mission statement
Back in February 2015, Demis Hassabis was relatively unknown. There had been a bit of press about the purchase of his company, DeepMind, by Google, for £400 million, but most people had little conception of what the company would accomplish in the following years.
DeepMind’s two-step mission statement was characteristically bold: first, solve intelligence; second, use intelligence to solve everything else. You can also read, on a Reddit page from nearly ten years ago, some fascinatingly scathing comments about that mission statement:
“Ridiculous and poorly-defined goals”
“FFS [what] a mission statement [for] a company”
“‘Fundamentally solve intelligence’ in the linked screenshot above is a whole load of nonsense”
“I don’t even think we have a working definition for ‘intelligence’ yet. We don’t even know how it works in humans… How can we hope to recreate it before knowing what it is?”
Once intelligence has been “fundamentally solved”, it should be relatively straightforward to solve climate change, economic distribution, cancer, dementia, and aging, right?
After all, given an AI model that can correctly predict how a long string of amino acids will fold up as a protein in three dimensions, won’t a scaled-up version of that model be able to predict other interactions between biochemical molecules – and, indeed, to predict how biological cells will respond to all kinds of proposed interventions?
The data bottleneck
One person striking a note of caution against exuberant forecasts of rapid additional progress in AI-enabled medicine was someone who shared the 2024 Nobel Prize in Chemistry with Demis Hassabis: David Baker of the University of Washington.
In an article published in MIT Technology Review shortly after the Nobel Prize, Baker pointed out that “AI needs masses of high-quality data to be useful for science, and databases containing that sort of data are rare”.
Indeed, the stunning success of DeepMind’s AlphaFold AI was fundamentally dependent on prior decades of painstaking work by numerous scientists to assemble what is known as PDB – the protein data bank.
The third of the joint winners, John Jumper of DeepMind, acknowledged this dependency in a press conference after the prize was announced. Jumper said, “I also want to really thank the giants on whose shoulders we stand, I think the entire experimental community, the people that developed the ability to measure protein structures, especially to Helen Berman and other pioneers of the Protein Data Bank, the PDB, who had the foresight to put these data together to make it available”.
Helen Berman had pioneered the PDB from 1971. As she graciously commented in a recent interview, “I am a very lucky person to have had an idea as a student, pursued that idea for more than 50 years, and then seen brand new science emerge for which three people have won this year’s Nobel Prize. It is really gratifying”.
Remarkably, Berman’s interest in protein folding predates even the Beatles song. In an online living history memoir written in 2012, Berman notes “In 1966 …I became fascinated by the world of protein folding. As part of my Ph.D. qualifier, … I proposed to perform structure-based sequence comparisons of known proteins…”.
Progress in determining protein structures was slow for a long time, before becoming faster. This slide from a 2009 presentation by Berman, graphing the growth in the total number of proteins documented in the PDB, will look familiar to anyone acquainted with singularitarian ideas:
In the MIT Technology Review article, ‘A data bottleneck is holding AI science back’, David Baker pointed out that “If the data that is fed into AI models is not good, the outcomes won’t be dazzling either. Garbage in, garbage out”.
The subtitle of that article says it straightforwardly: “AI’s usefulness for scientific discovery will be stunted without high-quality data”.
So, we can forget “AI is all we need”. Before we can develop an AI that can solve aging for us, we will need to obtain suitable data on which that AI can be trained. We’ll need the equivalent of PDB for all the interventions that might remove or repair the low-level biological damage that we call aging.
Unless, that is, the AI has a very special kind of superintelligence, which allows it to reach conclusions even in the absence of adequate data. Let’s turn to that option next.
AI Zero?
The AI which achieved worldwide renown in March 2016 by defeating human Go superstar Lee Sedol, namely AlphaGo, gained that ability by being able to study around 160,000 games played between expert-level human Go players. The design of that version of the AI utterly depended on learning which moves tended to be selected by the best human players in a wide variety of situations.
AlphaGo’s success against Lee Sedol was rightly celebrated, but what happened in the following year was arguably even more startling. As reported in an article in Nature in October 2017, a new version of the AI, dubbed “AlphaGo Zero”, was given no data from human games; nor did it receive any human feedback on moves it suggested. Instead, it started tabula rasa, knowing only the rules of the game, before proceeding to play itself 4.9 million times in just three days.
AlphaGo Zero’s new self-play algorithms proved sufficient to reach higher levels than the earlier version (sometimes called “AlphaGo Lee”) that played Lee Sedol. When AlphaGo Zero played 100 games against AlphaGo Lee, it won every single game.
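The tabula-rasa self-play idea can be illustrated with a toy example. The sketch below is not AlphaGo Zero’s actual algorithm (which combines deep neural networks with Monte Carlo tree search); it is a deliberately tiny analogue, using tabular Monte Carlo learning on the game of Nim, where a learner that knows only the rules improves purely by playing itself:

```python
import random
from collections import defaultdict

# Toy analogue of tabula-rasa self-play: the learner knows only the
# rules of Nim (take 1-3 stones; whoever takes the last stone wins)
# and improves purely by playing against itself. No human games,
# no human feedback: just self-play, as with AlphaGo Zero.

def train(pile=10, episodes=200_000, alpha=0.1, epsilon=0.2):
    random.seed(0)
    Q = defaultdict(float)  # Q[(stones_left, take)] -> value for the mover
    for _ in range(episodes):
        state, history = pile, []
        while state > 0:
            actions = list(range(1, min(3, state) + 1))
            if random.random() < epsilon:
                take = random.choice(actions)                     # explore
            else:
                take = max(actions, key=lambda a: Q[(state, a)])  # exploit
            history.append((state, take))
            state -= take
        # The player who made the final move took the last stone and won.
        reward = 1.0
        for s, a in reversed(history):  # outcomes alternate between players
            Q[(s, a)] += alpha * (reward - Q[(s, a)])
            reward = -reward
    return Q

Q = train()
best = max(range(1, 4), key=lambda a: Q[(10, a)])
# Nim theory says the winning move from 10 stones is to take 2, leaving
# the opponent a multiple of 4; enough self-play discovers this unaided.
print(best)
```

The learner never sees a single expert game, yet its value table comes to favour the moves that leave the opponent a multiple of four stones, which is the known optimal strategy.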
A similar pattern can be observed in the progress of AIs that process text. The trend is to require less and less explicit human guidance.
Consider AIs that translate between two languages. From the 1950s onward, designers of these systems provided ever-larger numbers of rules about grammar and sentence structure – including information about exceptions to the rules. Later systems depended on AIs observing, by themselves, statistical connections in various matching sets of text – such as the official translations of materials from the European Parliament, the Canadian Parliament, and the United Nations.
Managers noticed that the statisticians tended to produce better results than linguists who toiled to document every jot and tittle of grammatical variations. Infamously, Frederick Jelinek, a lead researcher at IBM, remarked that “Every time I fire a linguist, the performance of the speech recognizer goes up”. Performance jumped up again with the adoption of deep neural networks from 2012 onward, with the translations now being accurate not only at the word-for-word level but also at the sentence-for-sentence level.
A final significant jump came when transformer-based AIs were adopted. (The word “transformer” had been chosen to reflect the ability of these systems to transform text from one language into another.) As mentioned earlier, transformers are powerful because their algorithms can work out the strengths of connections between different parts of text input by themselves; they don’t need these connections to be pointed out by humans.
Could something similar happen with medical AIs of the future? Could such an AI find sufficient reliable information in an ocean of less reliable data, and therefore propose what steps should be taken to solve aging?
AI omniscience?
To recap: AlphaGo Lee needed detailed guidance from humans before it could improve itself to superhuman level; but its successor, AlphaGo Zero, attained that level (and exceeded it) simply by the power of its own vast intelligence.
Might it be similar with medical AI? Today’s AI medical systems are constrained by the extent of data, but might a future AI be able to work out all the principles of biology (including biology in which there is no aging) by starting tabula rasa (with a blank slate)?
All You Need Is Love claimed that “there’s nothing you can know that isn’t known”; the ‘AI is all you need’ approach would mean there’s nothing that can be known that the AI doesn’t know. Effectively, the AI would be omniscient.
Well, count me sceptical. It’s my view that some things need to be discovered, rather than simply deduced.
For example, why are there eight planets in our solar system, rather than thirteen? No principles of astronomy, by themselves, could determine that answer. Instead, the configuration of our solar system depends on some brute facts about the initial conditions under which the solar system formed. The only way to know the number of planets is to count them.
Again, why has life on our planet adopted a particular coding scheme, in which specific triplets of the nucleotides A, T, C, and G result in specific amino acids being formed? Why did Homo sapiens lose the ability to synthesize vitamin C, along with other genetic features that would be useful to us? Why are particular genes found on specific chromosomes? The only way to know which genes are located where is to look and see. No “AI Zero” is going to discover the answer by meditating in a void.
Therefore, I do not accept that “AI is all you need”. Critical data is also needed.
This need is correctly recognized in the article Machines of Loving Grace by Dario Amodei, which I’ve already quoted. Amodei includes in the article “a list of factors that limit or are complementary to intelligence”. One of these items is “Need for data”.
Amodei comments: “Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent – other than perhaps by speeding up the construction of a bigger accelerator.”
AI as Principal Investigator?
Amodei offers a bold solution to this lack of data: “The right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on.”
Amodei adds: “It is by speeding up the whole research process that AI can truly accelerate biology.”
He continues: “I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. …I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.”
Amodei highlights the power of intelligence to transcend the limitations of its data: “You might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems implausible to me – I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world”. Replace the “large group of really smart people” by an artificial superintelligence, and Amodei expects progress in science to rocket forward.
It’s an attractive vision, and I urge everyone to read Amodei’s entire essay carefully. (It covers many more topics than I can address in this article.)
But in case anyone is inclined to deprioritize existing research into promising lines of rejuvenation biotechnology, I have four remaining concerns: three negative and one strongly positive.
Three concerns and a huge opportunity
My first concern is that the pace of progress in AI capabilities will significantly slow down. For example, the data scaling laws may hit an impasse, so that applying more data to train new AI systems will fail to create the kind of superintelligence expected.
Personally I think that such a “wall” is unlikely, especially since AI developers have many other ideas in mind for how AI could be improved. But the possibility needs to be considered.
Second, it’s possible that AI capabilities will continue to surge ahead, but that the resulting AI systems become involved in catastrophic harm to human wellbeing. In this scenario, rather than the AI curing you and me of a fatal condition – aging – it could cause us to die as a side-effect of a bad configuration, bad connectivity to fragile global infrastructure, an alien-like bug in its deep thinking processes, or simple misuse by bad actors (or naïve actors).
The leaders of the corporations which are trying to create artificial superintelligence – people like Demis Hassabis, Dario Amodei, Sam Altman, Elon Musk, Ben Goertzel, and a number of Chinese counterparts – say they are well aware of these dangers, and are taking due care to follow appropriate safety processes. But creating artificial superintelligence is an intensely competitive race, and that risks corners being cut.
My third concern is, in a way, the mirror image of the second: the world’s major powers may reach an agreement to pause or restrict the development of artificial superintelligence. This agreement, with appropriate monitoring and enforcement mechanisms, would have the same effect as in the first concern above: AI progress hits a wall. But this time, it would be a wall imposed by regulations, rather than one intrinsic to the engineering of AI.
Some critics have responded that the chances are very slim for such an agreement to be reached and adopted. However, I disagree. That’s on account of both a stick and a carrot.
The stick is the growing public awareness of the catastrophic risks that new generations of AI bring. (That awareness is still on the slow part of the exponential growth curve, but may well accelerate, especially if there is a scandalous disaster from existing AI systems, something like an AI Chernobyl.)
The carrot is a clearer understanding that all the benefits we want from artificial superintelligence can also be obtained from an AI with humbler powers – an AI that:
Is only modestly more capable than today’s best AIs
Lacks any possibility to develop autonomy, sentience, or independent volition
Will remain a passive, safe, but incredibly useful tool.
In a moment, I’ll say more about this huge opportunity. But first, let me interject an analogy about the choices facing humanity, as we contemplate how we might manage AI.
Peaceful progress or violent overthrow?
“Tear down the barricades!”
“Expropriate the expropriators!”
“Lock up the élites!”
“String up the capitalists!”
“Overthrow the ruling class!”
Such are the calls of revolutionaries in a hurry. However, the lesson of history is that violent revolutions tend to end up “devouring their own children” – to quote a phrase spoken by Jacques Mallet du Pan (referring to the French Revolution sending its original leaders to the guillotine) and also by former Hitler loyalist Ernst Röhm.
Similar remarks could have been uttered by many of the one-time supporters of Vladimir Lenin or Joseph Stalin, who subsequently found themselves denounced and subject to show trials.
However, the saying is not entirely correct. Some revolutions avoid subsequent internal bloodbaths: consider the American War of Independence, and the Glorious Revolution of 1688 in England.
When revolutionaries uphold principle ahead of power-seeking, maintain a clear grip on reality (rather than becoming lost in self-deception), and continue to respect wise process (rather than allowing dictatorial leaders to do whatever they please), a revolution can lead to sustained progress with increased human flourishing.
Now consider the difference between what can be called “democratic socialists” and “Marxist-Leninists”. The former highlight ways in which the plight of the working class can be alleviated, stage by stage, through gradual societal reform. The latter lose patience with such a painstaking approach, and unleash a host of furies.
In case it’s not clear, I’m on the side of the democratic socialists, rather than the would-be revolutionaries who make themselves into gods and absolute arbiters.
For how humanity chooses to develop and deploy AI, I see the same choice between “harness accelerationists” and “absolute accelerationists”.
Harness accelerationists wish to apply steering and brakes, as well as pressing firmly on the throttle when needed.
Absolute accelerationists are happy to take their chances with whatever kind of AI emerges from a fast and furious development process. Indeed, the absolute accelerationists want to tear down regulation, lock up safety activists, and overthrow what they see as the mediocrity of existing international institutions.
Once again, in case it’s not clear, I’m on the side of harnessing acceleration. (Anyone still on X aka Twitter can see the “h/acc” label in my name on that platform.)
Harnessing requires more skill – more finesse – than keeping your foot pressed hard to the floor. I understand why absolute accelerationists find their approach psychologically comforting. It’s the same appeal as the Marxist promise that the victory of the working class is inevitable. But I see such choices as being paths toward humanitarian catastrophe.
Instead, we can proceed quickly to solving aging, without awaiting the emergence of a hopefully benevolent god-like AI.
Solving aging – without superintelligence
Above, I promised three concerns and one huge opportunity. The opportunity is that it’s pretty straightforward to solve aging, without waiting for a potentially catastrophically dangerous artificial superintelligence. There is low-hanging fruit that isn’t being picked – in part because funding for such projects is being diverted instead to AI startups.
Aging occurs because the body’s damage-repair mechanisms stop working. Our metabolism runs through countless biochemical interactions, and low-level biological damage arises as a natural consequence – due to injuries inflicted by the environment, bad lifestyle choices, the inevitable side-effects even of good lifestyle choices, or (perhaps) because of programmed obsolescence. When we are young, lots of that damage is routinely repaired or replaced soon after it occurs, but these replacement and repair mechanisms lose their effectiveness over time. The consequence is that our bodies become more prone to all sorts of disease and infirmity. That’s aging.
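The dynamic described above – a roughly steady inflow of new damage, met by repair machinery whose effectiveness declines with age – can be captured in a deliberately simple toy model. All the rates below are made-up illustrative numbers, not biological measurements:

```python
# Toy model of the damage/repair dynamic described above.
# All rates are made-up illustrative numbers, not biological data.

def simulate(years=90, damage_rate=1.0, repair_eff0=0.99, repair_decline=0.01):
    """Each year new damage arises; a fraction of total damage is
    repaired, but repair efficiency itself declines with age."""
    damage, trajectory = 0.0, []
    for year in range(years):
        repair_eff = max(0.0, repair_eff0 - repair_decline * year)
        damage += damage_rate          # new damage from metabolism and environment
        damage -= repair_eff * damage  # imperfect, age-declining repair
        trajectory.append(damage)
    return trajectory

aging = simulate()
no_decline = simulate(repair_decline=0.0)   # repair stays youthful
print(round(aging[30], 2), round(aging[60], 2), round(aging[89], 2))
print(round(no_decline[89], 2))  # stays near zero when repair holds up
```

The point of the sketch: while repair is near-complete, damage stays negligible; as repair efficiency erodes, damage accumulates at an accelerating pace. A hypothetical intervention that merely keeps repair efficiency at its youthful level keeps lifetime damage low – which is the damage-repair strategy in a nutshell.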
The most promising path to solving aging is to comprehensively reinforce or complement these damage-repair mechanisms. The low-hanging fruit is that we have a long list of ways this might be achieved:
By taking inspiration from various animal species in which at least some of the damage-repair mechanisms are better than in humans
By understanding what’s different about the damage-repair mechanisms in ‘human superagers’
By designing and applying new interventions at the biotech or nanotech levels.
To be clear, this does not mean that we have to understand all of human biological metabolism. That’s horrendously complicated, with numerous side-effects. Nor do we even need to understand all the mechanisms whereby damage accumulates. Instead, we just need to observe, as engineers, what happens when new damage-repair mechanisms are applied in various animals.
These mechanisms include senolytics that clean up senescent cells (sometimes called “zombie cells”), extending telomeres at the ends of chromosomes, reversing some of the epigenetic alterations that accumulate on our DNA, introducing specially programmed new stem cells, nanoparticles that can break up accumulated plaques and tangles, re-energising the mitochondria within our cells – and much more.
In each case, some useful research is being done on the viability of introducing these repair mechanisms. But nothing like enough.
We especially need tests of the long-term effects of damage-repair mechanisms, especially applied in combination. These tests can determine something that even an artificial superintelligence would find difficult to predict by meditating in a void: which damage-repair interventions will positively synergize with each other, and which ones have antagonistic effects.
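The synergy-versus-antagonism question can be made concrete with a simple multiplicative baseline (loosely analogous to the Bliss independence model from drug-combination studies). All numbers below are hypothetical placeholders, not results from any real trial:

```python
# Sketch of how combination experiments can flag synergy or antagonism,
# using a simple multiplicative baseline (loosely analogous to Bliss
# independence in drug-combination studies). All numbers are
# hypothetical placeholders, not results from any real trial.

def expected_independent_gain(gain_a, gain_b):
    """Combined effect predicted if two interventions act independently,
    with gains expressed as fractional lifespan extension (0.10 = +10%)."""
    return (1 + gain_a) * (1 + gain_b) - 1

def classify(observed, gain_a, gain_b, tolerance=0.02):
    expected = expected_independent_gain(gain_a, gain_b)
    if observed > expected + tolerance:
        return "synergistic"
    if observed < expected - tolerance:
        return "antagonistic"
    return "independent"

# Two hypothetical interventions, each extending lifespan by 10% alone;
# independence predicts roughly a 21% combined extension.
print(classify(observed=0.30, gain_a=0.10, gain_b=0.10))  # synergistic
print(classify(observed=0.12, gain_a=0.10, gain_b=0.10))  # antagonistic
```

Real combination trials require far more statistical care – survival curves, adequate sample sizes, corrections for multiple comparisons – which is exactly why well-funded long-term experiments matter.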
Tests like these are being pursued by one organisation where I need to declare an interest: the Longevity Escape Velocity Foundation (LEVF), where I have a role on the leadership team, and whose underlying ideas I have supported for nearly 20 years, since first coming across them in meetings of the forerunner of London Futurists.
LEVF is carrying out a number of extended projects on large numbers of mice, involving combining treatments that have already been proven to individually extend the lifespan of mice treated from middle age. Interim results of the first such project, RMR1, can be reviewed here (RMR = Robust Mouse Rejuvenation), and plans for the second one, RMR2, have been posted here.
Rather cheekily, may I suggest that the 1967 slogan of the Beatles, All you need is love, got two letters wrong in the final word?
Two scenarios for trying to solve aging
To conclude, I envision two competing scenarios ahead, for how aging should be solved:
An “AI first” strategy, in which important research into rejuvenation biotechnology is starved of funding, with money being preferentially allocated to general AI initiatives whose outcomes remain deeply uncertain.
A “damage-repair research now” strategy, in which projects such as RMR2 receive ample funding to proceed at pace (and, even better, in multiple different versions in parallel, including in animals larger than mice), with the data produced by such experiments then being available to train AIs which can complement the ingenuity of pioneering human researchers.
What’s your pick?
Why do some memories linger for a lifetime while others vanish soon after forming? Neuroscientists have long grappled with this puzzle, especially given the brain’s constant renewal of its molecular components. A recent breakthrough study has illuminated a key aspect of this phenomenon: the interaction between two proteins, KIBRA and PKMζ, provides a scaffolding that stabilizes memory. This remarkable discovery not only explains how long-term memory persists but also opens potential avenues for treating memory-related disorders like Alzheimer’s disease.
The Challenge of Long-Term Memory Retention
Memory is stored in the brain through the strengthening of synaptic connections, a process known as long-term potentiation (LTP). However, the machinery that sustains these connections, from proteins to receptors, is continually replaced. This turnover creates a paradox: how can memories persist for decades if the molecular components of the brain are so short-lived? Decades ago, Francis Crick suggested that stable interactions between molecules might hold the key to this mystery. Building on this idea, researchers focused on KIBRA, a scaffolding protein tied to memory performance, and PKMζ, a kinase known for its role in sustaining synaptic changes.
KIBRA and PKMζ: The Anchors of Memory
KIBRA and PKMζ work together to maintain the stability of synaptic connections crucial for memory. PKMζ, an atypical kinase, is persistently active, bypassing the need for external activation, and plays a vital role in reinforcing the connections at memory-encoded synapses. KIBRA serves as a docking station for PKMζ, anchoring it at synapses to ensure that the structural changes necessary for memory endure. Without KIBRA’s anchoring action, PKMζ cannot localize effectively, causing the connections to weaken and memories to fade.
Experiments highlighted how synaptic activity intensifies the interaction between KIBRA and PKMζ. Using cutting-edge proximity ligation assays, researchers observed that strong neuronal stimulation led to the accumulation of KIBRA-PKMζ complexes at active synapses. These complexes were particularly concentrated in regions of the brain linked to learning and memory, such as the hippocampus.
Testing the Limits of the KIBRA-PKMζ Partnership
The study went deeper by introducing antagonists that blocked the interaction between KIBRA and PKMζ. The consequences were striking: blocking this interaction disrupted established LTP while leaving basal synaptic transmission unaffected. This finding underlined the selective importance of the KIBRA-PKMζ complex in maintaining memory-associated synaptic changes.
To explore how this disruption impacted behavior, the researchers tested mice trained in spatial memory tasks. Mice conditioned to avoid a specific area of their environment lost this memory after the introduction of an inhibitor that blocked KIBRA-PKMζ coupling. However, their ability to learn and form new memories remained intact. This specificity highlights the unique role of KIBRA and PKMζ in stabilizing existing memories, distinguishing it from the processes involved in learning.
A Molecular Mechanism That Defies Turnover
One of the study’s most intriguing findings was how the KIBRA-PKMζ interaction enables memory to persist despite the turnover of individual molecules. The research confirmed that while the proteins themselves are replaced within hours or days, the interaction between them is continuously re-established at memory-relevant synapses. This dynamic process ensures that the “scaffold” supporting memory remains intact even as its components are renewed.
The researchers proposed a model of “persistent synaptic tagging,” where KIBRA functions as a tag that attracts newly synthesized PKMζ molecules to the right locations. This process sustains the structural integrity of memory-encoded synapses, making it possible for memories to outlast the lifespan of their molecular building blocks.
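This tagging model can be illustrated with a toy simulation (purely conceptual: the molecule counts, lifetimes, and recruitment rule below are invented for illustration, not measured values). A tagged synapse keeps recruiting fresh molecules as old ones degrade, so the population persists; an untagged one simply decays:

```python
import random

def simulate_synapse(tagged, steps=100, lifetime=10, pool=5, seed=0):
    """Toy model of persistent synaptic tagging.

    Each PKMzeta molecule at a synapse survives ~`lifetime` steps before
    turnover removes it. If the synapse is `tagged` (KIBRA present), the
    tag recruits newly synthesized molecules to replace the degraded ones.
    Returns the number of molecules present at each step.
    """
    rng = random.Random(seed)
    molecules = [lifetime] * pool   # remaining lifetime of each molecule
    history = []
    for _ in range(steps):
        # Turnover: each molecule ages; fully aged molecules are degraded.
        molecules = [t - 1 for t in molecules if t > 1]
        if tagged:
            # The KIBRA "tag" recruits fresh PKMzeta to refill the synapse.
            while len(molecules) < pool:
                molecules.append(lifetime + rng.randint(-2, 2))
        history.append(len(molecules))
    return history

tagged = simulate_synapse(tagged=True)
untagged = simulate_synapse(tagged=False)
# The tagged synapse keeps a full complement even though no individual
# molecule survives the whole run; the untagged one decays to zero.
```

The point of the sketch is the same as the paper's: the scaffold outlives every one of its parts.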
Implications for Memory Disorders and Treatment
The discovery of the KIBRA-PKMζ interaction has profound implications for understanding and treating memory-related conditions. If this partnership is essential for maintaining memory, then therapies aimed at enhancing or mimicking this interaction could potentially combat age-related memory decline or neurodegenerative diseases like Alzheimer’s.
One promising avenue involves developing drugs that stabilize or amplify the KIBRA-PKMζ connection, strengthening synaptic changes and improving memory retention. Conversely, selectively disrupting this interaction could help erase traumatic memories, offering hope for conditions such as post-traumatic stress disorder (PTSD). The study also raises the possibility of targeted therapies that activate PKMζ in specific brain regions, enhancing memory resilience in the early stages of dementia.
A Glimpse Into the Future of Memory Research
The partnership between KIBRA and PKMζ represents a significant leap forward in our understanding of how the brain preserves memories over time. By acting as a molecular scaffold, these proteins ensure the stability of synaptic changes that underpin long-term memory. This discovery not only sheds light on the biological basis of memory but also offers a blueprint for future research and therapeutic innovation.
As scientists continue to unravel the intricacies of this molecular duo, the potential for groundbreaking treatments becomes clearer. From enhancing memory in aging populations to mitigating the effects of neurodegenerative diseases, the KIBRA-PKMζ connection offers a promising pathway for addressing some of the most pressing challenges in neuroscience.
Conclusion
Memory, once thought to be an ephemeral and mysterious phenomenon, is now being understood at the molecular level. The discovery of the KIBRA-PKMζ partnership provides a compelling explanation for how the brain defies the constraints of molecular turnover to preserve memory. This research not only advances our understanding of the brain’s inner workings but also points toward a future where memory loss is no longer an inevitable consequence of aging or disease.
Through continued exploration of this groundbreaking mechanism, we inch closer to unlocking the full potential of memory science and delivering tangible benefits to those affected by its decline. The intricate dance between KIBRA and PKMζ reminds us that even the smallest molecular interactions can have profound implications for the human experience.
Reference
Tsokas, Panayiotis, Changchi Hsieh, Rafael E. Flores-Obando, Matteo Bernabo, Andrew Tcherepanov, A. Iván Hernández, Christian Thomas, et al. “KIBRA Anchoring the Action of PKMζ Maintains the Persistence of Memory.” Science Advances 10, no. 26 (June 26, 2024). https://doi.org/10.1126/sciadv.adl0030.
It is now possible to create a virtual replica of you that “is able to embody your values and preferences with stunning accuracy,” MIT Technology Review reports (unpaywalled copy).
This is the conclusion of a study by a team including researchers from Stanford and Google DeepMind, published on arXiv.
Imagine if you could sit down with an Artificial Intelligence (AI) for a chat that lasts two hours, where you talk about everything from your childhood memories to your views on big issues like immigration. After your conversation, this AI could create an avatar that acts, thinks, and responds just like you.
This study involved over 1,000 people from diverse backgrounds who were interviewed for two hours each. The goal was to create ‘simulation agents’, which are AI models designed to reproduce the behavior of a specific person with high accuracy. These agents then went through the same set of tests as the humans to see how well they could replicate their behaviors, achieving an impressive 85% similarity.
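A simple way to see what a figure like “85% similarity” can mean is to compare an agent's survey answers with the human's, normalized by the human's own test-retest consistency (this is a sketch only; the encoding of answers as integers is hypothetical, and the paper defines its own normalized-accuracy measure):

```python
def agreement(answers_a, answers_b):
    """Fraction of survey items where two answer lists match."""
    assert len(answers_a) == len(answers_b)
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)

def normalized_accuracy(human_t1, human_t2, agent):
    """Agent-human agreement divided by the human's own test-retest
    consistency, so the agent is not penalized on questions the person
    answers inconsistently themselves."""
    return agreement(human_t1, agent) / agreement(human_t1, human_t2)

# Hypothetical answers to four survey items, coded as integers:
human_first_pass  = [1, 2, 3, 4]
human_second_pass = [1, 2, 3, 1]   # the human changed one answer
agent_answers     = [1, 2, 3, 2]   # the agent missed the same item
score = normalized_accuracy(human_first_pass, human_second_pass, agent_answers)
```

Normalizing this way matters because humans themselves only agree with their own earlier answers part of the time; an agent that matches a person as well as the person matches themselves scores 100%.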
Simulation agents
Simulation agents act as digital stand-ins for real people, allowing researchers to simulate social and behavioral scenarios. This can be useful for studying things like the spread of misinformation or how people might react in certain social situations.
These agents differ from the more common tool-based agents that perform specific tasks like booking appointments or retrieving information, rather than simulating human interactions or personalities.
John Horton, an associate professor from MIT, highlights that this approach represents a hybrid model where real human data feeds into AI personae that researchers use in simulations, opening new channels of research that would be too costly or ethically challenging with real people.
To make these AI agents, the researchers used qualitative interviews, where they talked to participants to gather rich, detailed information about their lives and views. This method proved more effective than traditional surveys for capturing the unique traits of individuals.
Joon Sung Park, leading the study, noted that interviews can reveal profound details about a person, like surviving a serious illness, which might not come out in typical surveys. This approach is more time-efficient for creating a detailed profile.
This new study suggests that with just a couple of in-depth conversations, it could be possible to create a fairly accurate digital twin. This could spark interest in companies like Tavus, which might now look into using similar techniques for their services.
Deepfake AI persons?
Advancements in AI open up discussions about privacy, consent, and the ethical implications of digital replication.
This technology isn’t without its risks. Just like AI can create deepfakes – fake media that looks real – this could lead to unauthorized digital impersonations. The methods used to assess the accuracy of these AI replicas were basic; they included standard social surveys and personality assessments, which might not capture all the nuances of human individuality. For instance, AI agents struggled more with behavioral tests that measure fairness and altruism.
Personality capture and preservation
However, there are other methods to create digital twins, where AI models learn from a person’s emails or other digital footprints, which yield a lot more data.
This is akin to the methods of personality capture and preservation proposed by William Sims Bainbridge and Martine Rothblatt, who want to collect and preserve digital reflections of an individual’s personality, essentially creating a digital version of a person. Their ideas sound like science fiction that could, one day, become science fact.
Bainbridge and Rothblatt propose to capture one’s personality by accumulating data including answers to structured questionnaires, but also emails, social media posts, blogs, photos, and online activities.
This collection forms a ‘mindfile’, which Rothblatt describes as a sum of saved digital reflections about an individual. The idea extends to using this data with future AI technologies to emulate or upload a person’s consciousness into a digital or robotic form, thereby achieving a form of digital immortality.
Bainbridge and Rothblatt believe that digital technology could capture human identity and personality. With sufficient technological advancement, digital data and AI could recreate or preserve an individual’s essence after physical death.
I think answers to questionnaires, emails, social media posts, blogs, photos, and online records collected with current means could not achieve the rich texture needed to emulate a real human personality.
But I don’t rule out the possibility that new brain-machine interfaces like Neuralink could make the process viable.
Of course this would be light years ahead of the preliminary experiments of the Stanford and Google DeepMind researchers. But we can think of these preliminary experiments as encouraging baby steps toward the distant goal of creating a digital replica of a person. Technology could eventually even allow us to endow the replica with consciousness.
Popular culture is warming up to the idea
Greg Gutfeld and his guests discussed the MIT Technology Review article on Gutfeld’s very popular show.
Of course the discussion was fun, as befits a comedy show. But the very fact that the host of a popular show discussed these arcane and weird ideas shows that popular culture is warming up to the possibility of creating digital persons.
The crypto gaming industry in 2024 is evolving into a new form, in a transformation far from complete. That’s the message of a comprehensive new Q3 2024 report on GameFi.
Game7’s report shows Web3 gaming is reshaping the global gaming landscape. Telegram is emerging as a launchpad for new titles, and Web2 gaming powerhouses are cautiously testing these waters. This shift puts pressures on traditional mobile platforms to adapt, while they grapple with restrictive policies and regulatory uncertainty.
Here’s how the Web3 gaming ecosystem is growing in 2024, and how it is segueing into a potentially bumper 2025.
Web3 Gaming Ecosystem
The total crypto gaming market cap peaked in early Q2 this year at around $40 billion, and has since halved, as hotter narratives like Bitcoin, AI and meme coins ate its share.
This doesn’t mean Game Over for the crypto gaming sector – the report’s data shows positive signs.
Blockchain ecosystems like Immutable and Arbitrum are thriving, driving rapid growth with innovative offerings, and attracting developers from rival platforms. While game announcements dipped 36% from 2023, fewer projects abandoned development, showing a maturing ecosystem.
Indie creators dominated: they drove over 90% of releases, favoring the RPG, casual, and action genres. Meanwhile, platforms like Epic Games Store embraced Web3, hinting at gaming’s bold new chapter amid ongoing challenges with integration and adoption.
Geographically, the Asia-Pacific region leads Web3 game development, accounting for nearly 40% of new titles, followed by North America at 35%. The USA contributes 27% of teams, with South Korea and Singapore following.
Genre trends remained stable: RPGs, casual, and action games top the charts as developers refine Web3 integration within the old familiars before any creative daring.
Telegram Reshapes Web3 Gaming
In addition to the rise of crypto trading bots on Telegram, the messaging app emerged as a surprising force in Web3 gaming in 2024. Telegram has carved a niche as a preferred platform for launching Web3 games, with 21% of new titles choosing it as their home. Distribution trends for new Web3 games largely followed patterns from 2023, but Telegram’s rise is a new turning point, as this graph shows –
What makes Telegram so appealing? Simplicity. Its massive user base (950 million active monthly users) and frictionless in-app experience sidestep the challenges of traditional app stores like Apple’s and Google’s.
To illustrate the discontent game developers have with Big Tech’s app stores, Epic Games – the parent company of Fortnite – sued Google and Samsung for blocking competition. For developers like these, Telegram answers the call for a way to distribute games without high store fees or restrictive NFT policies.
Even so, many game publishers are crafting Telegram strategies – not for full-fledged titles, but as a complementary user-acquisition stream.
Telegram’s success could also disrupt the status quo in mobile distribution. Its integration of blockchain gaming within a messaging app shows how traditional channels might adapt, or risk being left behind.
Traditional Gaming Giants Enter Web3
Traditional gaming platforms are beginning to tiptoe into Web3 territory, blending innovation with caution. The Epic Games Store (EGS) has taken a leading role, adding 85 new Web3 titles in the past year. While it lacks dedicated Web3 tools, its open-minded content policies have made it a trusted partner for blockchain projects.
EGS is also expanding its global presence, launching on Android and EU iOS in 2024. Though its mobile stores don’t yet welcome third-party Web3 games, its pro-Web3 stance on PC keeps it a platform to watch. Meanwhile, Steam holds firm on its controversial Web3 restrictions. Still, savvy developers are finding workarounds by releasing modified versions of their games to leverage Steam’s massive market reach.
Consoles are also testing the Web3 waters. Off the Grid by Gunzilla Games made waves as the first Web3 game to debut on consoles. By separating blockchain features from core gameplay, it hints at a future where platforms like Microsoft and Sony may cautiously embrace this new frontier. As policies evolve, Web3 teams could gain access to the vast console audience, reshaping gaming as we know it.
Key Blockchain Stats
The blockchain gaming ecosystem grew rapidly in 2024, with 105 gaming-capable blockchain networks announced – a 20% increase from the previous year. Of these, 64% were designed specifically for gaming, reflecting the rising developer interest in tailored blockchain solutions.
A major shift is happening toward Layer-2 and Layer-3 networks. L3 solutions, which only gained traction in 2023, accounted for 43% of new gaming blockchains this year, thanks to their customizable environments. L2s followed closely, making up 34% of the new entries. Though Layer-1 (L1) networks still host 66% of active games, most 2024 launches embraced L2 or L3 solutions for better scalability and gaming-specific features.
Ethereum Virtual Machine (EVM) continues to dominate as the go-to smart contract environment. Despite rising competition from ecosystems like Solana and Sui, developers are hesitant to leave the EVM due to its mature tools and lower switching costs. For now, EVM remains the backbone of blockchain gaming innovation.
Funding and Tokens
Web3 gaming funding in 2024 held steady, matching 2023 levels. Investors focused on content over infrastructure. Here again the same genres account for most of the market share: action, RPG, and casual games. Gaming chains and engagement platforms also attracted attention, but cautious optimism prevailed as investors awaited the next Axie Infinity moment.
Meanwhile, gaming token launches skyrocketed, up 200% from the previous year. By Q3, 74% of these launches were tied to game titles, reflecting a clear shift toward monetizing gameplay. The industry remains conservative, but the surge in tokens shows a drive for in-game economies and blockchain-powered experiences.
Web3 gaming projects face a complex regulatory landscape in the USA, dominated by the SEC and CFTC. With the election of crypto-friendly Donald Trump, these regulations might finally be clarified.
Shifts in the U.S. political climate suggest the possibility of a more collaborative regulatory approach. Many projects are adapting by designing tokens with utility-focused mechanics, aligning more closely with traditional gaming models. While some delays in token launches persist, developers are finding ways to integrate blockchain elements into gameplay, balancing innovation with compliance.
Conclusion
Web3 gaming in 2024 is evolving, driven by innovation, indie creativity, and platforms like Telegram. Developers are refining blockchain integration while navigating regulatory and technical hurdles. Traditional platforms are slowly adapting, hinting at a broader shift to Web3. Challenges remain, but the foundation for growth is strong.
Donald Trump will be back in the White House in 2025, possibly bringing with him a more business-friendly approach to new digital technology sectors like AI and crypto. His ally Elon Musk has repeatedly warned against AI overreach that could end mankind, but will Trump heed these warnings and rein in AI? Or follow the ideology that deregulation leads to economic growth? Only time will tell.
Trump is pro-business, and that provides good negotiating ground with AI giants like OpenAI, Google, and Microsoft as they try to find the right regulatory frameworks to contain the exponentially evolving machine-learning sector.
Trump’s victory in the 2024 presidential election is sure to have significant implications for the development and regulation of artificial intelligence (AI) and cryptocurrency-related AI projects. While lighter regulation could enhance the competitive position of U.S. tech firms, it also raises significant concerns about ethical standards, international cooperation, and the trajectory of technological growth.
The implications of these changes will be felt globally, as nations navigate the complexities of AI governance – and AI competition – in an increasingly interconnected world.
Here’s an analysis of the biggest potential impacts on AI that we might see in 2025:
The tech industry, particularly AI companies and startups, may welcome this less restrictive environment. It could potentially accelerate the development and deployment of AI technologies without the constraints of extensive governmental oversight. This approach might foster rapid innovation and allow U.S. companies to maintain a competitive edge in the global AI race.
Trump’s approach is likely to be welcomed by major tech companies that advocate for a lighter regulatory touch. Many in the industry argue that stringent regulations can stifle innovation and their competitiveness. However, companies can’t be relied on to self-regulate: they will surely prioritize profit over ethical considerations.
This is part of Trump’s broad economic agenda to reduce regulations on all businesses – not just digital. It could potentially lead to rapid advancements in AI capabilities – or to rapid AI catastrophe!
Potential Risks of Deregulation
The deregulatory approach raises concerns about safety, ethics, and accountability in AI development. Experts warn that without adequate regulatory frameworks, the risks associated with AI – such as bias, misinformation, and privacy violations – could escalate. The lack of robust oversight may hinder efforts to establish ethical standards and best practices in AI development, which are crucial for addressing the technology’s societal impacts.
Advocates of deregulation say it will spur innovation; however, nothing but regulation can establish the ethical standards and best practices needed to address AI’s societal impacts.
Influence over Global Standards
Trump’s deregulatory policies – and his poor reputation internationally – do not bode well for the USA’s ability to influence international norms and regulations governing AI. Historically, the USA has shaped global AI standards simply because it is home to the leading tech firms and research institutions: Google, Microsoft, MIT, etc. As countries like China continue to advance their AI capabilities, a weak regulatory landscape in the USA might further erode that influence and hinder its competitive edge.
Geopolitical Tensions
Trump’s administration has signaled its intention to continue and expand Biden’s protectionist measures. The USA could attempt tighter export controls on AI technologies, particularly to China. Such actions could create barriers to global cooperation in AI development and governance, exacerbating geopolitical tensions, and limiting the USA’s ability to lead global discussions on responsible AI use.
Cryptocurrency and AI Projects
Trump’s election victory is seen as potentially beneficial for cryptocurrency-related AI projects. His administration is likely to foster an environment that encourages innovation and investment in blockchain technologies and digital currencies. This could be advantageous for startups looking to develop new crypto solutions without heavy regulatory scrutiny.
The cryptocurrency market has already shown a positive response to Trump’s victory, with Bitcoin reaching an all-time high. This surge reflects the market’s anticipation of a more crypto-friendly regulatory environment under Trump’s leadership.
However, the lack of regulation could also lead to increased volatility in the cryptocurrency markets. A deregulated environment may attract speculative investments, but could also expose investors to higher risks associated with fraud and market manipulation.
Uncertainty for Innovators
The anticipated changes in regulation could introduce uncertainty for businesses involved in AI development. Companies might face challenges maintaining compliance with rapidly shifting regulations, and this could impact investment decisions and strategic planning. The lack of clear guidelines may deter some innovators from pursuing ambitious projects due to fears of potential backlash or future regulatory changes.
National Security and Defense AI
In the realm of defense and national security, Trump’s administration might pursue a less stringent regulatory environment for AI development related to military technologies. This could lead to closer relationships between the government and private organizations involved in developing defense-related AI. However, this approach might also diminish prospects for international cooperation on defense-related AI governance, potentially exacerbating global tensions in military technology development.
Economic Impact
Trump’s policies are expected to prioritize private sector gains in AI development. This approach might help companies move fast without adequate safeguards! While this could spur economic growth and innovation in the short term, it raises concerns about long-term consequences regarding consumer safety and privacy – as well as catastrophic AI risk.
Workforce and Education
The rapid advancement of AI under a deregulated environment could have significant implications for the workforce. It might create new job opportunities in the tech sector, but accelerate job displacement in other industries. Trump’s administration will need to address these challenges, potentially through workforce retraining programs and education initiatives focused on AI and related technologies.
Global Competitiveness
Trump’s approach aims to keep the USA ahead of other regions, particularly Europe and China, in AI development. The administration hopes that reducing regulatory barriers will spur domestic innovation and give the country a competitive edge over more regulated countries. However, this strategy also risks widening the gap between the USA and other nations in terms of AI governance and ethical standards.
Conclusion
Donald Trump’s victory signals a significant shift in the U.S. approach to AI and crypto-related AI projects. The anticipated deregulation is likely to spur rapid innovation and investment in these sectors. However, this may come at the cost of safety, ethics, and long-term societal impacts.
The success of this approach will depend on how well the administration can balance the drive for innovation with necessary safeguards. It will also require careful navigation of international relations, particularly in managing technology transfers and global AI governance.
As the AI landscape mutates under Trump’s leadership, the tech industry, policymakers, and the public will need to remain vigilant. They must work to ensure that the benefits of AI advancement are realized, while mitigating potential risks and ethical concerns. The coming years will be crucial in shaping the future of AI and its impact on society, both in the USA and globally.
In this article, we explore the growing influence of AI-generated content, focusing on deepfake technology and its impact on media and public trust. Deepfakes have made it harder to tell real from fake, challenging media literacy and raising concerns about misinformation, ethical issues, and societal risks. In a paper published in July 2024 in Devotion, Journal of Community Service, Riski Septiawan examines the technology behind deepfakes, their potential for misuse, and the steps needed to tackle the challenges they create. Here, we reflect on the key points of this study.
Introduction
AI-generated media is changing how people consume and evaluate information. Deepfakes, in particular, produce realistic but fabricated content that can manipulate public opinion and spread falsehoods. Septiawan’s research highlights how this technology can undermine trust and emphasizes the importance of media literacy to help people identify and critically analyze manipulated content. The study also discusses the need for ethical guidelines and regulations to address these challenges and ensure AI is used responsibly.
What is Deepfake Technology?
Deepfake technology uses advanced AI methods like generative adversarial networks (GANs) and convolutional neural networks (CNNs) to create lifelike but artificial images, videos, or audio. Here’s how it works:
Face Swap Deepfakes: AI replaces one person’s face with another in an image or video.
Lip Sync Deepfakes: AI matches lip movements to audio using deep learning models, creating videos that look natural.
Audio Deepfakes: Neural networks replicate a person’s voice, allowing someone to appear to say things they never did or even sing in another person’s voice.
A detailed look at the process behind deepfakes:
Data Collection: It starts with gathering extensive images or videos of the target face. The more diverse and high-quality the data, the better the AI model captures the nuances of the subject’s expressions, angles, and lighting.
Facial Recognition: Algorithms analyze the collected data to identify unique facial landmarks, such as the shape of the eyes, nose, and mouth. This data forms the foundation for creating realistic facial movements and expressions.
Face Replacement: Using GANs, the system replaces the original face with the target face. The generator creates synthetic images, while the discriminator critiques them, refining the output until it mimics reality.
Synchronization: Deep learning models align the replaced face’s lip and eye movements with the original speech or actions. LSTM and transformer models ensure temporal coherence, making the output seamless.
Editing and Finalization: The final step blends the manipulated face into the original media, adjusting details like skin tone, lighting, and shadows to produce content that withstands scrutiny.
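The generator-discriminator tug-of-war behind the face-replacement step can be sketched in a few lines of Python. This is a minimal, illustrative toy (single linear layers, 8-dimensional stand-ins for "faces", made-up learning rate and data), not the architecture of any real deepfake system:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent, lr = 8, 4, 0.05          # illustrative sizes, not real ones
G = rng.normal(size=(latent, dim)) * 0.1   # generator weights
D = rng.normal(size=(dim, 1)) * 0.1        # discriminator weights

def gen(z):
    """Generator: maps latent noise to a synthetic sample."""
    return np.tanh(z @ G)

def disc(x):
    """Discriminator: probability (in (0, 1)) that a sample is real."""
    return 1 / (1 + np.exp(-(x @ D)))

for step in range(200):
    real = rng.normal(loc=1.0, size=(32, dim))   # "real" data batch
    z = rng.normal(size=(32, latent))
    fake = gen(z)
    # Discriminator ascent: push scores on real data up, on fakes down.
    D += lr * (real.T @ (1 - disc(real)) - fake.T @ disc(fake)) / 32
    # Generator ascent: push the discriminator's score on fakes up,
    # back-propagating through the tanh nonlinearity.
    grad_fake = (1 - disc(fake)) @ D.T * (1 - fake**2)
    G += lr * (z.T @ grad_fake) / 32
```

Each side improves against the other: the discriminator learns to separate real from fake, and the generator learns to close that gap, which is exactly the "generator creates, discriminator critiques" loop described above, scaled down to toy dimensions.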
Beyond face swaps, deepfake technology enables lip-syncing and audio deepfakes. Lip-sync deepfakes use AI to synchronize lip movements with new audio inputs, allowing actors’ dialogue to be dubbed into multiple languages without losing natural synchronization. Meanwhile, audio deepfakes mimic voices using neural networks, enabling applications such as AI-generated voiceovers and voice modifications.
Positive Applications: Harnessing the Power of Deepfakes
Despite their risks, deepfakes hold immense potential for positive applications:
Entertainment and Creativity: Filmmakers can recreate historical figures, enhance special effects, and localize content for global audiences, pushing the boundaries of storytelling.
Education: Deepfakes enable immersive learning experiences, such as simulating historical events or creating realistic scenarios for medical training.
Healthcare: In therapy, virtual recreations of lost loved ones are being explored as tools for grief counseling. Deepfake avatars also simplify patient-doctor communication by presenting complex medical procedures in relatable formats.
Hyper-Personalization Meets Manipulation
Deepfake technology is revolutionizing personalized marketing. Imagine advertisements tailored to an individual’s preferences—adjusting the celebrity’s voice, accent, or setting to suit a viewer’s cultural context. While this enhances consumer engagement, it also opens the door to darker applications.
These examples demonstrate that deepfake technology, when used responsibly, can amplify human creativity and problem-solving.
How Deepfakes are Shaping Celebrity Realities
Many well-known deepfake examples feature public figures and celebrities, often demonstrating the technology in a seemingly harmless way. For example, a video of soccer star David Beckham speaking nine different languages showcases the impressive capabilities of deepfake technology, while comedian Jordan Peele created a “public service announcement” by overlaying his mouth and jaw onto former president Barack Obama using accessible apps. Additionally, several deepfake videos on TikTok convincingly portray actor Tom Cruise. Legally, the technology itself is not illegal, but the content it generates can cross legal boundaries.
Although these examples may seem harmless or entertaining, they show how easy it is to use this technology. When applied without consent, deepfakes can invade privacy and sometimes violate laws related to data protection or personal rights.
Ethical Dilemmas: Creativity or Exploitation?
The duality of deepfake technology becomes evident when its ethical challenges are scrutinized. While it empowers filmmakers, educators, and marketers, it also provides tools for exploitation. The same tools used to create relatable content can fabricate fake, harmful media that damages reputations or spreads misinformation, raising questions about consent and accountability. One of the gravest concerns is the creation of non-consensual explicit content, disproportionately targeting women. Such violations not only harm individuals but also highlight the lack of safeguards to prevent misuse.
Deepfakes also threaten the integrity of information ecosystems. In an age of fake news, deepfakes add a potent weapon to the arsenal of disinformation campaigns. Videos of political figures making inflammatory statements or fabricated footage of global events can manipulate public opinion, incite unrest, and undermine trust in institutions.
Beyond media applications, scammers have disturbingly found deepfake technology to be a potent tool for targeting vulnerable individuals, particularly the elderly, by creating hyper-realistic audio or video manipulations that mimic the voices and appearances of trusted family members or friends. Using deepfake audio, criminals can place convincing phone calls imitating a loved one’s voice and fabricate urgent scenarios, such as requesting financial help in an emergency or posing as a grandchild in distress. The same technology can generate fake video calls, further solidifying the illusion and exploiting emotional trust. Because the impersonation appears authentic, it bypasses traditional verification methods and makes victims far easier to deceive. Such scams highlight the urgent need for public awareness and technological safeguards against deepfake-enabled fraud.
The question of accountability looms large. Should the creators of deepfake tools bear responsibility for their misuse? While some advocate for strict regulations, others argue for self-regulation within the industry. Septiawan’s research emphasizes the urgent need for a balanced approach that protects individuals without stifling innovation.
Societal Implications: The Erosion of Trust in Media
Deepfake technology has profound implications for society, particularly in its ability to erode trust. The mere existence of deepfakes fuels skepticism, giving rise to the “liar’s dividend”—the phenomenon where authentic content can be dismissed as fake. This undermines evidence-based discourse, making it easier for bad actors to deny accountability.
The burden of verifying authenticity falls heavily on journalists, fact-checkers, and media platforms, straining resources and complicating the dissemination of reliable information. Furthermore, constant exposure to fabricated realities affects public perception, fostering cynicism and confusion.
Septiawan’s research highlights the psychological toll of deepfakes on individuals and communities. As people struggle to discern fact from fiction, the resulting mistrust can divide societies and exacerbate conflicts.
Legal and Regulatory Responses
The legal landscape surrounding deepfakes remains fragmented. While some countries have criminalized malicious deepfakes, enforcement remains a challenge, especially in cross-border contexts. Laws often lag behind the rapid evolution of AI, leaving gaps that bad actors exploit.
Tech companies are stepping in to address these gaps. Platforms like Facebook and Twitter have implemented policies to identify and remove harmful content, while companies like Microsoft are developing tools to detect AI-generated manipulations. Yet, these efforts are reactive, highlighting the need for proactive solutions like embedding digital watermarks in authentic media and educating users about deepfake detection.
Why Media Literacy Matters
As deepfakes become more common, the ability to recognize manipulated media is critical. Septiawan stresses that media literacy is essential for helping people evaluate what they see and hear. For instance, Meta has introduced “Made with AI” tags to label AI-generated content, aiming to improve transparency. However, such measures need broader education efforts to be effective.
Data literacy is another important area. Understanding how AI works and how data is used can help hold creators accountable and prevent the misuse of this technology.
Conclusion
The rapid growth of deepfake technology highlights the need for stronger media and data literacy, clear ethical standards, and regulations to manage its risks. Deepfakes have the power to influence opinions, erode trust, and harm individuals, making it essential to act quickly to address these challenges. By promoting awareness and accountability, society can better manage the impacts of AI-generated media while also benefiting from its positive applications.
Reference
Septiawan, Riski. “Critical Analysis of AI-Produced Media: A Study of the Implications of Deepfake Technology.” Devotion Journal of Community Service 5, no. 7 (July 2024): 735–741. https://doi.org/10.59188/devotion.v5i7.747.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Bitcoin hit $99,500 today, tantalizing crypto price watchers. The king of crypto attracts renewed attention from retail and institutional investors as it flirts with $100,000.
The recent U.S. presidential election delivered a Republican Senate majority and a Trump presidency, signaling the possibility of more relaxed crypto regulations. Just over six months ago, it was unthinkable that Bitcoin could hit $100K, despite what all those laser eyes would have you believe. Now, Bitcoin’s path to $100,000 feels inevitable. The only real question is: when will it happen?
Disclaimer: This article is for educational purposes only and does not constitute financial advice of any kind.
Bitcoin Catalysts for the Rally to $100,000
The recent surge in Bitcoin’s price marks a turning point in the cryptocurrency market. Following Trump’s stunning election victory over Vice President Kamala Harris, Bitcoin has rallied non-stop, and the news that other pro-Bitcoiners like Elon Musk and RFK will be in Trump’s government hasn’t hurt. This surge highlights the potential for November to remain a historically strong month for Bitcoin, as past data shows consistent upward trends during this period.
The convergence of political, institutional, and retail factors is fueling optimism for Bitcoin to break the $100k barrier.
Gary Gensler, the current chair of the U.S. Securities and Exchange Commission (SEC), is widely seen as anti-crypto. It is hard to argue with this opinion: Gensler has sued and fined hundreds of crypto firms including Kraken, Coinbase, and Ripple. Trump said earlier this year – at Bitcoin Nashville – that he would fire Gensler on day one, and rumors are now that ‘GG’ will resign and be replaced by Brian Brooks, a former Acting Comptroller of the Currency and former CEO of Binance.US, which would be an extremely bullish development for the industry.
Trump is promising to shake things up by aligning with a celebrity team consisting of crypto favorite Elon Musk – who will lead the new Department of Government Efficiency (DOGE) – Vivek Ramaswamy, and Robert F. Kennedy Jr., who revealed buying 21 Bitcoins, three for each of his children. That holding is one-millionth of BTC’s hard cap of 21 million coins.
The creation of the Department of Government Efficiency has also added momentum. While primarily aimed at cutting government spending and improving efficiency, its acronym is a nod to Dogecoin. This has sparked excitement among Bitcoin enthusiasts and bolsters the narrative that crypto is becoming a core part of governmental strategy, encouraging wider adoption and investment.
Bitcoin ETFs and Institutional Capital
Institutional interest has reached high levels, with spot Bitcoin ETF inflows soaring. Major asset managers are racing to secure Bitcoin ETFs, signaling their belief in Bitcoin’s long-term potential.
Retail investors are also playing a significant role in the rally. Data shows retail interest in Bitcoin has hit a 52-month high, driving increased trading volumes. Sidelined investors are expected to ‘FOMO in’, driving Bitcoin’s price higher.
Bitcoin Halving
The 2024 Bitcoin halving event brought relief and excitement to investors. It is widely believed that Bitcoin takes at least six months after a halving to surge. The event, which cuts the supply of new coins in half, is a simple lesson in economics: as supply decreases and demand rises, prices naturally go up.
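The supply cut itself is mechanical and easy to sketch. The snippet below models Bitcoin’s block-subsidy schedule using the protocol’s actual parameters (50 BTC at genesis, halved every 210,000 blocks, roughly every four years):

```python
# Bitcoin's block subsidy starts at 50 BTC and halves every 210,000 blocks.
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50.0

def block_subsidy(height: int) -> float:
    """BTC minted per block at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:  # after 64 halvings the subsidy rounds down to zero
        return 0.0
    return INITIAL_SUBSIDY / (2 ** halvings)

# The April 2024 halving at block 840,000 cut new issuance per block
# from 6.25 BTC to 3.125 BTC:
print(block_subsidy(839_999), "->", block_subsidy(840_000))  # 6.25 -> 3.125
```

Each halving cuts the flow of new coins reaching the market in half, which is why analysts watch the months that follow for supply-driven price pressure.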
Will Bitcoin Reach $100k in 2024? What Experts Say
Bitcoin’s future just got a serious upgrade. Geoff Kendrick, Standard Chartered’s global head of digital assets research, is leading the charge. Kendrick predicts Bitcoin could hit $125,000 by the end of 2024 and $200,000 by late 2025. What’s fueling his optimism? Trump’s bold pro-crypto promises.
Trump plans to clean up the system by dismissing SEC chair Gary Gensler, whose tough stance has cost crypto firms millions in fines. I bet my last satoshi that no crypto fan wants to see Gensler in charge of the SEC given the scale of damage he has done to the crypto industry. He won’t be missed.
“We’ve gone from a regulatory landscape under Biden that was largely adversarial, to one that actively supports the industry,” said Kendrick.
Ryan Lee of Bitget Research expects Bitcoin to touch the $100,000 mark by the end of November.
In Lee’s words: “If history repeats itself, Bitcoin’s projected growth could take it well above $100,000 by month-end.” CoinShares’ head of research James Butterfill claims that $100,000 by the end of the year “doesn’t sound unreasonable”, as that price would put Bitcoin at only about 10% of gold’s market cap. Crypto bros see Bitcoin as ‘digital gold’.
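The gold comparison is easy to sanity-check with rough numbers. The figures below for circulating supply and gold’s market cap are ballpark late-2024 assumptions, not live market data:

```python
# Rough check of the "10% of gold's market cap" comparison.
# Both inputs are ballpark assumptions, not live market data.
btc_price = 100_000            # the hypothetical milestone price, in USD
btc_supply = 19_700_000        # approx. BTC in circulation, late 2024
gold_market_cap = 18e12        # assumed gold market cap, ~$18 trillion

btc_market_cap = btc_price * btc_supply          # about $1.97 trillion
share_of_gold = btc_market_cap / gold_market_cap
print(f"BTC at $100k is about {share_of_gold:.0%} of gold")
```

With these assumptions, Bitcoin at $100,000 would sit at roughly a tenth of gold’s market cap, which is the intuition behind the ‘digital gold’ framing.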
Gabriele Giancola, the CEO of Qiibee, says Trump’s re-election could propel BTC to $100,000 in 2024.
“Bitcoin’s performance historically aligns with significant geopolitical and economic events, and the aftermath of the U.S. election could see BTC reaching $100,000 this year or shortly after a potential Trump reelection.”
Not all experts and analysts share this excitement. Some are approaching Bitcoin’s price discovery with a dose of caution. Ki Young Ju, the founder and CEO of the crypto intelligence platform CryptoQuant, is taking the cautious route. According to Ju, Bitcoin will tumble to nearly $59,000 at the end of the year.
Other analysts believe $100,000 is within sight but not in 2024. Bitfinex’s Head of Derivatives Jag Kooner expects Bitcoin to hit the $100k milestone on Trump’s inauguration on January 20, 2025.
As Bitcoin approaches $100,000, where are we in the four seasons of crypto seven months after the halving?
Conclusion
Bitcoin’s unstoppable journey to $100k makes for a powerful narrative of shifting tides in politics, economics, and investor sentiment. With Trump’s promised pro-crypto policies, a wave of institutional interest, and the ever-powerful force of retail FOMO, the stage is set for Bitcoin to make history.
Whether it happens this week, this year or next, one thing is clear: the excitement, innovation, and potential surrounding Bitcoin are unstoppable. The $100K milestone is somewhere in the near future.
We are witnessing history in the making.
Charles Hoskinson, founder of Cardano and Input Output (IOHK), is stepping into a significant role in shaping cryptocurrency policy in the United States. Hoskinson announced the development on Twitter on November 9, at a critical time for the industry, following the election of pro-crypto presidential candidate Donald Trump over Kamala Harris.
Hoskinson underscored the need for regulatory clarity to move the crypto industry forward and undo the damage that has been done over the years.
Hoskinson calls it the ‘best opportunity’ to influence crypto policy. President Trump campaigned on promises of regulatory clarity for cryptocurrencies, and the new administration seems to be playing ball to deliver on these commitments.
The Republicans’ clean sweep aligns with the crypto industry’s need for a supportive regulatory environment. With no clear regulations, many crypto companies suffered at the hands of the U.S. Securities and Exchange Commission (SEC).
The Importance of Regulatory Clarity
The lack of clear regulations has been a longstanding issue for the crypto industry in the USA. Over the years, companies like Ripple, Kraken and Coinbase have faced significant challenges, such as prolonged legal battles with the SEC, which Ripple CEO Brad Garlinghouse claims has stifled innovation.
Due to the crackdown and regulatory uncertainty, several U.S. crypto companies threatened to relocate overseas. However, the tide is turning in the hope that regulatory clarity will bring more developers and investors to the industry and to the USA.
Hoskinson’s Vision
Hoskinson declared, “We have to do this, and we have to get it done.”
His vision includes a policy office in Washington, D.C., staffed by experts dedicated to working with lawmakers and administration officials. The goal is a bipartisan group that fosters innovation while addressing sensible regulatory concerns.
This initiative is expected to provide the clarity needed for crypto businesses to thrive without fear of regulatory retaliation. Hoskinson wants to help create a transparent framework that defines what constitutes a security versus a commodity.
Pro-Crypto U.S. Policy is a Watershed Moment
The inclusion of crypto figureheads in the new Trump administration is truly a transformative moment for the entire industry. By advocating for regulations crafted by the crypto community, Hoskinson seeks to ensure that the U.S. remains a global leader in blockchain innovation. He has openly criticized the influence of large financial entities like BlackRock on policy-making, emphasizing the need for industry-driven solutions.
Hoskinson sees this as an opportunity to reverse the damage caused by the SEC due to regulatory overreach. The crypto industry has found a powerful friend in Trump and is now taking the fight to the SEC.
If successful, the pro-crypto U.S. policy could set a precedent for how blockchain companies engage with governments worldwide. This approach not only enhances the legitimacy of the industry but also paves the way for the broader adoption of decentralized technologies. The days of the USA trying to kill crypto seem to be over.
Impact on the Market
The announcement has already had a tangible impact. Cardano’s ADA token surged massively following the news, reaching a seven-month high. This price movement reflects growing investor confidence in the potential alignment of U.S. regulations with the needs of the crypto industry. Market participants are optimistic that clear rules will reduce uncertainty and encourage institutional adoption of blockchain technologies.
Hoskinson’s move also positions Cardano as a leader in bridging the gap between technological innovation and policy. By taking an active role in shaping regulations, Hoskinson aims to foster an environment where decentralized platforms can coexist with traditional financial systems.
Wrapping Up
Hoskinson’s decision to take an active role in shaping U.S. crypto policy is a bold step forward for the industry:
With clear regulations, the blockchain space can unlock its full potential, fostering innovation and economic opportunity.
With industry experts helping shape policy, the crypto space has the best shot to be where it wants to be.
The true impact of these shifts will be known soon.