Deep fakes: What’s next? Anticipating new twists and turns in humanity’s oldest struggle
Apr. 09, 2024. 29 min read.
Powerful AI will soon be deployed to make fake news more convincing. AI can also provide part of the solution to the problems arising. But only part of the solution.
Fake news that the Pope endorsed Donald Trump (a story that was shared more widely than any legitimate news story that year). A fake picture of former US VP Michael Pence in his youth seemingly as a gay porn star. Fake audio of UK political leader Keir Starmer apparently viciously berating a young volunteer assistant. Another fake audio of London mayor Sadiq Khan apparently giving priority to a pro-Palestinian march over the annual Remembrance Day walk-past by military veterans. Fake videos of apparent war atrocities. Fake pornographic videos of megastar pop celebrities.
What’s next? And how much does it really matter?
Some observers declare that there’s nothing new under the sun, and that there’s no special need to anticipate worse to come. Society, they say, already knows how to deal with fake news. Fake news may be unpleasant – and it’s sometimes hilarious – but we just have to keep calm and carry on.
I strongly disagree, as I’ll explain below. I’ll review ten reasons why fake news is likely to become worse in the months ahead. Then I’ll suggest ten steps that can be taken to regain our collective sanity.
It remains to be determined whether these ten steps will be sufficient, or whether we’ll all sink into a post-truth swamp, in which sneering suspicion displaces diligent understanding, fake science displaces trustworthy science, fake journalism displaces trustworthy journalism, and fake politicians seize power and impose their dictatorial whims.
Deception: the back story
It’s not flattering to say it, but we humans have been liars since before the dawn of history. And, just as important, we have been self-deceivers as well: we deceive ourselves in order to be more successful in deceiving others.
In case that idea offends you, I invite you to delve into the evidence and analysis offered in, for example:
- Why We Lie: The Evolutionary Roots of Deception and the Unconscious Mind by David Livingstone Smith
- Deceit and Self-Deception: Fooling Yourself the Better to Fool Others by Robert Trivers
- The Elephant in the Brain: Hidden Motives in Everyday Life by Kevin Simler and Robin Hanson
We implore our children to be truthful but also guide them to know when to tell white lies – “thank you for this lovely present, it’s just what I wanted!” And the same ancient books of the Bible that command us “do not bear false witness” appear to celebrate deceit when practiced by figures such as Jacob, Rachel, Rebekah, and Tamar.
I could tell you, as well, that the ancient Greek dramatist Aeschylus, known as ‘the father of tragedy’, made this pithy observation two and a half millennia ago: “Truth is the first casualty in war”. One tragedy – war – births another – deception.
As it happens, it seems likely that this quotation is a misattribution. I’ll come back to that point later, when talking, not about deception, but about solutions to deception. But regardless of whoever first uttered that saying, we can appreciate the insight it contains. In times of bitter conflict, there are special incentives to mislead observers – about the casualties we have suffered, about the casualties we have inflicted on opposing forces, about our military plans for the future, and much more.
It’s not just war that provides an incentive to deceive. It’s the same with politics: opposing parties compete to set the narrative, and individual politicians seek to climb past each other on what Benjamin Disraeli dubbed “the greasy pole” of political intrigue. It’s the same with commerce, with companies ready to spread misleading ‘FUD’ (fear, uncertainty, and doubt) regarding the comparative strengths of various forthcoming products and services. And it’s the same in private life, as we seek to portray ourselves in a favorable light in the eyes of family and friends, hiding our physical and psychological warts.
In this sense, deception is old news. We’ve had ‘fake news’ for as long as there has been ‘news’.
It’s tempting, therefore, to yawn when people draw attention to more recent examples of fake news and deception.
But that would be a giant mistake.
It’s technology that’s making the difference. Technology ramps up the possibilities for fake news to be even more deceptive, more credible, more ubiquitous, more personal, and more effective. Led by leaps in capabilities of AI systems, technology is enabling dramatic new twists in the struggle between truth and lies. It’s becoming even harder to distinguish between trustworthy and untrustworthy information.
If we fail to anticipate these developments, we’re likely to succumb to new waves of deception. The consequences may be catastrophic.
But forewarned is forearmed. By drawing on insights from humanity’s better experiences, we should be able to create technologies, processes, and institutions that help us to block these oncoming waves.
Ten twists
1. Fake news at scale
If at first you fail, why not try again?
You tried to deceive your target audience, but they were not swayed. This time, they saw through your lies. Or perhaps they didn’t even pay attention.
But if trying is cheap and quick, you can try again, this time with a different false narrative, expressed in a different voice.
What’s changed is that it’s much cheaper to try again. You can take advantage of automation, always-on networks, social media, and generative AI, to create and distribute new pieces of fake news. It’s mass-production for lies.
You’re not constrained by only creating one bot on social media. You can create armies of them.
You’re not constrained by having to write text yourself, or create suitably misleading images. You can obtain good results from a few clicks of a mouse.
The result is that discussion is being flooded with deliberately false narratives.
2. Fake news that earns money
Some false narratives are designed to try to change people’s minds. They want to change voting decisions, purchasing decisions, relationship decisions, and so on.
But other false narratives have a different purpose: to earn money via advertising clicks or affiliate marketing revenue share.
Viewers are attracted to websites by content that is outrageous, inflammatory, intriguing, or funny. They spend more time on these sites to explore the other content there, enjoying being outraged, inflamed, intrigued, or simply amused. And while on these sites, they may click on other links that generate revenue for the owners of the site.
In this case, the content creators have no special interest in whether the content matches their own political or philosophical outlooks. They produce whatever earns them the most clicks. Indeed, some clickbait merchants set up websites posting contradictory stories, to catch traffic from both sides of the political spectrum.
As a sad side-effect, people’s minds become increasingly confused. Being misled by fake content, they become less able to distinguish fantasy from reality.
3. Fake news with a personal appeal
It’s not just that fake news is being created on a greater scale than ever before. It’s being created with a greater variety than ever before.
Technology makes it easier to create different variants of the same false narrative. Some variants can be sent to people who are supporters of Candidate A within Party P. A different variant can be sent to people who support Candidate B within Party P. Yet other different variants target people whose favored candidates are from Party Q, Party R, and so on.
More than that: once software has learned which kind of pretty face each person is likely to look at – or which kinds of music each person wants to listen to – these variants can easily be generated too, and directed at each target.
4. Fake news based on footprints
You might wonder: how does software know that I am likely to be distracted by particular kinds of pretty faces, or particular kinds of music?
That’s where extensive data gathering and analysis come to the fore. We are each constantly generating online footprints.
For example, Facebook notices that when it places a chess puzzle in my timeline, I tend to click on that conversation, to consider the position in more detail. Facebook observes my interest in these puzzles. Soon, more chess puzzles are being shown to me.
That particular inference is relatively straightforward. Other inferences depend on a wider review of my online activity – which posts I ‘like’, which posts I ‘hide’, and so on.
The algorithms make all kinds of deductions from such reviews. They’re not always correct, not even close. But the AI systems that create personalized fake news have greater numbers of positive hits than those that don’t.
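The kind of inference just described can be sketched as a simple tally of engagement signals. This is a toy illustration with invented weights; real recommender systems use far richer signals and models:

```python
from collections import Counter

def infer_interests(events, like_weight=2, hide_weight=-3, click_weight=1):
    """Score topics from a stream of (topic, action) engagement events.

    A toy model of footprint-based inference: likes and clicks raise a
    topic's score, hides lower it. The weights are illustrative guesses.
    """
    weights = {"like": like_weight, "hide": hide_weight, "click": click_weight}
    scores = Counter()
    for topic, action in events:
        scores[topic] += weights.get(action, 0)
    # Topics with positive scores become candidates for more of the same content
    return [topic for topic, score in scores.most_common() if score > 0]

events = [
    ("chess", "click"), ("chess", "like"), ("chess", "click"),
    ("politics", "hide"), ("music", "click"),
]
print(infer_interests(events))  # chess ranks first; hidden topics drop out
```

Even this crude tally shows why hiding a post is such a strong signal to the algorithm: a single negative weight can outweigh several positive ones.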
5. Fake news that builds on top of truth
The best lies mix truth with untruth. These lies are especially effective if the truth in question is one that much of society likes to suppress.
Consider a simple example. A leaked document here, a whistleblower there – a few hints suggest something fishy is going on: there is bureaucratic corruption and nepotism within a political state. Then the news-faker adds the unjustified inference: the government in question is irretrievably corrupt. Hence the conclusion: kick all these politicians out of power!
Again: a narrative might give a number of examples of people experiencing remission from long-standing diseases, despite forecasts from white-coated doctors that the disease was fatal. Then it adds the lie: what matters most in healthcare is your personal attitude, rather than expensive drugs that Big Pharma are trying to sell. Therefore: stop listening to your doctor, and instead purchase my course in positive thinking for $29.99 a month!
Again: members of some minorities suffered appalling abuses in trials of various medical procedures, where there was no informed consent, and where there was an apparent casual disregard for the suffering entailed. And then the lie: present-day society is incorrigibly racist and irredeemably exploitative. Therefore: it’s time to wield pitchforks!
The cleverest fake news combines this principle with the previous one. It works out our belief-systems from our online footprints – it figures out what we already suspect to be true, or hope to be true, even though the rest of society tends to think differently. Then it whips up a fake narrative from beliefs we support plus the new message it’s trying to inject into our minds.
In this way, it flatters us, in order to better mislead us.
No wonder that we often fall for that kind of deception.
6. Fake news that weaponizes friendships
Each of us is more likely to pay attention to a message if it comes from a person that we think we like – someone we perceive as one of our special friends.
If our friend is concerned about a topic, it makes us more likely to be concerned about it too – even if, previously, we might not have given that topic a second thought.
This is where the sinister power of the systems that manufacture fake news reaches higher levels. These systems invest time to create fake personas – people who we welcome as our ‘friends’ on social media.
At first, these friends say nothing out of the ordinary. We forget whether or not we met them in real life. Their names become increasingly familiar to us. We imagine we know lots about them – even though their entire backstory is fictitious.
And that’s when the poisonous messages start seeping into your conversations and then into your thoughts. And without you realizing what has happened, a fake friend has led you into a fake idea.
7. Fake news with amplification support
If we hear the same opinion from multiple sources, we may at first resist the idea, but then start to accept it.
That’s especially true if the opinion receives apparent support from seemingly credentialed experts.
Thus when some fake audio is posted to social media, other fake posts soon accompany it. “I’m an expert in audio authentication”, a bot declares. “I’ve studied the clip carefully, and I assure you it’s genuine”.
If we don’t look closely, we’ll fail to spot that the credentials are bogus, and that there’s no real-world audio expert behind these claims.
The greater the number (and the greater the variety) of the apparent endorsements, the easier it becomes for some of these fake endorsements to bypass our critical faculties and to change our minds.
8. Fake news that exploits our pride
We all like to tell ourselves: we’re not the kind of person who falls for a simple conjuring trick.
Other people – those not so smart as us, we think – might be misled by dubious claims in advertisements or social media memes. Not us!
This has been called the bias blind spot – the cognitive bias that says “other people have cognitive biases, but not me!”
But recall that our ability to deceive ourselves is key to our ability to deceive others. If we are conscious of our lies, astute listeners will notice. That’s why our subconscious needs to mislead our conscious mind before we in turn can mislead other people.
In the same way, it is an inflated self-confidence that we are good reasoners and good observers that can set us up for the biggest failures.
Couple a misplaced pride in our own critical faculties with the warm feelings that we have developed for friends (either fake online personas, as covered above, or real-world friends who have already fallen into social media rabbit holes), and we are set up to be suckered.
9. Fake news that exploits alienation
Pride isn’t the only emotion that can tempt us into the pit of fake news. Sometimes it can be a sense of grievance or of alienation that we cling to.
Unfortunately, although some aspects of the modern world feature greater human flourishing than ever before, other aspects increase the chances of people nurturing grievances:
- The inability of large segments of the population to afford good healthcare, good education, or good accommodation
- The constant barrage of bad news stories from media, 24 hours a day
- A matching barrage of stories that seem to show the “elites” of society as being out-of-touch, decadent, uncaring, and frivolous, wallowing in undeserved luxury.
As a result, fake news narratives can more easily reach fertile soil – unhappy minds skip any careful assessment of the validity of the claims made.
10. Fake news with a lower barrier to entry
Perhaps you’re still thinking: none of the above is truly novel.
In a way, you would be correct. In past times, clever operators with sufficient resources could devise falsehoods that misled lots of people. Traditional media – including radio and newspapers – were spreading destructive propaganda long before the birth of the Internet.
But the biggest difference, nowadays, is how easy it is for people to access the tools that can help them achieve all the effects listed above.
The barrier to entry for purveyors of far-reaching fake news is lower than ever before. This is an age of ‘malware as a service’, dark net tutorials on guerrilla information warfare, and turnkey tools and databases.
It’s an age where powerful AI systems can increasingly be deployed in service of all the above methods.
Happily, as I’ll discuss shortly, these same AI systems can provide part of the solution to the problem of ubiquitous fake news. But only part of the solution.
Interlude: a world without trust
First, a quick reminder of the bad consequences of fake news.
It’s not just that people are deceived into thinking that dangerous politicians are actually good people, and, contrariwise, that decent men and women are actually deplorable – so that electors are fooled into voting the dangerous ones into power.
It’s not just that people are deceived into hating an entire section of society, seeing everyone in that grouping as somehow subhuman.
It’s not just that people are deceived into investing their life savings into bogus schemes in which they lose everything.
It’s not just that people are deceived into rejecting the sound advice of meticulous medical researchers, and instead adopt unsafe hyped-up treatments that have fearful health consequences.
All of these examples of unsound adoption of dangerous false beliefs are, indeed, serious.
But there’s another problem. When people see that much of the public discourse is filled with untrustworthy fake news, they are prone to jump to the conclusion that all news is equally untrustworthy.
As noted by Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab,
A pernicious harm of fake news is the doubt it sows about the reliability of all news.
Thus the frequent lies and distortions of fringe news sites like InfoWars, Natural News, and Breitbart News lead many people to conclude that all media frequently publish lies. Therefore nothing should be trusted. And the phrase “mainstream media” becomes a sneer.
(They find some justification for this conclusion in the observation that all media make some mistakes from time to time. The problem, of course, is in extrapolating from individual instances of mistakes to applying hostile doubt to all news.)
Baroness Onora O’Neill of the Faculty of Philosophy at the University of Cambridge commenced her series of Reith Lectures in 2002 by quoting Confucius:
Confucius told his disciple Tsze-kung that three things are needed for government: weapons, food, and trust. If a ruler can’t hold on to all three, he should give up the weapons first and the food next. Trust should be guarded to the end: ‘without trust we cannot stand’.
Sadly, if there is no trust, we’re likely to end up being governed by the sort of regimes that are the furthest from deserving trust.
It’s as the German-born American political theorist Hannah Arendt warned us in her 1951 book The Origins of Totalitarianism:
The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.
However, the technologies of the 2020s put fearsome possibilities within our grasp that writers in 1951 (like Arendt) and in 2002 (like O’Neill) could hardly have imagined.
In previous generations, people could keep their inner thoughts to themselves, whilst outwardly kowtowing to the totalitarian regimes in which they found themselves. But with fake news twisted in the ten ways described above, even our inner minds will be hounded and subverted. Any internal refuge of independent thinking is likely to be squelched. Unless, that is, we are wise enough to take action now to prevent that downward spiral.
Regaining trust
What can be done to re-establish trust in society?
Having anticipated, above, ten ways in which the problem of fake news is becoming worse, I now offer an equal number of possible steps forward.
1. Education, education, education
Part of growing up is to learn not to trust so-called 419 scam emails. (The number 419 refers to the section of the Nigerian Criminal Code that deals with fraud.) If someone emails us to say they are a prince of a remote country and they wish to pass their inheritance to us – provided we forward them some hard cash first – this is almost certainly too good to be true.
We also learn that seeing is not believing: our eyes can deceive us, due to optical illusions. If we see water ahead of us on a desert road, that doesn’t mean the water is there.
Similarly, we all need to learn the ways in which fake news stories can mislead us – and about the risks involved in thoughtlessly spreading such news further.
These mechanisms and risks should be covered in educational materials for people of all ages.
It’s like becoming vaccinated and developing resistance to biological pathogens. If we see at first hand the problems caused by over-credulous acceptance of false narratives, it can make us more careful on the next occasion.
But this educational initiative needs to do more than alert people to the ways in which fake news operates. It also needs to counter the insidious view that all news is equally untrustworthy – the insidious view that there’s no such thing as an expert opinion.
This means more than teaching people the facts of science. It means teaching people the methods science uses to test hypotheses, and the reasons why science assesses specific hypotheses as plausible. Finally, it means teaching people: “here are the reasons to assign a higher level of trust to specific media organizations”.
That takes us to the second potential step forward.
2. Upholding trustworthy sources
Earlier, I mentioned that a quote often attributed to the fifth century BC writer Aeschylus was almost certainly not actually said by him.
What gives me confidence in that conclusion?
It’s because of the reliance I place in one online organization, namely Quote Investigator. In turn, that reliance arises from:
- The careful way in which pages on that site reference the sources they use
- The regular updates the site makes to its pages, as readers find additional relevant information
- The fact that, for all the years I’ve been using that site, I can’t remember ever being misled by it
- The lack of any profit motivation for the site
- Its focus on a particular area of research, rather than spreading its attention to wider topics
- Positive commendations for the site from other researchers who have gained and maintained a good reputation.
Other organizations have similar aspirations. Rather than “quote checking”, some of them specialize in “fact checking”. Examples include:
- Snopes
- FactCheck.org
- PolitiFact
- Full Fact
- AFP Fact Check
- The Washington Post Fact Checker
- Lead Stories
- The Conversation FactCheck
- BBC Verify
These sites have their critics, who make various allegations of partisan bias, overreliance on supposed experts with questionable credentials, subjective evaluations, and unclear sources of funding.
My own judgment is that these criticisms are mainly misplaced, but that constant vigilance is needed.
I’ll go further: these sites are among the most important projects taking place on the planet. To the extent that they fall short, we should all be trying to help out, rather than denigrating them.
3. Real-time fact-checking
Fact checking websites are often impressively quick in updating their pages to address new narratives. However, this still leaves a number of problems:
- People may be swayed by a false narrative before that narrative is added to a fact-checking site
- Even though a piece of fake news is soundly debunked on a fact-checking site, someone may not be aware of that debunking
- Even if someone subsequently reads an article on a fact-checking site that points out the flaws of a particular false narrative, that narrative may already have caused a rewiring of the person’s belief systems at a subconscious level – and that rewiring may persist even though the person learns about the flaws in the story that triggered these subconscious changes
- The personalization problem: false narratives tailored to individual targets won’t be picked up by centralized fact-checking sites.
AI could hold part of the answer. Imagine if our digital media systems included real-time fact-checking analyses. That’s part of the potential of AI systems. These real-time notifications would catch the false information before it has a chance to penetrate deeply into our brain.
Our email applications already do a version of this: flagging suspicious content. The application warns us: this email claims to come from your bank, but it probably doesn’t, so take care with it. Or: the attachment to this email purports to be a PDF, but it’s actually an executable file that will likely cause damage.
Likewise, automated real-time fact-checking could display messages on the screen, on top of the content that is being communicated to us, saying things like:
- “The claim has been refuted”
- “Note that the graph presented is misleading”
- “This video has been doctored from its original version”
- “This audio has no reliable evidence as to its authenticity”
- “There is no indication of a cause-and-effect relationship between the facts mentioned”
In each case, ideally the warning message will contain a link to where more information can be found.
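A minimal sketch of how such a real-time overlay might work is below. The claim database, the warning texts, and the substring matching rule are all invented for illustration; a production system would use semantic claim matching against live fact-checker feeds, not hard-coded substrings:

```python
# Hypothetical debunked-claim database: pattern -> (warning, link).
# The patterns and example.org links are placeholders, not real fact-checks.
DEBUNKED = {
    "pope endorsed": ("This claim has been refuted", "https://example.org/check/1"),
    "miracle cure": ("No reliable evidence supports this treatment", "https://example.org/check/2"),
}

def annotate(post_text):
    """Return warning labels to overlay on a post, each with a link to more detail."""
    text = post_text.lower()
    warnings = []
    for pattern, (message, link) in DEBUNKED.items():
        if pattern in text:
            warnings.append(f"⚠ {message} – more info: {link}")
    return warnings

for warning in annotate("BREAKING: the Pope endorsed a candidate!"):
    print(warning)
```

The key design point mirrors email spam flags: the warning is attached to the content as it arrives, before the reader has absorbed the claim, rather than waiting for the reader to visit a fact-checking site later.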
4. Decentralized fact-checking
The next question that arises is: how can people be confident in relying on specific real-time fact-checkers?
We can already imagine their complaints:
- “This fact-checker is wokism gone mad”
- “This fact-checker serves Google, not me”
- “This fact-checker serves the government, not me”
- “I prefer to turn off the fact-checker, to receive my news free from censorship”
There’s no one easy answer to these objections. Each step I describe in this list of ten is designed to reduce some of the apprehension.
But an important step forward would be to separate the provision of content from the fact-checking layer. The fact-checking layer, rather than being owned and operated by the commercial entity that delivers the media, would ideally transcend individual corporations. For example, it could operate akin to Wikipedia, although it would likely need more funding than Wikipedia currently receives.
Further developing this model, the fact-checking software could have various settings that users adjust, reflecting their own judgment about which independent sources should be used for cross-checking.
Maybe the task is too dangerous to leave to just one organization: then another model would involve the existence of more than one option in the fact-checking field, with users being able to select one – or a bunch – to run on their devices.
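One way to combine several independently chosen fact-checkers is a simple voting scheme. This is a sketch under stated assumptions: the checker functions are hypothetical, and the majority rule with abstentions is just one possible aggregation policy:

```python
def combined_verdict(claim, checkers, threshold=0.5):
    """Query each user-selected checker and return an aggregate verdict.

    Each checker is a function mapping a claim to True (credible),
    False (debunked), or None (no opinion). Abstentions are ignored.
    """
    votes = [checker(claim) for checker in checkers]
    opinions = [vote for vote in votes if vote is not None]
    if not opinions:
        return "unverified"
    credible_fraction = sum(opinions) / len(opinions)
    return "credible" if credible_fraction >= threshold else "disputed"

# Two hypothetical checkers the user has opted into
checker_a = lambda claim: False if "miracle" in claim else None
checker_b = lambda claim: False if "miracle" in claim else True

print(combined_verdict("miracle cure found", [checker_a, checker_b]))   # disputed
print(combined_verdict("budget passed today", [checker_a, checker_b]))  # credible
```

Because the user picks which checker functions to plug in, no single corporation or government controls the verdict – which speaks directly to the objections listed above.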
5. Penalties for dangerous fakes
As well as trying to improve the identification of fake news, it’s important to change the incentives under which fake news is created and distributed. There are roles for ‘sticks’ (penalties) as well as ‘carrots’ (rewards).
Regarding the ‘sticks’, society already imposes penalties:
- When advertisements make misleading or unfounded claims
- When companies make misleading or unfounded claims in their financial statements
- When people make libelous claims about each other.
Fines or other punishments could be used in cases where people knowingly distribute misleading narratives, when the consequences involve clear harm (for example, a riot).
This proposal makes some people nervous, as they see it as an intrusion on freedom of expression, or a block on satire. They fear that governments would use these punishments to clamp down on statements that are embarrassing to them.
That’s why monitoring and prosecuting such cases needs to be done independently – by a police force and judiciary that operate at arm’s length from the government of the day.
This principle of separation of powers already applies to many other legal regulations, and could surely work for policing fake news.
Related, there’s a case for wider collection and publication of statistics of reliability. Just as hospitals, schools, and many other parts of society have statistics published about their performance, media organizations should receive the same scorecard.
In this way, it would be easy to know which media channels have a casual relationship with the truth, and which behave more cautiously. In this way, investment funds or other sources of financing could deny support to organizations whose trustworthiness ratings drop too low. This kind of market punishment would operate alongside the legal punishment that applies to more egregious cases.
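A published reliability scorecard could start from something as simple as a smoothed ratio of verified to debunked stories. This is a toy metric with invented parameters; real ratings would weight the severity of errors and the promptness of corrections:

```python
def trust_score(verified, debunked, corrected, smoothing=1):
    """Score a media outlet from counts of fact-check outcomes, in [0, 1].

    Laplace smoothing avoids extreme scores for tiny samples; issuing
    corrections wins back partial credit for debunked stories.
    """
    good = verified + 0.5 * corrected + smoothing
    total = verified + debunked + smoothing * 2
    return round(good / total, 2)

# Hypothetical outlets: (verified, debunked, corrected) story counts
outlets = {
    "Careful Gazette": (95, 3, 2),
    "Clickbait Daily": (10, 60, 1),
}
for name, (verified, debunked, corrected) in outlets.items():
    print(name, trust_score(verified, debunked, corrected))
```

Publishing such scores over time would give investors and advertisers the concrete signal they need to withdraw support from outlets whose ratings drop too low.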
6. A coalition for integrity
Some of the creators of fake news won’t be deterred by threats of legal punishment. They already operate beyond the reaches of the law, in overseas jurisdictions, or anonymously and secretly.
Nevertheless, there are still points of crossover, where new content is added into media channels. It is at these points where sanctions can be applied. Media organizations that are lax in monitoring the material they receive would then become liable for damage arising.
This will be hard to apply for communications systems such as Telegram, WhatsApp, and Signal, where content is encrypted from one end of a communication to the other. In such cases, the communications company doesn’t know what is being transmitted.
Indeed, it is via such closed communications systems that fake news often spreads these days, with Telegram a particularly bad offender.
There’s a case to be made for a coalition of every organization that values truthfulness and trustworthiness over the local benefits of spreading false information.
People who support this ‘coalition for integrity’ would share information about:
- Entry points used by fake news providers to try to evade detection
- Identification of fake news providers
- Ways in which providers of fake news are changing their methods – and how these new methods can be combated.
Regardless of differences in political or philosophical outlook among members of this coalition, they have a common interest in defending truthfulness versus deception. They should not allow their differences to hinder effective collaboration in support of that common purpose.
7. Making trust everyone’s business
In recent decades, a variety of new job titles have been created at the highest levels within companies and organizations, such as:
- Chief Design Officer
- Chief Innovation Officer
- Chief Quality Officer
- Chief Security Officer
None of these posts frees other members of the company from their responsibility for design, innovation, quality, or security. Responsibility for these values still falls on everyone in the organization as they go about their duties. Nevertheless, the new ‘chief’ provides a high-level focus on the topic.
It should be the same with a new set of ‘Chief Trust Officers’. These executives would find ways to keep reminding personnel about:
- The perils arising if the organization gains a reputation for being untrustworthy
- Methods and procedures to follow to build and maintain a trustworthy reputation for the organization
- Types of error that could result in dangerous false narratives being unwittingly transmitted
My assessment is that the organizations that appoint and support Chief Trust Officers (or equivalents) are the ones most likely to succeed in the turbulent times ahead.
8. Encouraging openness
To be clear, education often fails: people resist believing that they can be taken in by false information.
We like to think of ourselves as rational people, but a more accurate description is that we are a rationalizing species. We delight in finding ways to convince ourselves that it is fine to believe the things that we want to believe (even in the face of apparent evidence against these beliefs).
That’s why bombarding people with education often backfires. Rather than listening to these points, people can erect a strong shield of skepticism, as they prepare to lash out at would-be educators.
Indeed, we all know people who are remarkably clever, but they deploy their cleverness in support of profoundly unwise endeavors.
This state of affairs cannot be solved merely by pumping in more facts and arguments. Instead, different approaches are required, to encourage a greater openness of spirit.
One approach relies on the principle mentioned earlier, in which people pay more attention to suggestions from their close friends. Therefore, the best way to warn people they are about to fall for dangerous information is for them to be warned by people they already trust and respect.
Another approach is to find ways to put people in a better mood all round. When they have a compassionate, optimistic mindset, they’re more likely to listen carefully to warnings being raised – and less likely to swat away these warnings as an unwelcome annoyance.
It’s not enough to try to raise rational intelligence – rather, we must raise compassionate intelligence: an intelligence that seeks wisdom and value in interactions even with people previously regarded as a threat or enemy.
This is a different kind of education. Not an education in rationality, but rather an education in openness and compassion. It may involve music, meditation, spending time in nature, biofeedback, and selected mind-transforming substances. Of course, these have potential drawbacks as well as potential benefits, but since the upsides are so high, options need to be urgently explored.
9. A shared positive vision
Another factor that can predispose people to openness and collaboration, over closed-mindedness and stubborn tribal loyalties, is a credible path forward to a world with profound shared benefits.
When people anticipate an ongoing struggle, with zero-sum outcomes and continual scarcity of vital resources, it makes them mentally hostile and rigid.
Indeed, if they foresee such an ongoing conflict, they’ll be inclined to highlight any available information – true or fake – that shows their presumed enemies in a bad light. What matters to them in that moment is anything that might annoy, demoralize, or inflame these presumed enemies. They seize on fake news that does this, and also brings together their side: the set of people who share their sense of alienation and frustration with their enemies.
That is why the education campaign that I anticipate needs a roadmap to what I call a sustainable superabundance, in which everyone benefits. If this vision permeates both hearts and minds, it can inspire people to set and respect a higher standard of trustworthiness. Peddlers of fake news will discover, at that point, that people have lost interest in their untruths.
10. Collaborative intelligence
I do not claim that the nine steps above are likely to be sufficient to head off the coming wave of dangerous fake news.
Instead, I see them as a starting point, to at least buy us some time before the ravages of cleverer deep fakes run wild.
That extra time allows us to build a stronger collaborative intelligence, which draws on the insights and ideas of people throughout the coalition for integrity. These insights and ideas need time to be evolved and molded into practical solutions.
However, I anticipate not just a collaboration between human minds, but also a rich collaboration involving AI minds too.
Critically, AI systems aren’t just for ill-intentioned people to use to make their deep fakes more treacherous. Nor are they just something that can power real-time fact-checking, important though that is. Instead, they are tools to help us expand our thinking in multiple dimensions. When we use them with care, these systems can learn about our concerns regarding worse cases of deep fakes. They can consider multiple possibilities. Then they can offer us new suggestions to consider – ways probably different from any I’ve listed above.
That would be a striking example of beneficial artificial intelligence. It would see deep fakes defeated by deep benevolence – and by a coalition that integrates the best values of humans with the best insights of AIs.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
11 Comments
Very informative article!!?
joss
Very good idea
From the very beginning, all your developments have been good.
Very good article. With the advancement of AI, there is now a lot of fake news circulating, so we as readers have to be smarter in looking for reliable information.
I can't remember which book; it might be in one of Yuval Harari's cool history books, but I'm not sure. Dang, my memory's slipping. There's this theory that lying is a sign of smarts. People can fake anything to sway other people, even other animals.
One neat example I read is about certain monkey species having different alarm calls for various dangers. Like, they've got distinct yells for different predators, separate from their warnings for venomous snakes. And unless they actually spot these threats, the monkeys can't make those sounds, even if humans try to teach them. For these monkeys, yelling "snake" actually depends on the presence of a snake.
But us humans? We can scream "snake" and freak someone out just for kicks whenever we feel like it. The whole deal is, lying shows smarts 'cause it takes grasping abstract ideas, time and space, and having a solid memory to track the facts. It definitely needs some top-notch cognitive skills, a wild imagination, and an understanding that reality isn't the only option.
Yeah, David, your take that humans are fibbers isn't offensive at all! And your point about dealing with fake news before it gets worse is spot-on. I could share a bunch of cool history stuff that breaks down how powerful false narratives were during colonization. Actually, fake news and false narratives are the main tools that wrecked many black communities, both now and in the past.
Like, take Belgium winning a major chiefdom in colonial-day Congo. While Belgian soldiers had real fights with some chiefdoms, they didn't even shoot a single bullet in one big chiefdom. The leader of the colonial soldiers just used a fake story. When the Belgian colonial soldiers/slave traders reached the capital of this particular chiefdom, they knew a solar eclipse was happening the next day. So, this sharp bunch went to the chief and his council and spun a tale that their king was the actual son of the sun or a sun god. They said if the chief didn't surrender, they'd block out the sun the next day. The chief and his council, being curious, fell for it and agreed to test it out. Next day, when the eclipse happened, the chief and his council, along with the rest of the common folks, panicked and begged to accept King Leopold's terms. This chiefdom then played a big role in helping Belgium conquer other chiefdoms as allies. The rest is history: Belgium did some unspeakable evil stuff in Congo, shaping the image of black people as subhuman, and we are still paying the bill today!
There's another cool story from Sudan a decade later. The Mahdists, a liberation movement led by a Muslim leader called the Mahdi, rebelled against their British colonizers, kicked them out, killed a bunch of soldiers and officers, including the infamous Gordon, and captured a couple of high-ranking officers along with hundreds of war prisoners. Guess what? One of the prisoners remembered the Belgian trick and started claiming he'd make the sun disappear unless the Mahdists freed them and submitted to the British Empire. When the Mahdists heard this, they were like, "Sure, if you can pull that off, we'll agree." Next day, the British guy tried to pull off some stunts for over three hours, amusing the Sudanese, but the sun didn't vanish. Finally, the Sudanese rebels got tired of the show and stopped it. Their leader then said, "Kid, I don't know what you're trying to pull, but if you're talking about the eclipse, you messed up your calendar 'cause it's happening in two days!"
If you dig into these two historical events, you'll see how much of a diff writing and education make. Congo, without a writing system, couldn't effectively record history or predict events like solar eclipses. Meanwhile, Sudan, with exposure to Arabic writing, had a solid system for recording history and a modernized calendar that tracked eclipses.
Let's move to the history of present-day "Latinos", who in truth have nothing to do with Latin. When the Spaniards conquered South America, they spread fake stories and claimed to be actual gods. And you know how they wiped out all the South American civilizations, including the last of the Mayans. History has recorded how they wiped out millions of Native South Americans and molded a new nation we call Latinos today. Yes, fake news and false narratives have always been powerful weapons of mass destruction.
Same deal today. I can't stress enough how much fake news and false narratives are wrecking black Americans. Yeah, deep fakes powered by generative AI will be a huge danger to our future, no matter the race.
David's right, man. Education, education, education! Thanks, David.
Fantastic, David. This is a very thorough article on this issue. I urge readers to bear with the length. I agree 100% that deep fakes in the age of generative AI will be a huge problem. People are only considering fake news. Well, that is half of it. David, you have also focused on the fake content part, whether used by advertisers, political parties, governments, etc.
There is a huge risk when it comes to using deep fakes. Imagine how hard it will be for my old mother, father, or even brothers if someone sends a fake photo of me tied to a chair, demanding an immediate wire transfer for my release. From ransom to blackmail to all the other imagined and unimagined but sure-to-come scam techniques, deep fakes can and will be applied in the scam industry, and for me, that is equally concerning.
As a footnote, when I discovered Mindplex, the reputation system was pitched as one method to fight deep fakes. What happened to its phase two, guys? Hopefully, you are cooking something on it. The roadmap says Mindplex will build a fallacy identifier and fact-checking AI. In fact, the promise of a neutral but powerful Reputation AI was one of the reasons I opened an account on Mindplex.
Fantastic article David! See also my extensive set of videos on this subject: Fighting Disinformation
https://youtu.be/ctrqpgwk8OA
I will take a look at it soon. I am interested to see what you think about this subject as well. I think this is also related to the BGI concept.