Whisk anyone from the turn of the millennium to the present day – before deepfakes, ChatGPT and LLMs were part of the world’s lexicon – and they would find themselves lacking an important set of survival skills: defences against their phone beeping to tell them, “You have a virus on your computer!” or “There is a security issue on your bank account, click here to fix it!”
Scamming is all too commonplace in our communication-overloaded world and, with AI, it’s getting scarier than ever – and ever more difficult to distinguish reality from fiction.
The History of Scamming
Scams are nothing new. Parlour tricks to deprive honest people of their money are as old as humanity itself. From thimblerig (the cups and ball trick) to Hegestratos to Ponzi, deceiving others for your gain is one of humanity’s saddest skills. Homo sapiens’ ability to trick and to swindle has been theorised as the reason for our ultimate ascension over the Neanderthals. Neanderthals were stronger, faster and smarter than us, but they fell for it every time we cried, ‘Look, a mammoth!’ before burying our flint axe in their back.
Every new technology has been turned into a new way to deceive. When mass telecommunication was first developed, it wasn’t long before auto-diallers were being used for fraud. Such scams persist today, with the ‘Can You Hear Me?’ scam causing widespread misery as late as 2020.
Phishing emails appeared almost as soon as the internet did. Early online users often had to reformat their computers multiple times after a vicious virus ransacked their data – before we collectively learned not to trust everything we read online or click on anything whose source we don’t know. Scams were an irritant to most, but an unlucky few were bankrupted by scammers draining their accounts entirely.
Why We’re Not as Scam-Resistant as We Think
We like to think we’ve evolved beyond being deceived so easily, but the numbers tell a different story. In 2022, reported US consumer losses to fraud totalled $8.8 billion, a 30% increase on the year before. Our online hygiene isn’t getting better, it’s getting worse. And this is before we even factor in crypto where, with transactions being irreversible, there are fewer safeguards in place to prevent losses to fraud. Even power users with burgeoning wallets have their wealth drained with just a few misguided clicks.
Why? Are we all just suckers? No, we are not. We’re wiser, more educated, more perceptive and more technologically integrated than at any previous time in history. So what is going on?
First, we are lonelier than ever, more distanced from each other in a more deracinated society. Our individual experience is more atomised and more online – and thus more prone to blind spots. Second, information overload and the increasingly simulated world make it harder to distinguish the real from the unreal – even for the sophisticated. The sheer amount of processing we have to do when traversing any public forum is beyond anything previous societies ever faced. Voices and opinions, truth and lies from all sides surround us like a fog, making it hard to see the straight-and-narrow road of consensus society (a fact illustrated by the bitter political polarisation we see today).
How AI Is Used to Scam Everyone
And thirdly, but most importantly, scams are getting really good. Scary good. A large professional industry has now formed around scamming people out of their money. It’s not just shysters and the morally bankrupt stealing from grandma. It’s state-sponsored aggression by malevolent actors targeting enemy citizenry. Certain countries have teams devoted to scamming the rich citizens of their rivals in order to keep their economy afloat.
And they have the tools to do it. There is a theory in robotics of the ‘uncanny valley’, the unheimlich of gothic literature: that which is so close to real it unnerves you. Yet as generative AI produces images, video and speech ever closer to the real thing, and LLMs produce, orchestrate and distribute text with ever greater verisimilitude to normal human expression, this uncanny valley fills in – and it becomes harder to take anyone on trust, even the ones you love.
Stories of scammers emulating loved ones’ voices to make desperate phone calls pleading for help and money are truly chilling. If your mother or brother or partner or child rang you screaming, would you think twice? What if they FaceTimed you, a fully deepfaked likeness in distress? The horror would have you sending money before you analysed the render too closely.
Even if such depraved tactics are not used, with AI tools there is no need for a scammer to spend hours, days, or weeks building trust with a target before finally getting the information they need. It’s point-and-click fraud. Stories of teenagers being blackmailed with deepfaked nudes of themselves are just the latest in a litany of horrors that some seem determined to create with our powerful new technologies.
How to Stay Safe Online
We live, then, in scary times. It is the responsibility of everyone to understand what kind of dark valley our new AI tools can dig, and it is the responsibility of governments to educate and inform their citizenry about the dangers they face. Despite this black mirror, there is hope. AI sentinels protecting each individual and helping them discern the real from the fake, more powerful forms of self-custody of wealth, better banking infrastructure to protect against fraud, and a wider understanding of the brave new world we live in can hopefully, in time, stop scammers in their tracks.
Until then, stay safe out there. Trust me when I tell you to not trust anyone.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.