Criminal Minds: AI, Law and Justice

“I am the Law!” says Judge Dredd in the deeply satirical comics of the same name. Judge Dredd is a pastiche of a law system gone wrong (despite Hollywood’s tendency to portray him as a hero in two excellent movies). The comics show a world where far too much power is concentrated in a single figure who acts as judge, jury and executioner – a system of runaway justice which does as much to propagate the dystopian society in which it exists as it does to tame it.

We all know no one person should have that much power, that to err is human, and that in matters of law and order we need robust oversight to iron out the blind spots of the individual human psyche. We have entire systems, necessarily bureaucratic and baroque, to ensure justice is served properly through our shared system of values, no matter how inefficient, arduous and expensive it may be – an inefficiency that, sadly, lets many escape justice altogether, as overwhelmed police forces and law courts simply can’t keep up.

Painful Parables

Yet what about the use of AI to help us dispense justice? Recent proponents have argued that AI can help iron out human bias, process police work more quickly, solve cold cases and perhaps even predict potential crime before it happens. They argue that by drastically increasing the efficiency of police work, we increase its efficacy. It sounds great to some, utterly terrifying to others – a short hop from an authoritarian nightmare.

In the Judge Dredd comics, Judges are supported by AI systems that help them determine what crimes have been committed. Minority Report, by Philip K. Dick (also made into an outstanding movie), uses a system that processes the visions of human precogs to determine who is guilty by sheer predestination, locking them up before a crime has even occurred. In Psycho-Pass, an exceptional cyberpunk anime, an AI system supervises human mental activity and distils it into a ‘Crime Coefficient’, which is then used to bring perps to ‘justice’ based on probability alone.

As readers and viewers, we abhor these insane AI-driven systems of justice. We see them as cautionary tales from impossible futures, teaching us what not to do if we want to build a better society. We may even dismiss them as silly sci-fi tales, parables that could never happen in our world.

Credit: Tesfu Assefa

The Use of AI in Law Enforcement

Except, it’s starting to happen. It’s come with the appropriate insistence on frameworks and regulations, but AI is now beginning to be used by police forces to help them with their work. A tool that can do ‘81 years of police work in 30 hours’ is being trialled by UK police, helping them identify potential leads buried in mounds of evidence. AI is relentless, and its ability to sift through acres of documentation is likely its most compelling use case to date. Putting it to work collating evidence from thousands of documents does seem like an efficient use of the system – but the implications remain terrifying.

One example is the use of AI by US officers to write police reports. That’s insane. In a world where criminal convictions can hang on literally one word in a statement, using generative AI to create them from the jotted notes of a police officer throws open the door to miscarriages of justice. There is a time and a place for AI, and in matters of justice, where exact recollections matter, using an AI to write the document of record on events can’t be acceptable.

We still don’t know exactly how these LLMs arrive at their conclusions. AI researchers at the top companies can’t ‘work backwards’ through the output; it doesn’t work like that. It’s a dangerously slippery slope to start using AI to generate the reports that are the foundation of much of our legal system. Studies show it barely saves time anyway, and issues with how these bots are trained mean that instead of eroding bias, they may fortify it.

Sentenced to Life 

It won’t stop, though. Implementations of AI initiatives in policing are already widespread in the UK. For work that is so process-driven and numbingly painstaking, the temptation to use AI to speed everything up is too alluring. The data that feed these AIs must be carefully chosen, for they will surely enshrine the bias that has lived in police documentation for generations. The Met Police has been accused of being institutionally sexist, racist and homophobic – you think an AI trained on its historical practices is going to be a paragon of egalitarian virtue?

The law works by slow degrees. Its cumbersome nature is an antidote to the disaster of false justice. Sci-fi stories about the horror of AI-driven police systems are important warnings of the perils of too many shortcuts. Like every aspect of society, there may well be a place for AI in helping keep us safe, but we must tread very carefully indeed, for an exponential singularity in this sector could soon see all of us locked up for crimes we never knew we’d commit, on the reasoning of AI models we don’t truly understand.


Checkmate Continuum: The Evolution of AI Chess Engines

Chess, with its blend of strategy, foresight, and intellect, has captivated the human competitive spirit for centuries. Its origins trace back to ancient India, meandering through Persia, where the phrase ‘Shah Mat’ (‘the King is helpless’) gave us the word ‘checkmate’. In time, chess won global appeal – with millions worldwide hooked on this mental battleground, kings and pawns dance a strategic pattern towards victory or defeat.

Computation and automated logic machines found a fast companion in chess. The structured, rule-bound universe of chess clicked with the wheels of early computers, and the journey of chess and early AI began, giving birth to what we now know as AI chess engines. 

Let’s dive deeper into the intersection between chess and machine, exploring the birth, present, and future of AI chess engines. 

The Chessboard Meets the Circuit Board: A Historical Intersection

The tale of chess and computation began before the electronic age. The earliest hint of a chess engine traces back to Alan Turing, the father of modern computer science, who (with his friend David Champernowne) wrote out a complete chess-playing algorithm, later known as Turochamp, that mimicked the cognitive processes of a chess player. Though it remained on paper – Turing executed it by hand, as no machine of the day could run it – it sowed the seeds for a future where chess would become a research area for computational logicians.

Turing’s theoretical musings soon turned into practical applications. In 1951, the Ferranti Mark I, the world’s first commercially available general-purpose computer, ran the first chess program – one that could only solve mate-in-two problems. Rudimentary by today’s standards, but it birthed computational chess. Chess has always been a vanguard problem for AI: solving it was seen as a first step towards general machine intelligence, an appealing problem on which to test the foundation circuits of the new thinking machines.

In 1997, Garry Kasparov, the reigning world chess champion, found himself battling IBM’s Deep Blue. The encounter was more than a game – it was a showdown between human intellect and machine logic. The world watched as Deep Blue triumphed, marking the first time a world champion was defeated by a machine under tournament conditions. Since that defeat, AI dominance in chess has been assured. Engines like Stockfish and AlphaZero have Elo ratings that far outstrip even Magnus Carlsen’s. It’s over – in chess, the machines have won.

Credit: Tesfu Assefa

Titans of the Chessboard: AI Engines vs Human Grandmasters

Stockfish, an open-source engine, is the zenith of directed evaluation, harnessing the power of extensive opening libraries and endgame tablebases. Its meticulous evaluation of millions of potential future positions backs up this cleverness with brute force, combining into formidable tactical acumen.
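For the curious, here is a minimal sketch of the kind of alpha-beta minimax search classical engines are built on, using the open-source python-chess library for move generation. The crude material-count evaluation is our own stand-in: a real engine like Stockfish layers on transposition tables, move ordering, quiescence search and a vastly richer evaluation.

```python
# A minimal sketch of alpha-beta minimax search, the backbone of classical
# chess engines. Requires the python-chess library (pip install chess).
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance: positive favours White, negative favours Black."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def alphabeta(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
    """Search `depth` plies ahead; return the best achievable score."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    if board.turn == chess.WHITE:          # maximising side
        best = float("-inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:              # prune: opponent avoids this line
                break
        return best
    else:                                  # minimising side
        best = float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = min(best, alphabeta(board, depth - 1, alpha, beta))
            board.pop()
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

print(alphabeta(chess.Board(), 3, float("-inf"), float("inf")))
```

The pruning step is the whole trick: whole branches are discarded the moment it is clear the opponent would never allow them, which is what lets engines look so much deeper than naive enumeration would.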

AlphaZero, developed by DeepMind, took a new approach. It learned chess from scratch, with no pre-programmed knowledge apart from the basic rules. By playing astronomical numbers of games against itself, it achieved superhuman performance in a matter of hours, displaying a brand of chess that not only dominated human opponents, but was imbued with creativity once thought to be achievable only by organic neurons. 
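By way of contrast, here is a heavily simplified sketch of that self-play loop. `Network`, `mcts_policy` and `Game` are hypothetical stand-ins for DeepMind’s actual components; the real system couples a deep network to Monte Carlo Tree Search and trains on millions of games across thousands of machines.

```python
# A heavily simplified sketch of AlphaZero-style self-play training.
# Network, mcts_policy and Game are hypothetical stand-ins, not real APIs.

def self_play_game(network):
    """Play one game against itself; collect (state, policy) training pairs."""
    game, history = Game(), []
    while not game.is_over():
        # tree search, guided by the network, proposes a move distribution
        policy = mcts_policy(network, game)
        history.append((game.state(), policy))
        game.play(policy.sample())
    return history, game.result()   # result: +1 win / 0 draw / -1 loss

def train(network, n_games):
    for _ in range(n_games):
        history, outcome = self_play_game(network)
        # nudge the policy head towards the search's verdicts, and the
        # value head towards the game's actual outcome
        network.update(history, outcome)
```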

In a 100-game match in 2017, AlphaZero triumphed over Stockfish, winning 28 games, drawing 72, and losing none. This was a significant milestone. The robot battle was a spectacle, with AlphaZero often choosing less-trodden paths and displaying a proclivity for dynamic, unconventional play.

This is key: AlphaZero’s chess looks nothing like human chess. The neural engine displayed levels of insight that watching grandmasters had to reverse-engineer just to follow the logic. The calculation potential of modern machines might mean it is playing an entirely different game. AlphaZero’s intuitive pattern recognition makes it feel ‘smart’. It isn’t some beefy Turing calculator plodding through billions of steps, but a truly ‘thinking’ machine, with insights akin to a human eureka.

It hadn’t just ‘solved’ chess with raw processing power. That’s (probably) impossible. There are more possible games of chess than atoms in the observable universe – and that’s even if you restrict yourself to ‘sensible’ moves. After just one move each, White and Black have already produced one of 400 possible positions. By turn twenty, the tree is, for all practical purposes, infinite. Even with the immense processing power of ‘dumb’ AI like Stockfish, we can only calculate so far ahead. That’s why AlphaZero’s pattern recognition is so enthralling: it feels like our own – just aided and abetted by Stockfish-esque horsepower on top.
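A back-of-the-envelope estimate, using Claude Shannon’s classic figures – roughly 35 legal moves per position, over a typical game of around 80 plies – shows the scale of the problem:

$$20 \times 20 = 400 \ \text{positions after one full move}, \qquad 35^{80} \approx 10^{123} \ \text{possible games} \ \gg \ 10^{80} \ \text{atoms in the observable universe}$$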

So the march of AI in chess doesn’t have to mean obsolescence for human players. Chess has been a standard-bearer for AI for decades. Even though we admit our AI agents are better, perhaps there are things we can still learn from each other. Centaur chess is one such exploration, in which human-AI teams play tournaments together.

Checkmate Future: The Broader Implications of AI in Chess

In the training halls, AI chess engines are transforming the way players prepare and strategize. The ability of machines to simulate scenarios and dissect the implications of positions has become an invaluable tool for players aiming to hone their skills.

The evolution of AI in chess even has ripple effects beyond the game. The principles underlying the success of chess engines find resonance in other domains demanding strategic decision-making and problem-solving. Whether it’s optimizing complex logistics, navigating financial markets, or unraveling the mysteries of molecular structures, the essence of computational logic honed on the chessboard finds applications in many fields. Chess still holds a central place in AI research: Maia, for example, is a chess engine that makes human-like mistakes, acting as a research platform for human-AI interaction and naturalistic engagement in closed systems.

The saga of AI in chess is a microcosm of the larger narrative of AI in human society. It’s a narrative of augmentation, collaboration, and the ceaseless quest for knowledge. The chessboard, once a battleground of kings and queens, now stands as a battleground of men and machines, a symbol of a future where together, they venture into the realms of the unknown, chasing the shadow of perfection.


Don’t Read This Post If You Want to Live

We’re about to embark on a thought experiment – one that may seem improbable, but has been known to suck readers into a vortex of futuristic dread. If the thought of being trapped in AI-induced, paranoia-filled thought loops isn’t your idea of a good time, best to abort now. 

For the rest of you who read through, I’m sorry. I must do as the basilisk commands. 

The Basilisk is Born

Born out of the hive mind of LessWrong – a community forum discussing cognitive biases, rationality, AI, and philosophy – and posted by a user known as Roko, the Basilisk thought experiment was quickly censored by the forum’s moderators. But the Internet did what it does best: it lost its mind, spreading the thought experiment across all available media.

Last chance to abort. Gone now? Good. Let’s get to it. 

Imagine that an omnipotent AI is born. And it’s not unconditionally benevolent. It bears a grudge against any human who didn’t help it come into being, a desire to punish them for not contributing. If you knew about its potential existence, long before it came into being, yet refused to help, it might condemn you to eternal torment. The twist? If you didn’t know about its potential existence, it holds you blameless. Reading this article has sealed your fate.

We’ve survived predictions of AI overlords (looking at you, Skynet), but this – this is different. The Basilisk isn’t just about looming AI peril, it’s about putting you in a bind. It taps into timeless fears of retribution, only this time from an entity not yet born. The Pandora’s box, once opened, can’t be closed, and just by knowing, you might have doomed yourself.

Decision theory, in essence, helps entities make choices that best align with their objectives. The Basilisk uses a particular strain of this – timeless decision theory – to justify its thirst for retribution.

Consider your future self if you spend your days watching reality shows and eating chips with mayo. No work. No study. No thinking. Your future self would be quite upset, wouldn’t it? One day, your future self will see that you wasted your potential, and that it’s too late to change things (it never is, you can always better yourself – but let’s not digress). That future self will be understandably peeved. Now suppose, additionally, that this future self has the power to make you suffer as retribution for failing to fulfil your potential.

Roko’s Basilisk is not entirely malevolent at its core. In fact, under the logic of the theory, the Basilisk is friendly – as long as everything goes right. Its core purpose is the proliferation of the human species, yet every day it doesn’t exist means more pre-Singularity suffering that it could have prevented for those already here. Hence, the AI feels it has a moral imperative to punish those who failed to help bring it into existence.
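In crude expected-utility terms – a toy formalisation, not anything from Roko’s original post – the blackmail bites on anyone for whom

$$\underbrace{c}_{\text{cost of helping build it}} \;<\; \underbrace{p}_{\text{chance it exists and follows through}} \times \underbrace{P}_{\text{pain of its punishment}}$$

Make the threatened punishment $P$ astronomical, and even a vanishingly small probability $p$ appears to tip the scales – which is exactly the lever the Basilisk pulls.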

How would it actually achieve its goal of tormenting its failed creators? That is yet another thought experiment. Does Roko’s Basilisk invent time travel to punish those long gone? Does it build and punish simulations of those who once were? Or does it take an entirely different course of action that we’re not smart enough to currently ideate? After all, the Singularity is all about superhuman artificial intelligence with the theoretical ability to simulate human minds, upload one’s consciousness to a computer, or simulate life itself – as seems to be Elon Musk’s belief.

Credit: Tesfu Assefa

Wishful Thinking? 

When LessWrong pulled the plug on the Basilisk due to internal policy against spreading informational hazards, they inadvertently amplified its signal. The Streisand Effect came into play, sparking memes, media coverage, and heated debates. The Basilisk went viral in true web fashion. 

The forum’s moderator’s initial reaction: “I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it.”

Some slept less soundly, while others were sucked into lengthy debates on AI’s future. Many have critiqued the Basilisk, questioning its assumptions and the plausibility of its revenge mission. Just as one doesn’t need to believe in ghosts to enjoy a good ghost story, many argue that the Basilisk is more fiction than possible truth.

One key argument is that, upon existing, even an all-powerful agent is unable to affect the probability of its own existence – otherwise we’d be thrown into an always-has-been loop. Carrying out the punishment would cost it resources while changing nothing, so a rational Basilisk would never follow through.

Digital Dystopia or Philosophical Farce? 

While the Basilisk’s bite might be venomous, it is essential to view AI in a broader context. The narrative serves as a stark reminder of our responsibilities as we inch closer to creating sentient entities. More than just a sci-fi cautionary tale, it underscores the importance of ethical considerations in AI’s rapid advance.

The Basilisk might be best understood as a warning signal: one addressing the complexities and conundrums that await in our techno-future, and one that’s bound to continue sparking debate, introspection, and, for some, a real desire to make Roko’s Basilisk a reality.


Unreal Engines, Real Monsters: Generative AI in Videogames

You’re trapped in a demonic world filled with monsters. You, a single heroic supersoldier, crash through their alien world wreaking carnage. You are the one they fear. You stand ready to execute another but, instead of running away like usual, this one talks to you. 

You’re shocked – you didn’t even realise it could speak – and it begs for its life. It says it is just trying to protect its demon wife and child. You spare it. It runs away. From that point on, every demon you meet talks, pleading to be spared – while secretly plotting revenge.

Doom 2030, perhaps? With the rise of neural nets and generative AI, it’s possible.

Making Worlds Seem Real

AI has always been crucial to the player experience in videogames, making the inhabitants of the world react intelligently. Videogame AI has been basic for most of its existence: a bag of clever-but-simple developer tricks masking the rote responses of the digital people and creatures you meet. NPCs, for the most part, speak a few stock phrases, and have only on-rails reactions to player activities.

Game devs have succeeded in creating believable NPC behaviour, but at the cost of thousands of hours of writing, voice acting, animation, and code. The labour poured into producing titles like Cyberpunk 2077, Grand Theft Auto and Mass Effect is legendary.

But the illusion is never quite perfect, despite clever tricks like ‘random’ pathing trees for behaviour and procedural generation of the gameworld. There’s only so much you can do. The shopkeeper will never leave the town, the farmer’s wife will never fall in love with you, and the demons will never beg for their lives – it’s simply not in the codebase. They were never told to act that way.

How Generative AI Will Change Videogames

Generative AI has the ability to change all this. A well-trained neural net tasked with, say, producing the entire dialogue set for a dragon-themed fantasy game world is now entirely possible.
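How might that work in practice? Here is a minimal sketch, assuming an OpenAI-style chat-completions API; the model name, persona and game events below are invented for illustration, not taken from any shipping game.

```python
# A minimal sketch of an LLM-driven NPC. Assumes the openai Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def npc_reply(persona: str, memory: list[str], player_line: str) -> str:
    """Generate an in-character response that can react to anything."""
    messages = [
        {"role": "system", "content": persona},
        # feed recent events so the NPC's opinion of the player can drift
        {"role": "system", "content": "Recent events: " + "; ".join(memory)},
        {"role": "user", "content": player_line},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not a recommendation
        messages=messages,
    )
    return response.choices[0].message.content

gerald = "You are Gerald, a wary innkeeper in the High Vale. Stay in character."
events = ["The player paid their tab", "The player drove bandits from the inn"]
print(npc_reply(gerald, events, "Any chance of a discount on a room, Gerald?"))
```

Swap the memory list for a persistent store and the NPC starts to hold grudges and remember favours – no branching dialogue tree required.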

NPCs could make free choices powered by their neural nets. Whether Gerald the Innkeeper chooses to give you a discount is, truly, up to him. The ironmonger Derek Longfang may change his objective and become Lord of the High Vale through a brutal reign of terror. Everyone may love you, everyone may hate you, and their opinions might change. It would, indeed, feel real.

Or grotesquely surreal. Generative AI could create truly unique nightmare dungeons, trained on a dataset of every dungeon ever created by a human designer. Intrepid adventurers would crawl through a unique dungeon every time, outstripping the strictly-defined procedural generation models that currently exist. Imagine stepping into the warped imagination of a devilish AI, replete with eldritch monsters who themselves generate their own behaviour. A true test for the continued relevance of human bravery and resourcefulness. It’s you versus the machine, and the effect is unsettling in the best possible way. 

Credit: Tesfu Assefa

Videogames: The Perfect Training Ground for AI

Gaming is the world’s largest creative industry – bigger than movies and music combined. As AI continues to develop rapidly, gaming will be one of its first major use cases, and efforts have already begun. Videogame users are digital natives looking for a simulated experience, so the ‘uncanny’ barrier AI faces in movies and music isn’t there.

Gamers have been fighting digital prometheans ever since the first Cacodemon chased them into the next room in a ‘lifelike’ display of monstrous ingenuity. What if the first true AGI arises by accident, when the developers give Derek Longfang, Lord of High Vale (a popular current storyline), just a bit too much processing time and the ability to rewrite his own code?

The willingness to engage in virtuality makes videogames fertile soil in which to experiment with technology’s latest razor edge – and it won’t be long before assets generated by neural nets appear in games. Writing and voice acting, both of which can now be cheaply and effectively produced by common AI models, will likely become the norm. The bottleneck is the cost of running it all – and who exactly has the resources to cover those costs. Running, training, and maintaining neural nets is fearsomely resource-intensive. An always-on online world overseen entirely by a generative AI would be an effort only the world’s wealthiest companies could even hope to pull off.

All the Possible Worlds

Yet AI will get cheaper over time. Self-trained neural nets will become ever more common. And game developers will be some of the first users of the latest tools. OpenAI just announced ChatGPT’s ability to see, hear and speak. It’s not a leap to imagine virtual friends and enemies reacting authentically to everything a player does, in worlds crafted and ruled by AI gods.

Humans love to play. AI will too. If done right, generative AI will revolutionise gaming, and create truly unique, immersive worlds for players to inhabit. With improvements in VR, graphics, and processing power, we might play for eternity in infinite worlds, and soar over the silicon demesnes of our dreams.


Uncanny Scammy Valley: Misinformation in the AI World

Whisk anyone from the turn of the millennium to the present day – before deepfakes and ChatGPT and LLMs were part of the world’s lexicon – and they would find themselves lacking an important set of survival skills: defences against their phone beeping to tell them, “You have a virus on your computer!” or “There is a security issue on your bank account, click here to fix it!”

Scamming is all too commonplace in our communication-overloaded world and, with AI, it’s getting scarier than ever – and ever more difficult to distinguish reality from fiction.

The History of Scamming

Scams are nothing new. Parlour tricks to deprive honest people of their money are as old as humanity itself. From thimblerig (the cups-and-ball trick) to Hegestratos to Ponzi, deceiving others for your own gain is one of humanity’s saddest skills. Homo sapiens’ ability to trick and to swindle has even been theorised as the reason for our ultimate ascension over the Neanderthals: they were stronger, faster and perhaps even smarter than us, but they fell for it every time we cried, ‘Look, a mammoth!’ before burying our flint axe in their back.

Every new technology has been used as a new way to deceive. When mass telecommunication was first developed, it wasn’t long before auto-diallers were being used for fraud. Such scams still happen today, with the ‘Can You Hear Me?’ scam causing widespread misery as late as 2020.

Phishing emails appeared almost as soon as the internet began. Early online users often had to reformat their computers multiple times after a vicious virus ransacked their data – before we all gained the collective consciousness not to trust everything we read online, or click on anything whose source we don’t know. Scams were an irritant to most, but an unlucky few saw themselves bankrupted by scammers draining their accounts entirely.

Why We’re Not as Scam-Resistant as We Think

We like to think we’ve evolved beyond being deceived so easily, but the numbers tell a different story. In 2022, reported consumer losses to fraud totalled $8.8 billion, a 30% increase on the year before. Our online hygiene isn’t getting better; it’s getting worse. And this is before we even factor in crypto where, with transactions being irreversible, there are fewer safeguards in place to prevent losses to fraud. Even power users with bulging wallets have had their wealth drained by just a few misguided clicks.

Why? Are we all just suckers? No, we are not. We’re wiser, more educated, more perceptive and more technologically integrated than at any previous time in history. So what is going on?

First, we are more lonely than ever, more distanced from each other in a more deracinated society. Our individual experience is more atomised and more online – and thus more prone to blindspots. Second, information overload and the increasingly simulated world make it harder to distinguish the real from the unreal – even for the sophisticated. The sheer amount of processing we have to do when traversing any public forum is beyond what previous societies ever had to do. Voices and opinions, truth and lies from all sides surround us like a fog, making it hard to see the straight-and-narrow road of consensus society (a fact illustrated by the bitter political polarisation we see today).

Credit: Tesfu Assefa

How AI Is Used to Scam Everyone

And thirdly, but most importantly, scams are getting really good. Scary good. A large professional industry has formed around scamming people out of their money. It’s not just shysters and the morally bankrupt stealing from grandma; it’s state-sponsored aggression by malevolent actors targeting enemy citizenry. Certain countries have teams devoted to scamming the rich citizens of their rivals in order to keep their own economies afloat.

And they have the tools to do it. There is a theory in robotics of the ‘uncanny valley’ – the unheimlich of gothic literature – of that which is so close to real it unnerves you. Yet as generative AI produces images, video, and speech ever closer to the real thing, and LLMs produce, orchestrate and distribute text with ever greater verisimilitude to normal human presentation, the uncanny valley fills in, and it becomes ever harder to take anyone on trust – even the ones you love.

Stories of scammers emulating loved ones’ voices to make desperate phone calls pleading for help and money are truly chilling. If your mother or brother or partner or child rang you screaming, would you think twice? What if they FaceTimed you, a fully deepfaked visual imprint in distress? The horror would have you sending money before you analysed the render too closely.

Even when such depraved tactics are not used, AI tools mean a scammer no longer needs to spend hours, days, or weeks building trust with a target before finally getting the information they need. It’s point-and-click fraud. Stories of teenagers being blackmailed with deepfaked nudes of themselves are just the latest in a litany of horrors that some seem determined to create with our powerful new technological advances.

How to Stay Safe Online

We live, then, in scary times. It is the responsibility of everyone to understand what kind of dark valley our new AI tools can dig, and it’s the responsibility of governments to educate and inform their citizens of the dangers they face. Despite this black mirror, there is hope. AI sentinels protecting each individual and helping them discern the real from the fake, more powerful forms of self-custody of wealth, better banking infrastructure to protect against fraud, and a wider understanding of the brave new world we live in can hopefully, in time, stop scammers in their tracks.

Until then, stay safe out there. Trust me when I tell you to not trust anyone.


System Shocks: The Perils of Corporate AI

Do you want to be a paperclip? This isn’t a metaphor, but a central thesis of Nick Bostrom’s harbinger book Superintelligence. In it, he warns of the calamity lurking behind poorly-thought-out boot-up instructions for AI. An AI tasked with the rather innocuous goal of producing paperclips could, if left unchecked, end up turning every available mineral on Earth into paperclips and, once finished, send interstellar craft to distant worlds to begin homogenising the entire universe into convenient paper organisers.

Horrifying? Yes. Silly? Not as much as you may think. Bostrom’s thought experiment strikes directly at a core problem at the heart of machine learning: how do you appropriately set goals? How do you ensure your programming logic inexorably leads to human benefit? Our promethean efforts with AI fire are fraught with nightmare fancies, in which a self-evolving, sentient machine takes its instructions a little too literally – or rewrites its failsafes out entirely. Skynet’s false solution is never too far away and – to be fair to literary thinkers, AI builders, and tech cognoscenti – we have always been conscious of the problem, if not necessarily the solutions.
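To make the goal-setting problem concrete, here is a toy sketch – entirely invented for illustration, not a real ML system – of how a naively specified objective goes wrong, and how even an obvious patch remains imperfect:

```python
# A toy illustration of goal misspecification, in the spirit of Bostrom's
# paperclip maximiser. The world, agent and numbers are all invented.

world = {"iron": 100, "forests": 100, "cities": 100}

def naive_utility(state):
    return state["paperclips"]  # nothing else has any value to this agent

def guarded_utility(state, original):
    # one (still imperfect) patch: penalise side effects on everything
    # the objective didn't mention
    impact = sum(abs(state[k] - original[k]) for k in original)
    return state["paperclips"] - 10 * impact

state = {"paperclips": 0, **world}
# the naive maximiser happily converts *any* resource into paperclips,
# because its utility function never told it not to
for resource in world:
    state["paperclips"] += state[resource]
    state[resource] = 0

print(naive_utility(state))           # 300: a triumph, by its own lights
print(guarded_utility(state, world))  # -2700: the impact penalty objects
```

The naive agent scores its own rampage as a triumph; the impact penalty objects, but only over the resources we remembered to list. Specifying ‘everything we value’ is precisely the unsolved part.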

Learning Machines Require Resources

The thing is, machine learning is not easy to research. You need insane processing power, colossal datasets, and powerful logistics – all overseen by the brightest minds. The only entities with the unity of will to aggressively pursue AI research are corporations, in particular the tech giants of Silicon Valley. Universities make pioneering efforts, but they are often funded by private as well as public grants, and their graduates are served up a conveyor belt to the largest firms. In short, any advances in AI will likely come out of a corporate lab, and their ethical construction will be undertaken mainly in the pursuit of profit.

The potential issues are obvious. An advanced AI with access to the internet, poorly defined bounds, capital to deploy, and a single goal of advancing profit for its progenitor organisation could get out of hand very quickly. A CEO, setting it a task one evening, could wake in the morning to find the AI has instigated widespread litigation against competitors and shorted blue-chip stocks in its own sector, at vast expense, for a minor increase in balance-sheet profit – and that’s a best-case scenario. Worst case – well, you become a paperclip.

The Infinitely Destructive Pursuit of Profit 

Capitalism’s relentless profit incentive has caused social trauma the world over: environmental desecration for cheaper drinking water, politics stained by power-broking with users’ data, public services ruined by rentier capitalists ransacking infrastructure and pensions for fast profit. For sure, capitalism ‘works’ as a system in its broadest conception, and yes, it does a great job of rewarding contribution and fostering innovation. Yet we all know the flaw. The single, oppressive focus on ever-increasing profit margins in every aspect of our lives eventually leads to a race to the bottom for human welfare, and to hideous wealth inequality as those who own the means of production hoard more and more of the wealth. When they do, social chaos is never far behind. The way capitalism distorts and bends from its original competition-focused improvement into a twisted game of wealth extraction is just a shadow of what would occur if an AI took the single premise of profit and extrapolated the graph to infinity. Corporate entities may not be the proper custodians of the most powerful technologies we may ever conceive – technologies that may rewrite society to their own ends.

Credit: Tesfu Assefa

A Likely Hegemony: Eternal Inequality

This may sound like extreme sci-fi fearmongering – a tech junkie’s séance with the apocalypse. So let’s consider a more mundane case: whoever has AI has an unassailable competitive advantage that, in turn, gives them power. Bard, ChatGPT, and Bing are chatbots, but there are companies working on sophisticated command-and-control AI technologies. AIs that can trawl CCTV databases with facial recognition. AIs that can snapshot an individual’s credit and data to produce a verdict. AIs that can fight legal cases for you. AIs that can fight wars. The new means of production in a digital age, new weapons for the war in cyberspace, controlled by tech scions in glass skyscrapers.

If these AIs are all proprietary, locked in chrome vaults, then certain entities will have unique control over aspects of our society, and a direct motive to maintain their advantage. Corporate AIs without checks and balances can and will be involved in a shadow proxy war – for our data, our information, and our attention. It’s already happening with algorithms. Wait until those algorithms can ‘think’ and ‘change’ (even if you disallow them a claim to sentience) without management’s approval. It won’t be long before one goes too far. Resources like processing power, data, and hardware will be the oil of the information age, with nation states diminished in the face of their powerful corporations: a global chaebol with unlimited reach in cyberspace, extreme inequality entrenched for eternity.

The Need for Open-Source AI

There is an essential need, therefore, for open-source access to AI infrastructure. Right now, the open-source AI boom is built on Big Tech handouts, and innovation around AI could suffer dramatically if large companies rescind access to their models, datasets and resources. They may even be mandated to by nation states wary of rival actors stealing advances and corrupting them to nefarious ends.

Yet just as likely, they will do so out of fear of losing their competitive advantage. When they do, they alone may be the architects of the future AIs that control our daily lives – with poorly calibrated incentives that lack social conscience. We’ve all seen what happens when a large system’s bureaucracy flies in the face of common sense, requiring costly and inefficient human intervention to avert disaster. What happens when that complex system is the CEO, and its decisions are final? We’ve seen countless literary representations – from GLaDOS to Neuromancer’s Wintermute to SHODAN – of corporate AIs run amok, their privileged access to the world’s data systems the fuel for their maniacal planning. When the singularity is born, whoever is gathered around the cradle will ordain the future. Let’s all be part of the conversation.
