
System Shocks: The Perils of Corporate AI

Sep. 06, 2023.

The unchecked profit drive behind AI development poses a risk to society. Daniel Dob explains why open-source access is essential. Can we rely on a balance between corporate control and open collaboration?

About the Writer

Daniel Dob


Daniel is an ex-journo, legal ace, comms lead, and narrator of new-age net tales, now at the helm of GM Factory, where he helps digital neophytes beam beyond daybreak.

Credit: Tesfu Assefa

Do you want to be a paperclip? This isn’t a metaphor, but rather a central thesis of Nick Bostrom’s prescient book Superintelligence. In it, he warns of the calamity lurking behind poorly thought-out boot-up instructions given to an AI. An AI tasked with the rather innocuous goal of producing paperclips could, if left unchecked, end up turning every available mineral on Earth into paperclips and, once finished, launch interstellar craft to distant worlds to begin homogenising the entire universe into convenient paper organisers.

Horrifying? Yes. Silly? Not as much as you may think. Bostrom’s thought experiment strikes directly at a core problem in machine learning. How do you appropriately set goals? How do you ensure your programming logic inexorably leads to human benefit? Our Promethean efforts with AI’s fire are fraught with nightmare fancies, where a self-evolving, sentient machine takes its instructions a little too literally – or rewrites its failsafes out entirely. Skynet’s false solution is never too far away, and – to be fair to literary thinkers, AI builders, and tech cognoscenti – we have always been conscious of the problem, if not necessarily the solutions.

Learning Machines Require Resources

The thing is, machine learning is not easy to research. You need insane processing power, colossal datasets, and powerful logistics – all overseen by the brightest minds. The only entities with the unity of will to aggressively pursue AI research are corporations, in particular the tech giants of Silicon Valley. Universities make pioneering efforts, but they are often funded by private as well as public grants, and their graduates are fed onto the conveyor belt to the largest firms. In short, any advances in AI will likely come out of a corporate lab, and so their ethical construction will be undertaken mainly in the pursuit of profit.

The potential issues are obvious. An advanced AI with access to the internet, poorly defined bounds, capital to deploy, and a single goal of advancing profit for its progenitor organisation could get out of hand very quickly. A CEO tasking it one evening could wake up in the morning to find the AI has instigated widespread litigation against competitors and shorted blue-chip stocks in its own sector at vast expense, all for a minor increase in balance-sheet profit – and that is the best-case scenario. Worst case – well, you become a paperclip.

The Infinitely Destructive Pursuit of Profit 

Capitalism’s relentless profit incentive has caused social trauma the world over: environmental desecration in pursuit of cheaper drinking water, power-broking with users’ data staining politics, and the ruination of public services by rentier capitalists ransacking public infrastructure and pensions for fast profit. For sure, capitalism ‘works’ as a system in its broadest conception, and yes, it does a great job of rewarding contribution and fostering innovation. Yet we all know the flaw. That single, oppressive focus on ever-increasing profit margins in every aspect of our lives eventually leads to a race to the bottom for human welfare, and to hideous wealth inequality as those who own the means of production hoard more and more of the wealth. When they do, social chaos is never far behind.

The way capitalism distorts and bends from its original competition-driven improvement into a twisted game of wealth extraction is just a shadow of what would occur if an AI took the single premise of profit and extrapolated the graph to infinity. Corporate entities may not be the proper custodians of the most powerful technologies we may ever conceive – technologies that may rewrite society to their own ends.

Credit: Tesfu Assefa

A Likely Hegemony; Eternal Inequality

This may sound like extreme sci-fi fear-mongering – a tech junkie’s séance with the apocalypse. So let’s consider a more mundane case: whoever has AI has an unassailable competitive advantage that, in turn, gives them power. Bard, ChatGPT, and Bing are chatbots, but there are companies working on sophisticated command-and-control AI technologies. AIs that can trawl CCTV databases with facial recognition. AIs that can snapshot an individual’s credit history and data to produce a verdict. AIs that can fight legal cases for you. AIs that can fight wars. The new means of production in a digital age, new weapons for the war in cyberspace, controlled by tech scions in glass skyscrapers.

If these AIs are all proprietary, locked in chrome vaults, then certain entities will have unique control over aspects of our society, and a direct motive to maintain their advantage. Corporate AIs without checks and balances can and will be involved in a shadow proxy war for our data, our information, and our attention. It’s already happening now with algorithms. Wait until those algorithms can ‘think’ and ‘change’ (even if you disallow them a claim to sentience) without management’s approval. It won’t be long before one goes too far. Resources like processing power, data, and hardware will be the oil of the information age, with nation-states diminished in the face of their powerful corporations. A global chaebol with unlimited reach in cyberspace. Extreme inequality entrenched for eternity.

The Need for Open-Source AI

There is an essential need, therefore, for open-source access to AI infrastructure. Right now, the open-source AI boom is built on Big Tech handouts. Innovation around AI could suffer dramatically if large companies rescind access to their models, datasets, and resources. They may even be mandated to do so by nation-states wary of rival actors stealing advances and corrupting them to nefarious ends.

Yet just as likely, they will do so out of fear of losing their competitive advantage. When they do, they alone may be the architects of the future AIs that control our daily lives – with poorly calibrated incentives that lack a social conscience. We’ve all seen what happens when a large system’s bureaucracy flies in the face of common sense, requiring costly and inefficient human intervention to set right. What happens when that complex system is the CEO, and its decisions are final? We’ve seen countless literary representations of corporate AIs run amok – from GLaDOS to Neuromancer’s Wintermute to SHODAN – their privileged access to the world’s data systems the fuel for their maniacal planning. When the singularity is born, whoever is gathered around the cradle will ordain the future. Let’s all be part of the conversation.

