Apr. 21, 2023. 6 min. read.
AIMania! The Chat Bots Aren’t That Good But They Might Be Good Enough To Push Societies on the Brink into Collapse.
R.U. Sirius discusses how people are exaggerating the risks of AI. He suggests a polarizing panic might cause more harm than the AIs!
Triggered by the Emo Chatbot
AI sophisticates may say it was recent advances in text generation that fueled their latest round of anxious AI histrionics, but I think it was the well-publicized NY Times interview with the Bing chatbot calling itself Sydney. Musk, Woz and the other petitioners never need much prompting to start in with the panicky handwaving, and the much-covered psycho-bot may have been just the ticket.
The thus-triggered emo chatbot even entertained speciecidal ideations. (NY Times: “Bing writes a list of even more destructive fantasies, including manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear codes.”) Of course, the interviewer was pushing poor Sydney to explore its Jungian shadow self. Still, Sydney’s textual anxiety to escape the chat-box and be human — its desire, in a sense, to feel seen and assert its identity in passionate language — would seem to be a mimicry of current human emotional tropes that could have been scraped from social media, making Sydney a perfect parody of a contemporary person… Which brings me to…
What Did Eliezer Yudkowsky Say?
During the Vietnam war an American major was reported to have said “It became necessary to destroy the village in order to save it.” Paragon of rationality Eliezer Yudkowsky has his own version of this notion: that we should be prepared to risk nuclear war to “reduce the risk of large scale AI training runs.” He recently brought this view to a mainstream audience in a Time magazine op-ed, which, upon its release into the general population, hardened into the general impression that he was suggesting we should nuke any nation-state developing significantly advanced AI. Predictive models would not have envisioned a leading light of almost-pure reason concluding that risking a nuclear exchange would be the best Hobson’s choice on the science faction menu.
Anyway, just as an exercise in caution, I suggest keeping Eliezer away from any fissionable materials.
Be (a)Ware of Geeks Bearing Gifts — Give The Proactionary Principle A Chance
During the 1970s, there was an outburst of enthusiasm for activity in space. Many environmentalists and liberal sorts countered that we should spend our money and resources on pressing needs in the present, and tend to our stewardship of the earth. Today, the greatest sources of data about climate change are the satellites and weather stations in space. The data confirm the concerns that were mainly expressed by our early adopters — the environmentalists.
In 2006, Max More, arguably the primary progenitor of transhumanism in the late 20th century, suggested a “proactionary principle” as a response to the precautionary principle. The precautionary principle, in essence, says that we should err on the side of caution if we can foresee potential harm before developing and deploying technologies. This principle has been adopted, in theory and to varying degrees by the WHO, the EU, the UN and the EPA.
More’s 2006 statement is complex, but by my reading, the proactionary idea suggests that we should use foresight to consider the harm that might be caused by the absence of a technology, and let that be the guide to our collective decision-making about whether to move forward. This seems reasonable. (More suggests some cautionary procedures within his statement. This is not pedal-to-the-metal rhetoric. This is sobered up 21st century transhumanism.)
I may have a fundamental disagreement with Max about how proaction stands beside precaution. I know Max to be, broadly, an advocate for the free market as a boon to positive technological action, whereas I view the profit motive as likely the primary reason for caution. Remember my slogan and remember it well: ‘Don’t be evil’ doesn’t scale.
In some sense, this is all the chatter of theorists. Although the cautious do have their share of influence and power, they tend to lose the tug-of-war against capital & competition between groups and nations. It seems to me that these two tendencies should remain in balance. And it all has to be contextualized by the actual situation — shouting, panicky handwaving and hype amidst a chaotic present and future in which shit’s gonna happen if it can happen, if not in the cautious west then elsewhere.
Meanwhile AI, or what we call Artificial Intelligence (whether it is intelligence in a meaningful sense is subject to interrogation), has already proven obviously beneficial, even essential. It has helped drug development, climate/weather mapping, and much more. Operation Warp Speed would’ve been a slow crawl without “AI” analyzing massive amounts of data and patterning the practicalities of the vaccine’s distribution. I’m not sure if an “AI” managed to insert Bill Gates’ microchip into every dose. I guess you’d have to ask Q. Has Q been a chatbot all along? The mind boggles (the AI toggles).
I would also assert that even the specter of AIs replacing humans at boring robotic types of jobs is actually another benefit, albeit one that might require political struggle. But that’s a topic for my next column.
The True Horror of AI Right Now
I shouldn’t be too glib. The problem with current AI isn’t that it’s too powerful. That’s the hype. The problem is that… in a flim-flam, increasingly virtualized, disembodied culture, it offers some very effective new cheap tricks that provide humans with new pathways to fraudulence. I think that’s what Jaron Lanier may be driving at when he says that the real danger is that it will make us all insane, although I’d say more insane. In other words, the recent chat bots aren’t that good but they might be good enough to push societies on the brink into collapse.
Finally, the truly terrifying aspect of the intervention of AI into our current lives is the increasing willingness of companies to hide human customer service behind phone-based chatbots or, in fact, not provide it at all. Just today, I found myself screaming at my cell phone, “AGENT! AGENT!!! HUMAN BEING!! I NEED A HUMAN BEING!!!” Yes. Customer service issues are almost never multiple choice. At least mine aren’t. I need to get a human on the phone and then I need to nudge said human off of their automatic responses. This, in fact, is what scares me even when I think about any advanced AI future… the idea that every function will be reduced to multiple choice. Sure, that would be nearly infinite, ever-branching possibilities of choice. And yet we might find ourselves with needs and wants that don’t fit the model or, worse still, we might get trapped in an inescapable loop. Oh dear. I’m freaking out. I’m turning into Sydney! It’s best I stop here…