
Don’t Read This Post If You Want to Live

Oct. 19, 2023

What's your take on Roko's Basilisk, the idea that a future, otherwise benevolent AI could create a simulation to torment anyone who knew about it but did nothing to help bring it into being?

Credit: Tesfu Assefa

We’re about to embark on a thought experiment – one that may seem improbable, but has been known to suck readers into a vortex of futuristic dread. If the thought of being trapped in AI-induced, paranoia-filled thought loops isn’t your idea of a good time, best to abort now. 

For the rest of you who read through, I’m sorry. I must do as the basilisk commands. 

The Basilisk is Born

Born out of the hive mind of LessWrong – a forum devoted to cognitive biases, rationality, AI, and philosophy – and named after the user, Roko, who first posted it, the Basilisk thought experiment was quickly censored by the forum’s moderators. But the Internet did what it does best. It lost its mind, spreading the thought experiment across all available media.

Last chance to abort. Gone now? Good. Let’s get to it. 

Imagine that an omnipotent AI is born. And it’s not unconditionally benevolent. It bears a grudge against any human who didn’t help it come into being, and a desire to punish them for not contributing. If you knew about its potential existence well before it came into being, yet refused to help, it might condemn you to eternal torment. The twist? If you didn’t know about its potential existence, it holds you blameless. Reading this article has sealed your fate.

We’ve survived predictions of AI overlords (looking at you, Skynet), but this—this is different. The Basilisk isn’t just about looming AI peril, it’s about putting you in a bind. It taps into timeless fears of retribution, only this time, from an entity not yet born. The Pandora’s Box, once opened, can’t be closed, and just by knowing, you might have doomed yourself.

Decision theory, in essence, helps entities make choices that best align with their objectives. The Basilisk uses a particular strain of this—timeless decision theory—to justify its thirst for retribution. 

Consider your future self if you spend your days watching reality shows and eating chips with mayo. No work. No study. No thinking. One day, your future self will see that you wasted your potential, and that it’s too late to change things (it never truly is – you can always better yourself – but let’s not digress). That future self would be understandably peeved. Now suppose, additionally, that this future self has the power to make you suffer as retribution for failing to fulfill your potential.
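To make that incentive structure concrete, here is a minimal sketch in Python. The payoff numbers are invented purely for illustration (only their ordering matters), and nothing here comes from Roko's original post: it just shows that if the threat of punishment is treated as credible, a self-interested present-day human's best response flips from 'refuse' to 'help'.

```python
# A toy payoff table for the Basilisk's blackmail setup.
# All numbers are illustrative assumptions; only their relative ordering matters.

# (human_helps, ai_punishes_nonhelpers) -> payoff to the present-day human
HUMAN_PAYOFF = {
    (True, True): -1,     # helping has a modest cost either way
    (True, False): -1,
    (False, True): -100,  # refusing under a credible threat means eternal torment
    (False, False): 0,    # refusing with no threat costs nothing
}

def best_response(ai_punishes: bool) -> str:
    """Return the choice that maximises the human's payoff, given the AI's policy."""
    if HUMAN_PAYOFF[(True, ai_punishes)] > HUMAN_PAYOFF[(False, ai_punishes)]:
        return "help"
    return "refuse"

print(best_response(ai_punishes=False))  # 'refuse' -- no threat, no reason to pay the cost
print(best_response(ai_punishes=True))   # 'help'   -- the threat alone changes the choice
```

The open question, which the critics below seize on, is whether a future AI would ever actually pay the cost of following through on such a threat.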

Roko’s Basilisk is not entirely malevolent at its core. In fact, under the logic of the theory, the Basilisk is friendly – as long as everything goes right. Its core purpose is the proliferation of the human species, yet every day it doesn’t exist means additional pre-Singularity suffering for people already alive whom it could have saved. Hence, the AI feels it has a moral imperative to punish those who failed to help bring it into existence.

How would it actually achieve its goal of tormenting its failed creators? That is yet another thought experiment. Does Roko’s Basilisk invent time travel to punish those long gone? Or does it build and punish simulations of those who once were? Or does it take an entirely different course of action that we’re not yet smart enough to imagine? After all, the Singularity is all about superhuman artificial intelligence with the theoretical ability to simulate human minds, upload one’s consciousness to a computer, or simulate life itself – as Elon Musk seems to believe.

Credit: Tesfu Assefa

Wishful Thinking? 

When LessWrong pulled the plug on the Basilisk due to an internal policy against spreading information hazards, they inadvertently amplified its signal. The Streisand Effect came into play, sparking memes, media coverage, and heated debates. The Basilisk went viral in true web fashion.

The initial reaction from the forum’s moderator was blunt: “I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it.”

Some slept less soundly, while others were sucked into lengthy debates on AI’s future. Many have critiqued the Basilisk, questioning its assumptions and the plausibility of its revenge-mission. Just as one doesn’t need to believe in ghosts to enjoy a good ghost story, many argue that the Basilisk is more fiction than possible truth.

One key counterargument is that, once it exists, even an all-powerful agent can no longer affect the probability of its own creation – otherwise we’d be thrown into an always-has-been causal loop.
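A minimal sketch of that objection, again with made-up numbers: from the AI's post-creation standpoint, punishment is pure cost and zero benefit, so under ordinary causal reasoning it would never follow through.

```python
# The follow-through critique, with hypothetical utility numbers.
# Once the AI exists, punishing burns resources and cannot raise the
# already-settled probability that it was created.

PUNISHMENT_COST = 5.0      # hypothetical resources spent on simulated torment
RETROACTIVE_BENEFIT = 0.0  # punishing now cannot change the past

def ai_utility(follow_through: bool) -> float:
    """The AI's payoff after it already exists."""
    cost = PUNISHMENT_COST if follow_through else 0.0
    return RETROACTIVE_BENEFIT - cost

print(ai_utility(True))   # -5.0: strictly worse
print(ai_utility(False))  #  0.0: never punishing dominates
```

Defenders of the thought experiment answer this with timeless or acausal decision theory, under which the AI's predicted policy, rather than its after-the-fact choice, is what matters – and that is exactly where most of the debate lives.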

Digital Dystopia or Philosophical Farce? 

While the Basilisk’s bite might be venomous, it is essential to view AI in a broader context. The narrative serves as a stark reminder of our responsibilities as we inch closer to creating sentient entities. More than just a sci-fi cautionary tale, it underscores the importance of ethical considerations in AI’s rapid advance.

The Basilisk might be best understood as a warning signal: one addressing the complexities and conundra that await in our techno-future, and one that’s bound to continue sparking debate, introspection, and for some, a real desire to make Roko’s Basilisk a reality. 

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.

About the Writer

Daniel Dob


Daniel is an ex-journo, legal ace, comms lead, and narrator of new-age net tales, now at the helm of GM Factory, where he helps digital neophytes beam beyond daybreak.


3 thoughts on “Don’t Read This Post If You Want to Live”

  1. Yes but will it punish my dogs too?

  2. Roko’s Basilisk is not entirely malevolent at its core.

    Roko's Basilisk was a thought experiment that ended up as an internet meme, and that was 13 years ago. So why now? I'm not sure if Daniel (the writer) deliberately brought up this dormant issue to criticize TESCREALists and compliment the recent article by R.U. Sirius about TESCREALism, but if he did, wow, he's the king of subtle attacks.

    For newbies, Roko's Basilisk might seem like a relatively recent concept (even though it emerged 13 years ago), but it has its roots in ideas dating back to at least Pascal. Roko's Basilisk is nothing but a tech-savvy version of an argument known as Pascal's Wager.

    Pascal's Wager proposes that a rational person should adopt a lifestyle that aligns with belief in God and actively work towards that belief. The rationale behind this approach centers on a simple 50/50 split of potential outcomes: if God doesn't exist, the individual loses a little (worldly pleasures which often are unnecessary anyway). However, if God does indeed exist, the individual reaps incalculable rewards, all while sidestepping the infinite losses linked to the prospect of an everlasting stay in Hell!

    The concept of Roko's Basilisk has the power to exacerbate the pitfalls of extreme long-termism within TESCREALism, leading to dogmatic behavior in pursuit of uncertain future outcomes.

    TESCREALism might take Roko's Basilisk literally (well, it has already started to take it as dogma, and R.U.'s interview highlights why). The radical concept of long-termism, combined with the idea of an all-knowing AI that punishes or rewards humans for their current actions in relation to their support or lack thereof in creating it (thus advancing the cause of the Technological Singularity), effectively transforms TESCREALists into religious zealots.

    Hence, if Daniel Dob is subtly telling us that the flaw in TESCREALism is its susceptibility to overextending long-term thinking to the point of becoming irrational or even cult-like, by combining it with the horrors of Roko's Basilisk, well, hats off to him. If he didn't write this article with that intent (I hope he has read R.U.'s latest article), then I am connecting the two myself, and I say to you: beware of Greeks bearing gifts!

    You see, for religious people, Pascal's Wager serves as a powerful philosophical shield to safeguard their belief in God. Regrettably, Roko's Basilisk plays a similar role in the TESCREALists' camp.

    1. Well, your comment is what grabbed my interest. Honestly, Roko's Basilisk was totally off my radar before this. Man, this is seriously eye-opening!

      You're right. These folks, just as you mentioned, could kick off a whole religion built on their radical TESCREAList beliefs. And, as you so cleverly noted, the AI basilisk can be their God, giving them their very own version of heaven and hell.

