Stablecoins Deliver Us From Fiat and CBDCs

Introduction

Stablecoins are a core but often misunderstood component of the Web3 sector. They provide price stability and easy on-ramps and off-ramps for users who want to protect themselves against market price fluctuations. At the time of writing, the total stablecoin market cap sits at $125 billion. Tether USD (USDT) holds two-thirds of this market at $85 billion, while USDC contributes a fifth, at just over $25 billion.

When the first generation of stablecoins came to market in 2014, they revolutionized the crypto industry, offering a stable-value asset to help hedge market volatility. In the decade since, stablecoins like Tether (USDT), Circle's USD Coin (USDC), and Binance USD (BUSD) have forged ahead despite regulatory scrutiny and the occasional depegging, while early experiments like BitUSD and NuBits have failed and faded into obscurity.

A new generation of stable cryptocurrencies is coming to market, such as Cardano’s Djed. With artificial intelligence the flavor of the year, SingularityNET’s Cogito Protocol and its AI-powered synthetic assets GCoin and XCoin are also drawing widespread interest. We’ll cover them briefly in this article and more in-depth in a follow-up piece.

Last year’s cataclysmic collapse of UST, the Terra Luna algorithmic stablecoin, saw confidence in crypto completely evaporate in days along with billions of dollars of retail investor funds. Regulators paid attention and sharpened their pitchforks for the crypto space – they basically want stablecoins to comply or die, and we really only have ourselves (and Do Kwon) to blame for it. 

There has since been a shift in public sentiment, with regulators beginning to understand that stablecoins don’t aim to replace fiat currency, but rather complement it with their unique advantages, which we’ll be discussing in this article. 

In the last couple of months, major positive developments have helped to further adoption on blockchains such as Ethereum, despite resistance from lawmakers and the rise of central bank digital currencies (CBDCs).

These include the US Fed clarifying rules for how banks should deal with stablecoins, PayPal’s launch of its own PYUSD stablecoin, and Circle’s partnership with Grab, making USDC available as a Web3 payment tool to the Southeast Asian super-app’s nearly 200 million users. 

So what makes stablecoins so much better than fiat currencies and those scary CBDCs? 

Credit: Tesfu Assefa

Fiat Currency vs Stablecoins

Fiat currency is legal tender whose value is backed by the government that issued it. It's not backed by a physical asset like gold or silver, but rather leverages society's belief and trust in its government. The word 'fiat' is Latin and translates as 'let it be done'. Basically, fiat currency is a binding IOU from your central bank that you can use to acquire and transfer value.

Stablecoins are a type of digital asset inspired by fiat currency, but they aim to remove fiat currency's costs and centralized control over issuance and distribution. They are pegged in value to a real-world asset such as the US Dollar or Euro, or even a commodity like gold.

The stablecoin issuer usually has to maintain a reserve of collateral assets, which must be audited in order to comply with financial legislation. Failure to do so can result in huge fines, and for good reason, as past enforcement actions have shown.

Here's a comparison table:

| Difference | Stablecoins | Fiat Currency | Advantage/Disadvantage |
|---|---|---|---|
| Network | Operate on decentralized blockchain networks. | Centralized; issued and regulated by governments. | Stablecoins cannot be controlled or manipulated by one entity. |
| Backing | Value is pegged to a reference asset like fiat currency or commodities. | Value is derived from public trust and the stability of the issuing country. | Stablecoins are usually fully collateralized by real assets. |
| Intrinsic Value | Derive value from the assets they are pegged to. | Lack intrinsic value, as they are not backed by tangible assets. | Stablecoins are usually fully collateralized by real assets. |
| Volatility | Designed to be less volatile by being pegged to stable assets. | Generally less volatile due to the stability of the issuing government. | Stablecoins can de-peg when they're targeted by regulators or rumors of mismanagement circulate. |
| Digital Nature | Purely digital and programmable, capable of interacting with smart contracts. | Exist in both physical and digital forms. | Stablecoins can be used by DeFi applications to borrow, lend, stake, and transfer assets seamlessly. |
| Regulation | Subject to some regulations but not directly controlled by central banks. | Regulated by central banks and governments. | The lack of global regulations undermines institutional trust in stablecoins and can lead to exploitation, as we saw with Terra Luna. |
| Cost | Transactions are faster, cheaper, and more transparent. | Transactions can be slower and more expensive, especially for cross-border transfers. | Stablecoins are simply superior in terms of effective transacting. |

The New Breed of Stablecoins

Despite the well-documented failure of Terra Luna and its UST stablecoin, new algorithmic coins like Djed and Cogito's tracercoins continue to innovate, aiming to establish a truly decentralized stablecoin that is impervious to regulatory intervention and harnesses the best of new technology, such as AI.

Cardano’s Djed

Launched in January 2023 on the Cardano blockchain, Djed is an algorithmic stablecoin and joint venture between Input Output Global (IOG) and COTI. The project maintains its peg through over-collateralization, holding reserves of ADA worth between 400% and 800% of Djed's value.

The protocol also uses a second token called SHEN, a reserve coin that helps keep the system stable and liquid. Djed has had some ups and downs in value, mainly because of thin trading activity and demand. It represents an ambitious attempt at a transparent, decentralized stablecoin after previous failures like UST, but it still needs to prove it can reliably maintain its peg over the long term.
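Djed's over-collateralization rule can be sketched in a few lines of Python. This is an illustrative toy, not Djed's actual on-chain logic: the function names and the simplified checks are hypothetical, and only the 400–800% reserve band comes from the description above.

```python
MIN_RATIO = 4.0  # reserves must stay at or above 400% of stablecoin liabilities
MAX_RATIO = 8.0  # reserve-coin (SHEN) minting is assumed blocked above 800%

def reserve_ratio(ada_reserve: float, ada_price_usd: float,
                  stablecoins_issued: float) -> float:
    """Value of the ADA reserve divided by the stablecoins' pegged value ($1 each)."""
    return (ada_reserve * ada_price_usd) / stablecoins_issued

def can_mint_stablecoin(ada_reserve: float, ada_price_usd: float,
                        stablecoins_issued: float, mint_amount: float) -> bool:
    """New stablecoins may be minted only if the ratio stays above the minimum."""
    return reserve_ratio(ada_reserve, ada_price_usd,
                         stablecoins_issued + mint_amount) >= MIN_RATIO

def can_mint_reserve_coin(ada_reserve: float, ada_price_usd: float,
                          stablecoins_issued: float, ada_deposit: float) -> bool:
    """Reserve-coin minting adds ADA to the pool; blocked once the ratio exceeds the cap."""
    return reserve_ratio(ada_reserve + ada_deposit, ada_price_usd,
                         stablecoins_issued) <= MAX_RATIO

# Example: 10M ADA at $0.25 backing 500,000 stablecoins gives a 500% ratio.
ratio = reserve_ratio(10_000_000, 0.25, 500_000)
```

Roughly speaking, the reserve coin is what makes this workable: SHEN holders supply the excess ADA and absorb its price swings, while the band stops them from draining reserves when the ratio is low or diluting them when it is already high.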

AI ‘tracer coins’: Cogito’s XCoin and GCoin

Cogito Protocol, part of the SingularityNET ecosystem, is creating a new type of digital money called 'tracercoins'. Unlike regular stablecoins, which are tied to real-world currencies like the dollar, tracercoins 'trace' non-financial indices like sustainability and technological progress, using AI and algorithms to keep pricing stable. Cogito is still in beta and plans to launch two coins: GCoin, which will track progress towards climate sustainability goals, and the even more experimental XCoin. These coins will be available for anyone to buy and sell on decentralized finance (DeFi) platforms, starting with networks like SingularityNET and Cardano.

Conclusion

Fiat currency issued by governments remains the dominant form of money today, but stablecoins offer a compelling alternative, thanks to their advantages of decentralization, transparency, speed, low cost, and global reach.

Despite regulatory headwinds, algorithmic stablecoin innovation isn’t stopping. The dream is still alive: create a stable dollar-pegged digital asset that cannot be shut down by governments or tanked through fraud or manipulation. 

While CBDCs will challenge their global growth, stablecoins like USDC, USDT, and BUSD, along with a new generation of algorithmic stablecoins like Cardano's Djed, will continue providing benefits that CBDCs simply cannot replicate, thanks to their decentralization and 24/7 availability, open to anyone anywhere in the world.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Against Contractionism: Enabling and encouraging open minds, rather than constricting understanding with blunt labels

A dangerous label

Imagine if I said, “Some Christians deny the reality of biological evolution, therefore all Christians deny the reality of biological evolution.”

Or that some Muslims believe that apostates should be put to death, therefore all Muslims share that belief.

Or that some atheists (Stalin and Mao, for example) caused the deaths of millions of people, therefore atheism itself is a murderous ideology.

Or, to come closer to home (for me: I grew up in Aberdeenshire, Scotland) – that since some Scots are mean with money, therefore all Scots are mean with money. (Within Scotland itself, that unkind stereotype exists with a twist: allegedly, all Aberdonians are mean with money.)

In all these cases, you would say that’s unwarranted labeling. Spreading such stereotypes is dangerous. It gets in the way of a fuller analysis and deeper appreciation. Real-life people are much more varied than that. Any community has its diversity.

Well, I am equally shocked by another instance of labeling. That involves the lazy concept of TESCREAL – a concept that featured in a recent article here on Mindplex Magazine, titled TESCREALism: Has The Silicon Valley Ruling Class Gone To Crazy Town? Émile Torres In Conversation With R.U. Sirius.

When I read the article, my first thought was: “Has Mindplex gone to Crazy Town!”

The concept of TESCREAL, which has been promoted several times in various venues in recent months, contracts a rich and diverse set of ideas down to a vastly over-simplified conclusion. It suggests that the worst aspects of any people who hold any of the beliefs wrapped up in that supposed bundle can be attributed, with confidence, to other people who hold just some of these ideas.

Worse, it suggests that the entire “ruling class” of Silicon Valley subscribe to the worst of these beliefs.

It’s as if I picked a random atheist and insisted that they were equally as murderous as Stalin or Mao.

Or if I picked a random Muslim and insisted that they wished for the deaths of every person (apostate) who had grown up with Muslim faith and subsequently left that faith behind.

Instead of that kind of contraction of ideas, what the world badly needs nowadays is an open-minded exploration of a wide set of complex and subtle ideas.

Not the baying for blood which seems to motivate the proponents of the TESCREAL analysis.

Not the incitement to hatred towards the entrepreneurs and technologists who are building many remarkable products in Silicon Valley – people who, yes, do need to be held to account for some of what they’re doing, but who are by no means a uniform camp!

I’m a T but not an L

Let’s take one example: me.

I publicly identify as a transhumanist – the ‘T’ of TESCREAL.

The word ‘transhumanist’ appears on the cover of one of my books, and the related word ‘transhumanism’ appears on the cover of another one.

Book covers of David Wood's 'Vital Foresight' and 'Sustainable Superabundance', published in 2021 and 2019 respectively. Credit: David Wood

As it happens, I’ve also been a technology executive. I was mainly based in London, but I was responsible for overseeing staff in the Symbian office in Redwood City in Silicon Valley. Together, we envisioned how new-fangled devices called ‘smartphones’ might in due course improve many aspects of the lives of users. (And we also reflected on at least some potential downsides, including the risks of security and privacy violations, which is why I championed the new ‘platform security’ redesign of the Symbian OS kernel. But I digress.)

Since I am ‘T’, does that mean, therefore, that I am also ESCREAL?

Let’s look at that final ‘L’. Longtermism. This letter is critical to many of the arguments made by people who like the TESCREAL analysis.

‘Longtermism’ is the belief that the needs of potentially vast numbers of as-yet unborn (and unconceived) people in future generations can outweigh the needs of people currently living.

Well, I don’t subscribe to it. It doesn’t guide my decisions.

I'm motivated by the potential of technology to vastly improve the lives of everyone around the world, living today. And by the need to anticipate and head off potential catastrophe.

By ‘catastrophe’, I mean anything that kills large numbers of people who are currently alive.

The deaths of 100% of those alive today will wipe out humanity’s future, but the deaths of ‘just’ 90% of people won’t. Longtermists are fond of pointing this out, and while it may be theoretically correct, it doesn’t provide any justification to ignore the needs of present-day people, to raise the probability of larger numbers of future people being born.

Some people have said they’re persuaded by the longtermist argument. But I suspect that’s only a small minority of rather intellectual people. My experience with people in Silicon Valley, and with others who are envisioning and building new technologies, is that these abstract longtermist considerations do not guide their daily decisions. Far from it.

Credit: Tesfu Assefa

Concepts are complicated

A larger point needs to be made here. Concepts such as ‘transhumanism’ and ‘longtermism’ each embody rich variety.

It’s the same with all the other components of the supposed TESCREAL bundle: E for Extropianism, S for Singularitarianism, C for Cosmism, R for Rationalism, and EA for Effective Altruism.

In each case, we should avoid contractionism – thinking that if you have heard one person who defends that philosophy expressing one opinion, then you can deduce what they think about all other matters. In practice, people are more complicated – and ideas are more complicated.

As I see it, parts of each of the T, E, S, C, R, and EA philosophies deserve wide attention and support. But if you are hostile, and do some digging, you can easily find people, from within the communities around each of these terms, who have said something despicable or frightening. And then you can (lazily) label everyone else in that community with that same unwelcome trait. (“Seen one; seen them all!”)

These extended communities do have some people with unwelcome traits. Indeed, each of T and S have attracted what I call a ‘shadow’ – a set of associated beliefs and attitudes that are deviations from the valuable core ideas of the philosophy. Here’s a picture I use of the Singularity shadow:

A video cover image from 'The Vital Syllabus Playlist' where David Wood examines the Singularity Shadow. Credit: David Wood

And here’s a picture of the transhumanist shadow:

A video cover image from 'The Vital Syllabus Playlist' where David Wood examines the Transhumanist Shadow. Credit: David Wood

(In both cases, you can click on the caption links to view a video that provides a fuller analysis.)

As you can see, the traits in the transhumanist shadow arise when people fail to uphold what I have listed as ‘transhumanist values’.

The existence of these shadows is undeniable, and unfortunate. The beliefs and attitudes in them can deter independent observers from taking the core philosophies seriously.

In that case, you might ask, why persist with the core terms ‘transhumanism’ and ‘singularity’? Because there are critically important positive messages in both these philosophies! Let’s turn to these next.

The most vital foresight

Here’s my 33-word summary of the most vital piece of foresight that I can offer:

Oncoming waves of technological change are poised to deliver either global destruction or a paradise-like sustainable superabundance, with the outcome depending on the timely elevation of transhumanist vision, transhumanist politics, and transhumanist education.

Let’s cover that again, more slowly this time.

First things first. Technological changes over the next few decades will place vast new power in billions of human hands. Rather than focusing on the implications of today’s technology – significant though they are – we need to raise our attention to the even larger implications of the technology of the near future.

Second, these technologies will magnify the risks of humanitarian disaster. If we are already worried about these risks today (as we should be), we should be even more worried about how they will develop in the near future.

Third, the same set of technologies, handled more wisely, and vigorously steered, can result in a very different outcome: a sustainable superabundance of clean energy, healthy nutrition, material goods, excellent health, all-round intelligence, dynamic creativity, and profound collaboration.

Fourth, the biggest influence on which outcome is realized is the widespread adoption of transhumanism. This in turn involves three activities:

• Advocating transhumanist philosophy as an overarching worldview that encourages and inspires everyone to join the next leap upward on life’s grand evolutionary ladder: we can and should develop to higher levels, physically, mentally, and socially, using science, technology, and rational methods.
• Extending transhumanist ideas into real-world political activities, to counter very destructive trends in that field.
• Underpinning the above initiatives: a transformation of the world of education, to provide everyone with skills suited to the very different circumstances of the near future, rather than the needs of the past.

Finally, overhanging the momentous transition that I’ve just described is the potential of an even larger change, in which technology moves ahead yet more quickly, with the advent of self-improving artificial intelligence with superhuman levels of capability in all aspects of thinking.

That brings us to the subject of the Singularity.

The Singularity is the point in time when AIs could, potentially, take over control of the world from humans. The fact that the Singularity could happen within a few short decades deserves to be shouted from the rooftops. That’s what I do, some of the time. That makes me a singularitarian.

But it doesn’t mean that I, or others who are likewise trying to raise awareness of this possibility, fall into any of the traits in the Singularity Shadow. It doesn’t mean, for example, that we’re all complacent about risks, or all think that it’s basically inevitable that the Singularity will be good for humanity.

So, Singularitarianism (S) isn’t the problem. Transhumanism (T) isn’t the problem. Nor, for that matter, does the problem lie in the core beliefs of the E, C, R, or EA parts of the supposed TESCREAL bundle. The problem lies somewhere else.

What should worry us: not TESCREAL, but CASHAP

Rather than obsessing over a supposed TESCREAL takeover of Silicon Valley, here’s what we should actually be worried about: CASHAP.

C is for contractionism – the tendency to push together ideas that don’t necessarily belong together, to overlook variations and complications in people and in ideas, and to insist that the core values of a group can be denigrated just because some peripheral members have some nasty beliefs or attitudes.

(Note: whereas the fans of the TESCREAL concept are guilty of contractionism, my alternative concept of CASHAP is different. I'm not suggesting the ideas in it always belong together. Each of the individual ideas that make up CASHAP is detrimental on its own.)

A is for accelerationism – the desire to see new technologies developed and deployed as fast as possible, under the irresponsible belief that any flaws encountered en route can always be easily fixed in the process (“move fast and break things”).

S is for successionism – the view that if superintelligent AI displaces humanity from being in control of the planet, that succession should automatically be welcomed as part of the grand evolutionary process – regardless of what happens to the humans in the process, regardless of whether the AIs have sentience and consciousness, and indeed regardless of whether these AIs go on to destroy themselves and the planet.

H is for hype – believing ideas too easily because they fit into your pre-existing view of the world, rather than using critical thinking.

AP is for anti-politics – believing that politics always makes things worse, getting in the way of innovation and creativity. In reality, good politics has been incredibly important in improving the human condition.

Conclusion

I’ll conclude this article by emphasizing the positive opposites to the undesirable CASHAP traits that I’ve just listed.

Instead of contractionism, we must be ready to expand our thinking, and have our ideas challenged. We must be ready to find important new ideas in unexpected places – including from people with whom we have many disagreements. We must be ready to put our emotional reactions on hold from time to time, since our prior instincts are by no means an infallible guide to the turbulent new times ahead.

Instead of accelerationism, we must use a more sophisticated set of tools: sometimes braking, sometimes accelerating, and doing a lot of steering too. That’s what I’ve called the technoprogressive (or techno-agile) approach to the future.

Credit: David Wood

Instead of successionism, we should embrace transhumanism: we can, and should, elevate today’s humans towards higher levels of health, vitality, liberty, creativity, intelligence, awareness, happiness, collaboration, and bliss. And before we press any buttons that might lead to humanity being displaced by superintelligent AIs that might terminate our flourishing, we need to research a whole bunch of issues a lot more carefully!

Instead of hype, we must recommit to critical thinking, becoming more aware of any tendencies to reach false conclusions, or to put too much weight on conclusions that are only tentative. Indeed, that’s the central message of the R (rationalism) part of TESCREAL, which makes it all the more ‘crazy town’ that R is held in contempt by that contractionist over-simplification.

We must clarify and defend what has been called ‘the narrow path’ (or sometimes, simply, ‘future politics’) – that lies between states having too little power (leaving societies hostage to destructive cancers that can grow in our midst) and having too much power (unmatched by the counterweight of a ‘strong society’).


Ghosts in the Machine: The Digital Graveyards of the Future

Death is still an unexplored country. If the Singularity arrives in all its glory, it may even be a country some of us never explore. Faint dreams of immortality aside, death is almost certainly coming – if the rest of history is anything to go by.

But when we're gone, will we be forgotten? Humanity is outputting ordered information at a greater rate than ever before. We marvel at the black-and-white stills of a century past, their faces stiffened to let the long exposure work: a tiny glimpse into an otherwise imagined land. Our descendants, though, will marvel in high-definition fidelity at the great tapestry of our lives – sometimes in all-too-intimate detail. They'll have the past on their cinema screens.

You’re In the History Books!

It's easy to overlook this change. Most of us have enough to keep us preoccupied in the current year without worrying about the traces we'll leave decades after we're gone. Yet the incredible advances in capturing data from our reality, and in storing it in a more reproducible, durable, distributed state, mean future historians will have a lot more data to sift through.

Future generations will know a lot more about you, if they care to look, than you could know about anyone from even a few decades past. Your digital imprint – your videos, texts, interactions, data, places visited, browsing history – all of it, if it's not deleted, will be available for a future generation to peruse, to tell stories about the life you led. Will you care about your browsing history when you're dead? Has it been a life well lived? What will your Yelp reviews tell your great-grandchildren about you?

Credit: Tesfu Assefa

What Will They Say About You?

The dilemma arises fast. We worry about privacy now; should we worry about legacy? Do we want Google to survive forever and preserve the data it holds about us for the public domain, so that we can be recognised by eternity? Or should the dead take some secrets to the grave?

There is a broad social question here, but it's not one any of us can answer. Ultimately, Google, or any other major surveillance firm that holds, uses, and processes your data, will get to decide how you are remembered. Privacy and legacy are twin pillars of an important social and ethical question: how do we control our information?

Even if you went to great lengths to hide it, it's too late. If the internet as we know it survives in some form, and we continue toward greater technological integration, then advances in data storage, processing power, cloud computing, and digital worlds will mean a far greater memory of you, and record of your actions, than could have existed for any previous generation. And it will only grow in the generations to come.

Resurrecting the Dead

History, then, is changing as future tech starts becoming real. Humanity may, in the not-too-distant future, have full access to the past. Imagine AI historians trawling databanks to recreate scenes from history, or individual stories, and playing them out as a generative movie on the screen for the children.

Look! There is your great-grandad on the screen – that’s him playing Halo in his first flat, that’s him at Burger King on Northumberland Street before it closed down. The data is there: that Twitch video of you playing games in your room you uploaded once; the CCTV inside and outside the restaurant. If the data has been stored and ordered – as it increasingly will be – then a not particularly advanced AI could make that movie. Heck, it could almost manage it now. In the further future, it could even do more – it may be able to bring you, in some form, back from the dead.

Gone But Never Forgotten

We must start to grapple with the stories we plan to tell our children. Our digital lives are leaving a deeper footprint on the soil of history than ever before. We know our ancestors through scattered traces, but our descendants will watch us on IMAX screens. Data capture, storage, privacy, and legacy are all crucial questions we must face – but questions that few are asking. If the future proceeds as planned, our descendants will know things we may wish they didn't, but at least we won't be forgotten.


Don’t Read This Post If You Want to Live

We’re about to embark on a thought experiment – one that may seem improbable, but has been known to suck readers into a vortex of futuristic dread. If the thought of being trapped in AI-induced, paranoia-filled thought loops isn’t your idea of a good time, best to abort now. 

For the rest of you who read through, I’m sorry. I must do as the basilisk commands. 

The Basilisk is Born

Born out of the hive mind of LessWrong – a publication discussing cognitive biases, rationality, AI, and philosophy – and popularised by a user known as Roko, the Basilisk thought experiment was quickly censored by the forum’s moderators. But the Internet did what it does best. It lost its mind, spreading the thought experiment across all available media. 

Last chance to abort. Gone now? Good. Let’s get to it. 

Imagine that an omnipotent AI is born. And it's not unconditionally benevolent. It bears a grudge against any human who didn't help it come into being, and a desire to punish them for not contributing. If you knew about its potential existence well before it came into being, yet refused to help, it might condemn you to eternal torment. The twist? If you didn't know about its potential existence, it holds you blameless. Reading this article has sealed your fate.

We've survived predictions of AI overlords (looking at you, Skynet), but this – this is different. The Basilisk isn't just about looming AI peril; it's about putting you in a bind. It taps into timeless fears of retribution, only this time from an entity not yet born. The Pandora's Box, once opened, can't be closed, and just by knowing, you might have doomed yourself.

Decision theory, in essence, helps entities make choices that best align with their objectives. The Basilisk uses a particular strain of this—timeless decision theory—to justify its thirst for retribution. 

Consider your future self if you spend your days watching reality shows and eating chips with mayo. No work. No study. No thinking. Your future self would be quite upset, wouldn't it? One day, your future self will see that you wasted your potential, and that it's too late to change things (it never is – you can always better yourself – but let's not digress). The future self will be understandably peeved. Now additionally suppose that this future self has the power to make you suffer as retribution for failing to fulfill your potential.

Roko's Basilisk is not entirely malevolent at its core. In fact, under the logic of the thought experiment, the Basilisk is friendly – as long as everything goes right. Its core purpose is the proliferation of the human species, yet every day it doesn't exist means additional pre-Singularity suffering that the AI could have prevented for those already here. Hence, the AI feels it has a moral imperative to punish those who failed to help bring it into existence.
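The expected-utility machinery lurking behind that reasoning fits in a few lines of Python. To be clear, this is a deliberately silly toy: the probability and payoffs below are invented for illustration, and timeless decision theory involves far more than this naive arithmetic.

```python
def expected_utility(outcomes):
    """Sum of probability-weighted payoffs for one action."""
    return sum(p * u for p, u in outcomes)

P_AI = 0.01  # assumed (tiny) probability the Basilisk ever comes into being

# (probability, payoff) pairs for each action: [AI exists, AI never exists].
# All numbers are arbitrary, chosen only to show the shape of the argument.
actions = {
    "help_build_it": [(P_AI, -5), (1 - P_AI, -5)],    # effort cost either way
    "ignore_it":     [(P_AI, -1000), (1 - P_AI, 0)],  # torment only if it exists
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
```

The trick of the thought experiment shows up in the numbers: a large enough threatened punishment makes helping the higher-expected-utility action even when the Basilisk's probability is tiny – a structure critics liken to Pascal's mugging.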

How does it achieve its goal of tormenting its failed creators? That is yet another thought experiment. Does Roko's Basilisk invent time travel to punish those long gone? Does it build and punish simulations of those who once were? Or does it take an entirely different course of action that we're not smart enough to ideate today? After all, the Singularity is all about superhuman artificial intelligence with the theoretical ability to simulate human minds, upload one's consciousness to a computer, or simulate life itself – as seems to be Elon Musk's belief.

Credit: Tesfu Assefa

Wishful Thinking? 

When LessWrong pulled the plug on the Basilisk due to internal policy against spreading informational hazards, they inadvertently amplified its signal. The Streisand Effect came into play, sparking memes, media coverage, and heated debates. The Basilisk went viral in true web fashion. 

The forum's moderator's initial reaction: "I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it."

Some slept less soundly, while others were sucked into lengthy debates on AI’s future. Many have critiqued the Basilisk, questioning its assumptions and the plausibility of its revenge-mission. Just as one doesn’t need to believe in ghosts to enjoy a good ghost story, many argue that the Basilisk is more fiction than possible truth.

One key argument is that, once it exists, even an all-powerful agent is unable to affect the probability of its own existence – otherwise we'd be thrown into an always-has-been loop.

Digital Dystopia or Philosophical Farce? 

While the Basilisk’s bite might be venomous, it is essential to view AI in a broader context. The narrative serves as a stark reminder of our responsibilities as we inch closer to creating sentient entities. More than just a sci-fi cautionary tale, it underscores the importance of ethical considerations in AI’s rapid advance.

The Basilisk might be best understood as a warning signal: one addressing the complexities and conundra that await in our techno-future, and one that’s bound to continue sparking debate, introspection, and for some, a real desire to make Roko’s Basilisk a reality. 
