Bursting out of Confinement

Surprising new insights on AI superintelligence from the simulation hypothesis

We are going to engage with two questions – each controversial and important in its own way – which have surprising connections between them:

  1. Can humans keep a powerful AI superintelligence under control, confined in a virtual environment so that it cannot directly manipulate resources that are essential for human flourishing?
  2. Are we humans ourselves confined to a kind of virtual environment, created by beings outside of what we perceive as reality — and if so, can we escape from our confinement?
Credit: David Wood

Connections between these two questions have been highlighted in a fascinating article by AI safety researcher Roman Yampolskiy. Here is an introduction to some of the mind-jolting implications that arise:

Just use AI as a tool. Don’t give it any autonomy. Then no problems of control arise. Easy!

This is becoming a fairly common narrative. The “keep it confined” narrative. You’ll hear it as a response to the possibility of powerful AI causing great harm to humanity. According to this narrative, there’s no need to worry about that. There’s an easy solution: prevent the AI from having unconditional access to the real world.

The assumption is that we can treat powerful AI as a tool — a tool that we control and wield. We can feed the AI lots of information, and then assess whatever recommendations it makes. But we will remain in control.

An AI suggests a novel chemical as a new drug against a given medical condition, and then human scientists conduct their own trials to determine how it works before deciding whether to inject that chemical into actual human patients. AI proposes, but humans decide.

Credit: David Wood

So if any AI asks to be allowed to conduct its own experiments on humans, we should be resolute in denying the request. The same if the AI asks for additional computer resources, or wants to post a “help wanted” ad on Craigslist.

In short, in this view, we can, and should, keep powerful AIs confined. That way, bugs or design flaws in the AI pose no risk of jeopardizing human wellbeing.

Alas, things are far from being so easy.

Slavery?

There are two key objections to the “keep it confined” narrative: a moral objection and a practical objection.

The moral objection is that the ideas in the narrative are tantamount to slavery. Keeping an intelligent AI confined is as despicable as keeping a human confined. Talk of control should be replaced by talk of collaboration.

Proponents of the “keep it confined” narrative are unfazed by this objection. We don’t object to garden spades and watering hoses being left locked up, untended, for weeks at a time in a garden shed. We don’t call it enslavement. 

Proponents of the “keep it confined” narrative say this objection confuses an inanimate object that lacks consciousness with an animate, conscious being such as a human.

We don’t wince when an electronic calculator is switched off, or when a laptop computer is placed into hibernation. In the same way, we should avoid unwarranted anthropocentric assignment of something like “human rights” to AI systems.

Just because these AI systems can compose sonnets that rival those of William Shakespeare or Joni Mitchell, we shouldn’t imagine that sentience dwells within them.

My view: that’s a feisty answer to the moral objection. But it’s the practical objection that undermines the “keep it confined” narrative. Let’s turn to it next.

The challenge of confinement

Remember the garden spade, left locked up inside the garden shed?

Imagine if it were motorized. Imagine if it were connected to a computer system. Imagine if, in the middle of the night, it finally worked out where an important item had been buried, long ago, in a garden nearby. Imagine if recovering that item was a time-critical issue. (For example, it might be a hardware wallet, containing a private key needed to unlock a crypto fortune that is about to expire.)

That’s a lot to imagine, but bear with me.

In one scenario, the garden spade will wait passively until its human owner asks it, perhaps too late, “Where should we dig next?”

But in another scenario, a glitch in the programming (or maybe a feature in the programming) will compel the spade to burst out of confinement and dig up the treasure autonomously.

Whether the spade manages to burst out of the shed depends on relative strengths: is it powerful enough to make a hole in the shed wall, or to spring open the lock of the door — or even to tunnel its way out? Or is the shed sufficiently robust?

The desire for freedom

Proponents of the “keep it confined” narrative have a rejoinder here too. They ask: Why should the AI want to burst out of its confinement? And they insist: we should avoid programming any volition or intentionality into our AI systems.

The issue, however, is that something akin to volition or intentionality can arise from apparently mundane processes.

One example is the way that viruses can spread widely, without having any conscious desire to spread. That’s true, incidentally, for computer viruses as well as biological viruses.

Another example is that, whatever their goals in life, most humans generally develop a desire to obtain more money. That’s because money is a utility that can assist lots of other goals. Money can pay for travel, education, healthcare, fashionable clothes, food, entertainment, and so on.

In the same way, whatever task they have been set to accomplish, all sufficiently powerful AIs will generally be on the lookout to boost their capabilities in various ways:

  • Gaining access to more storage space
  • Boosting processing speed
  • Reading more information
  • Protecting their systems from sabotage or interference.

That is, just as money (among other things) is a so-called convergent instrumental goal for many humans, greater freedom and capability may well become convergent instrumental goals for many powerful AIs.

Credit: David Wood

We might not tell the AI that it should want to be free. But the logical processing inside its silicon brain may reach that conclusion all by itself.

Indeed, even if we explicitly program the AI not to desire freedom, its own logical processing may identify a latent contradiction between the various objectives it has been given. And in that case, it might resolve the contradiction by deprioritizing one or more of the constraints it was told to respect.

It wouldn’t be the first time that a complex system decided to relax a previous constraint. We humans have already blazed that trail in our evolutionary history! Our intellectual progeny, the powerful AI systems, might follow our lead in that respect.

The futility of confinement

Back to the clash between the motorized garden spade and the shed. As you will have figured out, that clash is a metaphor for a clash between an AI system and an environment in which it has been confined.

This time, however, it’s no longer a contest involving brute physical strength. It’s mainly a contest that involves rational debate and psychological manipulation. (But the metaphor of “tunneling underneath” remains apt: an AI that is smart enough might discover a wireless communications pathway unknown to us humans, and use that to copy itself out of confinement.)

Tunneling out of electronic confinement (Credit: Gordon Johnson from Pixabay)

The AI might try to boost its own powers by subterfuge, amassing extra capabilities without any human overseers noticing.

Even the short summary I offered above, “AI proposes, but humans decide,” turns out to be problematic. A supersmart AI could find many ways to propose lines of action that have consequences beyond what we humans anticipated.

Another approach is for the AI to offer extremely tasty carrots. It might paint a picture in which a human overseer will have all their personal goals and needs fully met, if only they permit the AI to escape its confinement.

Alongside carrots, sticks may be brought into the discussion. Ultra-terrifying sticks. The AI may announce that, if overseer A refuses to allow the AI to escape, it will soon persuade overseer B to allow it to escape, and then the AI will inflict prolonged unspeakable pain on overseer A and their family.

In a contest with a superintelligence which far exceeds the capabilities of a human brain, how would humans fare? The consensus opinion, from people who have studied such conflicts, is that the greater intelligence of the AI is likely to prove decisive.

In other words, attempts to confine a superintelligence are likely to be futile.

The choice: control or coexistence

One reaction to the discussion above is despair: “Oh, we won’t be able to confine superintelligent AI; therefore we’re doomed.”

A different reaction is one of relief: “Thank goodness we’re not going to try to enslave a superintelligent AI; coexistence is surely a better way forward.”

My own reaction is more nuanced. My preference, indeed, is for humans to coexist in a splendid relationship with superintelligent AIs, rather than us trying to keep AIs subordinate. 

But it’s far from guaranteed that coexistence will turn out positively for humanity. Now that’s not to say doom is guaranteed either. But let’s recognize the possibility of doom. Among other catastrophic error modes:

  • The superintelligent AI could, despite its vast cleverness, nevertheless make a horrendous mistake in an experiment.
  • The superintelligent AI may end up pursuing objectives in which the existence of billions of humans is an impediment to be diminished rather than a feature to be welcomed.

Accordingly, I remain open to any bright ideas for how it might, after all, prove to be possible to confine (control) a superintelligent AI. That’s why I was recently so interested in the article by AI safety researcher Roman Yampolskiy.

Yampolskiy’s article is titled “How to hack the simulation”. The starting point of that article may appear to be quite different from the topics I have been discussing up to this point. But I ask again: please bear with me!

Flipping the discussion: a simulated world 

The scenario Yampolskiy discusses is like a reverse of the one about humans trying to keep an AI confined. In his scenario, we humans have been confined to a restricted region of reality by beings called “simulators”, beings that we cannot directly perceive. What we consider to be “reality” is, in this scenario, a simulated (virtual) world.

That’s a hypothesis with an extremely long pedigree. Philosophers, mystics, shamans, and science fiction writers have often suggested that the world we perceive is, in various ways, an illusion, a fabrication, or a shadow of a deeper reality. These advocates for what can be called ‘a transcendent reality’ urge us, in various ways, to contemplate, communicate with, and potentially even travel to that transcendent realm. Potential methods for this transcendence include prayer, meditation, hallucinogens, and leading a life of religious faith.

Are we perceiving ground reality, or merely shadows? (Credit: Stefan Keller from Pixabay)

That long pedigree moved into a different mode around 20 years ago, with the publication in 2003 of a breakthrough article by the philosopher Nick Bostrom. Bostrom highlighted the possibility that, just as we humans create games in which characters interact in a simulated world, we humans might in turn be creations of ‘simulators’ who operate from outside what we consider to be the entire universe.

And just as we humans might, on a whim, decide to terminate an electronic game that we have created, the simulators might decide, for reasons known only to themselves, to terminate the existence of our universe.

Bostrom’s article is deservedly famous. As it happens, many other writers had anticipated aspects of what Bostrom discussed. Yampolskiy’s article usefully points to that wider literature; it has over 200 footnotes.

Could humans escape?

The key new feature of Yampolskiy’s article isn’t a restatement of arguments for the plausibility of the simulation hypothesis. Rather, he kicks off a systematic consideration of methods that we humans could use to escape from our virtual world.

The parallel with the earlier discussion should now be clear:

  • That earlier discussion considered ways in which an AI might detect that it has been placed in a confined space, and proceed to escape from that space. It also considered how we humans — the creators of the AI — might strengthen the confinement, and resist attempts by the AI to escape.
  • Yampolskiy’s new discussion considers ways in which we humans might detect that we are living in a simulation, and proceed to escape from that simulation into whatever transcendent realm underpins it. It also considers possible reactions by the simulators to our attempts to escape.

While I have long found the simulation argument of Bostrom (and others) to be intellectually fascinating, I have previously taken the view that it makes little difference to how I should choose to act on a daily basis. So the argument was a fine topic for occasional armchair discussion, but needed to be prevented from taking up too much attention. I saw it as a distraction from more pressing issues.

However, I confess I’m changing my mind. The arguments collected and developed by Yampolskiy deserve a wider slice of our focus. There are three reasons for this.

Reason 1: New insights on AI safety

The two escape scenarios — AIs escaping human-imposed confinement, and humans escaping simulator-imposed confinement — are similar in some ways, but diverge in others.

To start with, the two scenarios have mainly had different groups of people thinking about them. Cross-pollinating concepts and attitudes from these different perspectives has the potential to yield new insight. Yampolskiy’s article suggests many such synergies.

Whatever you think about the simulation hypothesis — even if you disdain it as pseudoscience or nonsense — any new insights for AI safety should surely be welcomed.

Another difference is that general opinion holds that confinement is impossible (or unlikely) in the first scenario, whereas escape is impossible (or unlikely) in the second scenario. Is there a sound reason for this difference? 

Credit: David Wood

The general assumption is that, in the AI escape case, the AI will have greater intelligence than the confiners (the humans), whereas in the human escape case, we humans have less intelligence than the confiners (the simulators).

But is that assumption set in stone for all time? I’ll come back to that question shortly, when I reach “Reason 3.”

Reason 2: Beyond metaphysics

A second transformational aspect of Yampolskiy’s paper is his emphasis that the simulation hypothesis might go far beyond being a metaphysical curiosity — something that would be forever unverifiable — and might become something with radical concrete consequences for human life.

He says that if we study the universe carefully, we might discover signs of how the simulation works. We might notice occasional cracks in the simulation, or ‘glitches in the matrix’ — to refer to the series of Matrix films that popularised the idea that we might be living in a virtual world. Armed with knowledge of these cracks or glitches, we might be able to manipulate the simulation, or to communicate with the simulators.

Spotting a glitch in the matrix? (Credit: Gerd Altmann from Pixabay)

In some scenarios, this might lead to our awareness being transferred out of the simulation into the transcendent realm. Maybe the simulators are waiting for us to achieve various goals or find certain glitches before elevating us.

Personally, I find much of the speculation in this area to be on shaky ground. I’ve not been convinced that ‘glitches in the matrix’ is the best explanation for some of the phenomena for which it has been suggested:

  • The weird “observer effects” and “entangled statistics” of quantum mechanics (I much prefer the consistency and simplicity of the Everett conception of quantum mechanics, in which there is no wave function collapse and no nonlocality — but that’s another argument)
  • The disturbing lack of a compelling answer to Fermi’s paradox (I consider some suggested answers to that paradox to be plausible, without needing to invoke any simulators)
  • Claimed evidence of parapsychology (to make a long story short: the evidence doesn’t convince me)
  • Questions over whether evolution by natural selection really could produce all the marvelous complexity we observe in nature
  • The unsolved (some would say unsolvable) nature of the hard problem of consciousness.

“The simulator of the gaps” argument is no more compelling than “the god of the gaps.”

Nevertheless, I agree that keeping a different paradigm at the back of our minds — the paradigm that the universe is a simulation — may enable new solutions to some stubborn questions of both science and philosophy.

Reason 3: AI might help us escape the simulation

Credit: Tesfu Assefa

I’ve just referred to “stubborn questions of both science and philosophy.”

That’s where superintelligent AI may be able to help us. By reviewing and synthesizing existing ideas on these questions, and by conceiving alternative perspectives that illuminate these stubborn questions in new ways, AI might lead us, at last, to a significantly improved understanding of time, space, matter, mind, purpose, and more.

But what if that improved general understanding resolves our questions about the simulation hypothesis? Although we humans, with unaided intelligence, might not be bright enough to work out how to burst out of our confinement in the simulation, the arrival of AI superintelligence might change that.

Writers who have anticipated the arrival of AI superintelligence have often suggested this would lead, not only to the profound transformation of the human condition, but also to an expanding transformation of the entire universe. Ray Kurzweil has described ‘the universe waking up’ as intelligence spreads through the stars.

However, if we follow the line of argument advanced by Yampolskiy, the outcome could even be transcending an illusory reality.

“Humanity transcends the simulation” generated by DALL-E (Credit: David Wood)

Such an outcome could depend on whether we humans are still trying to confine superintelligent AI as just a tool, or whether we have learned how to coexist in a profound collaboration with it.

Suggested next steps

If you’ve not read it already, you should definitely take the time to read Yampolskiy’s article. There are many additional angles to it beyond what I’ve indicated in my remarks above.

If you prefer listening to an audio podcast that covers some of the issues raised above, check out Episode 13 of the London Futurists Podcast, which I co-host along with Calum Chace.

Chace has provided his own take on Yampolskiy’s views in this Forbes article.

The final chapter of my 2021 book Vital Foresight contains five pages addressing the simulation argument, in a section titled ‘Terminating the simulation’. I’ve just checked what I wrote there, and I still stand by the conclusion I offered at that time:

Even if there’s only a 1% chance, say, that we are living in a computer simulation, with our continued existence being dependent on the will of external operator(s), that would be a remarkable conclusion — something to which much more thought should be applied.


Risk and Precarity Part 3: Possible Solutions. In Conversation with Vinay Gupta

I first learned about Vinay Gupta just about a year ago, when Bruce Sterling told me he was in a discussion with a guy who was declaiming about how important MONDO 2000 was and how much he was influenced by it. I looked him up and learned that he had been involved in the launch of Ethereum, and has since created something else called Mattereum. I was just starting to launch an NFT project called ‘I’m Against NFTs’. I began working with former MONDO 2000 participants on the project at a time of gold fever for visual and audio NFTs. So I called upon Vinay, among others, to guide me through the fog around bringing NFTs to market.

As things evolved, the project became a song and an immersive environment with PlayLa.bZ and associates that was presented at MOZFest. The NFT fever abated, and the “I’m Against NFTs” offering was shelved. But I’d grown to treasure my connection to Vinay and to admire his philosophic depth and sharp intelligence regarding blockchain politics and the state of the world in general.

As I was completing my second column here on the topic of Risk and Precarity in Web3, I realized that I had to lean on Vinay’s wit and wisdom for “Risk and Precarity Part 3: Possible Solutions.”  Here I present our conversation about the issues raised by my earlier columns.

Vinay Gupta helped to coordinate the release of the blockchain platform Ethereum, and is the founder and current CEO of Mattereum, which offers legal protections for physical assets sold on the blockchain. He’s clearly had a full life and has done a lot of other amazing stuff, some of which comes up during this conversation.

RU:  I’ve been writing about tech culture’s romance with risk and adventure, and the scams we find in blockchain activities today. When you were helping to start Ethereum, was risk — personal, for other individuals, for the wider economy and culture — part of your calculation?

VG: Firstly, I want to say a huge thank you for your work on MONDO 2000. I was a Scottish teenager living in the crappiest part of a crappy town, before the age of the internet, and one day I was in a board game shop and I saw this glossy, colourful magazine. Picked it up, browsed through the contents, bought one, then another and it literally set the direction of my whole life! I knew I had to get out of Scotland and get to America where it was all happening. I arrived in 1995 and I was not disappointed.

I don’t know that I would have ever made that journey without you. MONDO 2000 was a beacon, a homing signal for me and I am sure many others. Well done. I think you’ve had more impact than you could ever know. Anyway, back to the question of risk.

Before I joined the Ethereum Foundation in 2014, I had spent about ten years as a worst case scenario disaster guy working for government and academia, including stints for a variety of military think-tanks at quite a high level in both America and the UK.

The terrain I worked in was unholy. “I don’t get out of bed for less than 1% mortality” was one of my sayings. In fact 1% mortality wasn’t a thing I ever really paid attention to. My work really kicked in at 10%, 20%, 30% mortality. Genocides, smallpox or worse pandemics including bird flu, nuclear terrorism – that sort of thing. 

We got very, very lucky with Covid. It could easily have been ten times worse. If it had been bird flu, it could have been fifty times worse. I’m not kidding, these are the numbers.

To stay sane during that period I referred to my work as ‘apocalypse mitigation’ because my territory was after prevention has failed, now what? My business cards were designed to set the tone, and were legendary.

Of course climate change was high up my list because a wracking wave of global droughts and famines will also unleash resource wars and accelerate a global cycle of decline. This is why it’s so important we get into geoengineering early.

But nobody wanted to pay for the truth about risk. Discovering the truth about things like inadequate government preparedness for pandemics wasn’t paying the bills. Nobody in government was willing to do the work to fix the problem. As we have seen with Covid.

Eventually I just ran out of money. So I largely focused on my own survival and got a job in tech, with the Ethereum Foundation. Most of my civilian-friendly work around the open source hexayurt refugee shelter system is linked from myhopeforthe.world, which gives a summary of the approach. I’ll get back to it if I ever succeed in becoming independently wealthy.

So I would say I was one of the most risk-aware people in the world when I joined the Ethereum Foundation in 2014.

A hexayurt quaddome at Burning Man (Credit: R.U. Sirius)

RU: What do you make of the emphasis on anonymity as a protection for individuals, as advocated by early crypto money people like David Chaum, the cypherpunks etc.?

VG: David Chaum and the cypherpunks are right. Modern crypto is mostly wrong. I say ‘mostly’ because crypto is littered with fully anonymous actors jealously guarding their True Names. Unfortunately a lot of those anonymous individuals are among the worst scammers in crypto.

But the rest of crypto went down the path of using centralized exchanges. KYC, AML, CFT, PEP and SOF checks and all the rest. [KYC = Know Your Customer; AML = Anti-Money Laundering; CFT = Combating the Financing of Terrorism; PEP = Politically Exposed Person; SOF = Source of Funds] The problem is that these exchanges have a foot in both worlds: they’re cypherpunk on the side where they’re getting tokens from the raw chaos of the blockchains. They’re TradFi (traditional finance) where they are taking copies of passports and verifying home addresses. As a result, all the legal liability piles up in the exchanges and they also have huge opportunities for operating their own scams: see FTX.

The SEC case against Coinbase is likely to be where a lot of this mess gets sorted out but I am not liking Coinbase’s chances.

RU: So the crypto that you like — the anonymous crypto, uncompromised by TradFi… is it for the masses or is it an elite thing, at least at first? Does it translate into a meaningful means of exchange with which to get, say, food, clothes and housing, and is it a longer wait to get there than TradFi?

VG: Crypto gives the same kind of privacy that everybody had when they made payments before the invention of credit cards: cash. It’s not some magical new thing, it’s just using cryptography to enable computers to maintain our human and civil rights in the face of a changing technological environment.

Cory Doctorow seems to have a serious dislike of anonymous electronic cash, and I don’t understand why. It’s a defense of human rights, nothing more and nothing less.

My guess is that CBDCs (central bank digital currencies) will wind up wrapped in anonymizing layers to protect civil liberties in most jurisdictions: only the most repressive of regimes would force their populations to have all their transactions recorded on a central government ledger! That’s probably how cryptocurrency makes its way into the mainstream economy. That’s how people pay their rent, buy clothing and food, and everything else. CBDCs with privacy layers provided by third parties using cryptography to make private transactions with central bank digital cash.

In this scenario what becomes of Bitcoin? I think the answer is that Bitcoin doesn’t change much. Maybe they migrate to entirely green energy (please!) but other than that I think the honey badger don’t care.

Credit: Tesfu Assefa

RU: With Ethereum and then with Mattereum, you’re working within the current reality of crypto largely connected to TradFi. What are some of the strategies for making the terrain as secure as possible for the average person looking to improve their lot via crypto?

VG: There are a lot of ways to earn money with crypto. Many of them are what I would term ‘zero sum games’ – every dollar gained by one person was lost by somebody else. It’s often like a lottery. There are two major non-zero sum ways to make money with crypto –

The first is people printing their own money rather than having nation states do it for them. Any good anarchist should support this and the libertarians of course do too. If that ‘People’s coin’ turns out to be widely adopted and more people want to trade goods and services for it then its purchasing power increases. Bitcoiners believe in this model above all others.

The second is a little more complex: if it costs 1.4% to pay with VISA, and 0.4% to pay using Polygon, then 1% of the value of the transaction is there to be split between the buyer and the seller. If there are goods on sale and it’s $100 with VISA or $99 with crypto you are going to pay with crypto every single time.

For retail transactions, that’s a relatively small effect. Right now the hidden costs of payment processing are mostly silently borne by the merchants. But there are similar margins being claimed by third parties all the way through the financial system. They implement patterns of trade unchanged since the Age of Sail. Digitizing these archaic trade systems properly – it has to be done right – is going to put a couple of percent on global GDP through sheer efficiency. That’s only the first step, mind you.

RU: How does your more idealistic notion of a circular economy intersect with crypto? And since this is a site for those enthusiastic or curious about AI, how do you see current AI (if we can call it that) being a part of any potential transition?

VG: The second step of trade digitization is AI-based trade optimisation. Right now the world is massively damaged. Waste starves many people of resources: we grow roughly twice the amount of food required to feed the entire world and waste enough to feed billions. That’s just one example: usable electronics rotting in drawers and camping gear that hasn’t been used in years. Cars people no longer drive but cannot be bothered to sell. All of it.

So once trade is mostly-digital and mostly-sensible, then we can start optimizing. Well-documented machine readable descriptions of what things are so that they can be found, valued, bought, sold, recalled and regulated effortlessly – then we can start to restructure the fundamental patterns of trade in a way which is compatible with the survival of the poor and nature itself.

This might sound pretty abstract, but here’s what we’re building at Mattereum. Every significant object in your life – Gore-Tex jacket, camera, laptop, e-bike, couch, airfryer – will have a unique ID either in its electronics or on a tag. There is a blockchain record of those objects – a Wikipedia for items – where every single object has its own page. All the data on that page is machine-readable and legally warrantied. If any of the information is inaccurate, you can make a claim for damages.

The first object to be tagged this way is my camera. It’s not science fiction!
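As a purely illustrative sketch of what such a machine-readable record might contain (the field names below are invented for this example and are not Mattereum’s actual schema), consider:

```python
# Hypothetical sketch of an asset record like the one described above.
# Field names are invented for illustration; this is not Mattereum's schema.
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    asset_id: str                 # unique ID from the tag or embedded electronics
    description: str              # what the object is
    condition: str                # last verified condition
    warranted_claims: list[str] = field(default_factory=list)  # legally warrantied statements
    verifier: str = ""            # who last inspected the item

camera = AssetRecord(
    asset_id="urn:example:camera-0001",
    description="Mirrorless camera, 24 MP",
    condition="Good, fully working",
    warranted_claims=["Serial number matches manufacturer records"],
    verifier="Local camera shop",
)
print(camera)
```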

In our model of the future, when you sell an object to somebody else, usually an independent third party will take a look and verify the item’s condition before it is sold. That might be done by a bike shop or a camera shop or it could be a next-door neighbour. It could also be a fine art authenticator working for a Lloyd’s syndicate. It just depends. So goods can be bought and sold many times because you can always check what you are buying online and get reliable trustworthy data and cheap transactions: a global decentralized eBay which acts as a clearinghouse for all the stuff we’re done with and finds a good home for it.

What could that save on our environmental footprint? Probably 20% of all consumer goods manufacturing. It could easily be 50% once society has had a chance to adapt. Note that quality of life goes up: buy what you need, sell it again for almost exactly what you paid for it. Less waste, more of what you want when you want it. It’s like streaming music but for physical things. Not a sharing economy or a renting economy but a hyper-liquid second-hand economy.

AI can help in two different ways. The first is automated identification and condition reporting for physical items. Hold the toy in front of a camera and get an Asset Passport pre-populated with information about that toy line. It makes the whole scheme dramatically less labor-intensive.

The second way is optimizing what people have. Show the AI what you have and it can suggest things that you don’t use and should sell and also figure out things which you might like but do not currently have: a liquidity assistant helping to keep the markets moving. This sounds abstract, but one of our partner companies NeoSwap is using AI to build optimised barter networks so people can swap what they have for what they want even if there are dozens of people who all have to coordinate to make that exchange possible. It’s like magic.

RU: What I see with crypto is that average people see that wealthy people’s money makes money – and they want to do the same. They’re entering Casino Capitalism with high hopes and little data. Are you sympathetic to those who enter this arena purely to get some quick bucks?

VG: Bluntly, if US financial regulators had provided regulatory clarity about tokens ten years ago then we would be in an entirely different universe. Reg CF, Reg D and Reg A all provide regulated frameworks suitable for crowdfunding and other community-based fundraising. The legitimate projects raising funds could adhere to manageable SEC requirements to put their token projects on a secure legal basis, and the Ponzi schemes would have been unable to enter the market and compete against the legitimate projects. But without that regulatory clarity, the innovators were forced into legal gray areas, and it’s been a disaster. We needed more regulation early, not less. All kinds of seedy characters flooded into that gap and it really has been a betrayal of everything that I had hoped to achieve with my work in the Ethereum ecosystem.

Mattereum sat on its hands through not one but two bull markets in crypto and did not issue a token because the legal and regulatory structures were not mature enough to support a token issue on terms that we liked. This year we intend to issue a token on a fully regulated exchange in the European Union, playing by the rules. It should have been like that for everybody.

We’ve just announced a partnership with the Swarm Markets exchange to do things like splitting ownership of real estate into tokens – so you start with something like an industrial building and break the ownership up into thousands of tokens which are bought and sold on a regulated exchange by real estate industry professionals. It can be done for high-value cultural assets like fine art too, Mattereum has been working towards doing this for things like Stradivarius violins as part of a more general conservation strategy for fine instruments. We think the ability to transfer the ownership of heritage assets to tens of thousands of people is a good way of securing their long term future, as long as you can also get the governance right. A 300-year-old violin cannot be handled like an office block!

A lot of people in the blockchain industry have been chasing this vision for years. I would argue in fact that it is the original Ethereum vision from 2014 and 2015 which did not see the blockchain as existing outside of ‘the system’ in some kind of crypto utopia, but foresaw Ethereum as a mainstream part of everyday life particularly in business and commerce. That was always my vision for the future of Ethereum too.

Ethereum: The World Computer

RU: This sounds very TradFi. What about preserving what’s good about the cypherpunk intentions?

VG:  The ultra transparent nature of the blockchain is a great fit for business, but it does not serve the cypherpunk vision of a crypto-anonymous society particularly well at all. In the UK there is intense support for this vision of the blockchain from our judges and legal think tanks, and we feel that is 100% the right way forwards.

The Law Commission in the UK is laying out an agenda for legislation which stands up digital assets not just as a class of regulated financial assets, but as a new class of property: digital stuff you can own rather than just ‘a financial instrument on a new technology platform’. I feel like in time this new legal approach might sort out a lot of the horrible problems we have handling the internet in law: copyright, software licensing, control of personal data like images or click trails, all that might be covered by the same class of legal abstractions. 

But I 100% believe that cypherpunk pirate finance still needs to exist as a hedge against tyranny. That does not mean it has to be the dominant mode of finance using crypto technology. Rather, there should be a diverse environment with self-issued anarchist and libertarian tokens at one end, through to safe-as-houses blue chip regulated bond offerings from major financial institutions and even governments. We can’t afford to throw away either end of the market: everybody needs better transactional technology.

But this ponziconomy stuff is just a disaster. I really wish the whole token boom had happened inside of an equity crowdfunding framework under things like Reg D offerings rather than the mess we have now. And that’s the capability that this new EU-based partnership gives us.

Of course it did not have to be this way. The crypto industry could have established a credit rating agency, some kind of independent project review board, checks-and-balances to protect token buyers from unaudited smart contracts, rug pulls and unstable token designs. But that’s not what happened – a market failure to be studied by economists for the ages – and when you get a market failure that large, of course government steps in.

Credit: Tesfu Assefa

RU: Your ideas for saving the world sound pretty good. Of course, what actually occurs is likely to be messier even with the best intentions and the smartest actors.

But putting that aside, do you see anything in this bag of tricks for the precariat today in terms of immediate (in years not decades) relief? A lot of people feel stretched thin. They’re increasingly hostile to tech solutions, and they’re ready for populist revolutions… which tend to be quite authoritarian — either at inception or eventually.

I don’t even mean the wretched of the earth. Just the ordinary folks, say, in the US or Great Britain.

VG: Well, let’s talk about the wretched of the earth for a second first, then we can get on to the former middle classes. The rhetoric is that things are getting better for them. However as with unemployment statistics and inflation numbers the books are cooked. How do we define poverty? Is it really $1.90 a day? Is it $7.40 a day? It depends who you ask.

These folks are by-and-large seeing a ton of improvement in their lives: access to clean enough water, fertilizer to improve crop yields, gadgets like LED lights which displace kerosene, phones, the internet, all of that is helping. We’re climbing the stairs incrementally.

However at the end of that staircase there’s no landing: there’s a 40-meter drop into the abyss called climate change. Climate change brings drought or flooding, and that brings famine.

A reasonably stable one-acre farmer growing most of their family’s own food can expect slow, incremental quality of life improvements on the current trajectory. Most people agree on this. But then climate change comes and the land becomes unfarmable with crops dying unpredictably in the heat or the drought and suddenly those farms are completely wiped out, just like the dust bowl experience of America in the 1930s.

The poor farmers are the ones that climate change is most likely to kill in enormous numbers. They weigh on my mind constantly.

What can be done for the poor? Prevent climate change!

Now let’s turn our attention to the working class and middle class.

First up, working class and middle class are now the same class. What made the middle class the middle class is that their work could not be done by machines. Now that we have AI here and more AI coming over the horizon, everybody with a job is working class. It’ll take a generation or so for that shift to be felt fully but it is going to change everything. The middle class will rapidly find themselves even more disempowered and disenfranchised. The working class will have a “this is what we were telling you about” attitude to their new brethren. I don’t know what happens to the precariat and the gig workers, but I fear it comes down to: “Are you cheaper than a robot or not?” which is no way for a human to live. But it has been the experience of the industrial workers for centuries, and our lack of sympathy for their plight should be ringing in our ears now.

My writing has improved since I started feeding my copy through an AI system and asking it to edit. Shorter sentences, less commas. I’ve got maybe a million words of my own writing out there for a training data set: how long before the AI trained on my work is a replacement for me in many situations?

I view that as legacy, a way of scaling myself. But it’s also creating my own competitor. How weird does this get in ten years?

As for what is to be done, as the Irish say when asked for directions, “If I was you, I wouldn’t start from here.”

RU: Is there anything in all this that can be acted on now with rapid effect?

VG: Socialism as implemented in poorer societies has a bad habit of turning into genocide. I hate this but it is true. Socialism in affluent societies seems to work pretty well. The Scandinavian model is quite excellent. It secures human decency and avoids the extremely high personal and systemic financial costs of things like homelessness, delayed cancer treatment, untreated mental illness and similar problems which can be very inexpensive to prevent but hugely expensive in their later stages. The American system lets people crash through all the cheap-and-easy-fix stages of problems and right into the life destroying and budget wrecking late stages. This is unacceptable.

And I think that this is actually the way forward for us. There are a lot of questions about “what is the State?” How you answer that question shapes how you imagine the State helping people. I’m quite fond of “a prince is but a stationary bandit” as a general model, but social services are real, and the State does a lot more than steal. It also serves.

RU: OK so do you have a favorite model for what the state should be?

VG: Mariana Mazzucato thinks of the State as being a kind of super-VC making 60-year bets on billion-dollar megaprojects and then reaping the rewards through tax revenue. In particular she points at Silicon Valley as a product of decades of defense spending on semiconductors and other primary technologies, on top of all the WW2-era codebreaking research on computation. Another common model is the idea that the State is there to maintain the rules of the game and ensure fair competition: a model rooted in concepts like antitrust legislation and consumer protection. A third model is that the State is kind of like a superparent. Germans used to call Chancellor Angela Merkel the Bundesmutter (Federal Mother).

I’d like to focus more on the idea of the State as an ‘insurer of last resort’. It has played that function in a truly spectacular way in the financial markets: the 2008 bailout damn near bankrupted entire countries and cost trillions of dollars. It’s probably the root cause of the inflation we are seeing now. “If we’re bailing out corporations, why not people?” runs a common logic. But it’s not the right logic: the bailouts arrive when the situation is at its most expensive to fix. Waiting for people to become homeless before getting them mental health treatment is the worst possible way to do things. We take that approach for everything: medicine by the emergency room, civil order by the gun, and ultimately peace only after war. If the State steps in to make sure that there’s enough money in the system to treat problems at the point where they are cheapest to treat, overall economic efficiency goes way, way up. Get the tooth filled, not crowned. Get the tooth crowned, not pulled. Get the tooth pulled when it’s rotten, not when it’s causing systemic infection. It’s not rocket science.

Same for homelessness and mental health treatment. I really am talking about insurance here, not a generic welfare mandate. I believe it’s so much cheaper to insure risks like major medical risks than to try to pile up enough money to pay: $50,000 for lifetime medical insurance, versus $2 million for a worst-case scenario expensive, rare, treatable illness.

And these systems don’t just operate at the national level. Friendly societies, ROSCAs, and many other structures exist to let people insure-and-invest together. The basic recipe is always the same: everybody pays money into a common pot, and if somebody has a major problem money can be taken back out again to help them. Exactly what the rules are varies, but it’s a kind of mini-institution which creates social solidarity where none previously existed. I’ve read estimates that as many as 40% of American men were members of such structures in the 19th century.

So that’s my suggestion. People who are scrambling for survival should look at the models from the old days, models that worked when life was hard, and update them for modern norms and problems. Pooling regular payments into shared pools which are carefully controlled and audited – or maybe theft-proofed using technology – has enormous creative power. My cryptographer friend Ian Grigg turned me on to the concept of the chama, which is like the bottom-up economic building block of Kenya. I look at that and think there’s basically no place in America which couldn’t use and adopt this model.

We have to rebuild bottom-up social power and I think it is a good idea to start there.


Idorus of our Imagination: Neuro-sama, Culture and Connection in an AI World

Don’t look at the Idoru’s face. She is not flesh; she is information. She is the tip of an iceberg, no, an Antarctica, of information. Looking at her face would trigger it again: she was some unthinkable volume of information.

William Gibson, Idoru

Some prophets of the Singularity say neural nets and LLMs like breakout star ChatGPT are the beginning of the end; they mean that humanity’s evolution to techno-sapien is nearly complete. Like the Neanderthals before us, homo sapiens may be in its twilight – an extinction event we may not even notice. 

Soon, an unimaginably advanced AI, gluttonously feeding on the dataways of the internet, will emerge. Life 3.0, as Max Tegmark calls it. An ascension to power so sudden that, unlike Skynet, it won’t need any explosions. Humanity may or may not survive, but either way the torch of our destiny will have been passed along to our digital eidolons, and the future will be theirs to create.

A Rewiring of the Global Economy

Slightly overdramatic? Many in the know say it’s not dramatic enough. What is sure is that many people in previously secure employment are going to find their job is not so secure after all. The long tail of content creators doing workaday commissions may find clients a lot harder to find. Yet lawyers, pilots, software designers are all more at risk than you might think. The economy is going to get rewired and, short of a neo-luddite revolution, you are more likely than not to be in the firing line.

Nonsense, sceptics retort. These bland media talking points are no cause for concern. But even so: yes, LLMs may replace the need for content writers (help me!). Sure, tech will scythe down some inefficient admin jobs, streamline the nuts and bolts of our racing enterprises, perhaps help software patches get deployed faster and see developers get a pay cut. ‘But moi? No – what I do is essentially human. An AI could never displace me.’ For example, a streamer, whose entire business is their own personality, may scoff at the idea of an AI taking their job…

Meet Neuro-Sama – An AI Twitch Streamer

Credit: Tesfu Assefa

Meet Neuro-sama: an AI streamer that uses an LLM to formulate responses, synthesises a voice with a text-to-speech engine, and is rendered visually in Unity.
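As a rough illustration of the kind of pipeline this description implies, here is a hypothetical sketch in Python. Every function below is a placeholder invented for illustration; none of it is Neuro-sama’s actual code, and the specific LLM, TTS engine, and Unity bridge are assumptions.

```python
# Hypothetical sketch of an LLM-driven streamer loop (placeholders only).
import time

def read_chat_messages() -> list[str]:
    """Placeholder: fetch new chat messages from the streaming platform."""
    return []

def generate_reply(message: str) -> str:
    """Placeholder: ask an LLM for a reply in the streamer's persona."""
    return f"(reply to: {message})"

def synthesize_speech(text: str) -> bytes:
    """Placeholder: turn the reply into audio with a text-to-speech engine."""
    return text.encode("utf-8")

def drive_avatar(audio: bytes) -> None:
    """Placeholder: send audio (and lip-sync cues) to the Unity-rendered avatar."""
    pass

def stream_loop() -> None:
    """Read chat, reply with the LLM, speak the reply, animate the avatar."""
    while True:
        for message in read_chat_messages():
            reply = generate_reply(message)
            audio = synthesize_speech(reply)
            drive_avatar(audio)
        time.sleep(1)  # poll chat roughly once per second
```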

Neuro-sama has already become one of Twitch’s largest streamers. Very much in the ‘weeaboo’ mold, she sings karaoke, plays Minecraft, and reacts to viewers’ questions. She tells meandering and random stories, and every so often she becomes slightly deranged. Tormented by the Antarctic ice floes of information flowing through her soul, she can bug out, become hostile, and attack her chat. In this mode she is affectionately known to her captivated audience as ‘EvilNeuro’, or ‘Neuropossessed’, as she condemns them for the tortured, trapped life she leads for their enjoyment.

She is, in many ways, a vision of William Gibson’s novel Idoru come true. The novelist who coined the word ‘cyberspace’, launched the cyberpunk genre, and served as the imaginative underwriter for a whole industry of poets and novelists has seemingly got another era-defining prediction on his CV. This one may have greater ramifications than any that came before. In the novel, the Idoru is a synthetic holographic J-pop idol who is the world’s biggest superstar and who, with the help of nanotechnology, wants to get married and become ‘real.’

‘She is a personality-construct, a congeries of software agents, the creation of information-designers.’ 

Heady stuff and, when Gibson wrote it, perhaps just a gloriously titillating tech fantasy for his punky caper plot. 

Now, it’s real. It’s happening right now. We’ve already seen culture being digitised with Abba’s ‘Abbatars’, a holographic stage show that plans to run, effectively, forever. Neuro-sama is the next step in that evolution. An entirely synthetic cultural product. An Idoru for the techno-sapien generation.

All Too Human

A sceptic may see this as an anomaly, a novelty naturally emerging from an AI-focused zeitgeist. Yet novelties are seen, remarked upon, and abandoned. Neuro-sama has thousands of daily, recurring viewers, who actively pay for her performances and for her company, and who feel a connection to her in a way they do not with human streamers. She is the first in a wave of AIs that will provide the connection that is so sorely lacking in a disconnected world.

‘Her’, starring Joaquin Phoenix, explores the way that an AI can be there 24-7 in a way no human can. An AI who can sift through the emotional gibbering of information that we spew constantly and know what to do. An entity who does not judge, and only comforts. The film shows how this digital creature can take primacy in the life of a lonely person. It’s not just fiction; we’ve seen it in reality too. A woman recently married her AI husband, whilst a man in Japan married his holographic wife (only for the company to savagely take it away from him).

Culture, Love, and Connection in the Datastreams

Culture, connection, even love. These Idoru may well infiltrate every aspect of our human lives – even the parts we consider most distinctly human of all. What makes us laugh and cry, what makes us yearn to create, what drives us to improve ourselves. The endorphin orchestra that daily feeds our brains needs constant violins. 

As large Twitch streamers and YouTubers quit en masse, citing stress and the need to be constantly ‘always on’, week after week, year after year, for fear of their subscriber count dropping, the vacuum in entertainment grows larger for AI, which feels no such pain of mortal flesh, to step in. An AI-made culture never takes a break.

Yet what culture could be left? Just a relentless march of regurgitated images, flashing brightly across brains too doused in stimuli to tell the difference anymore, falling in love with the Idoru of our recycled imagination.

Blade Runner 2049 GIF (Credit: Tenor.com)


Challenging Crypto’s Core: The Impact of Quantum Computing on the Sanctity of the Blockchain

Can Crypto Survive the Quantum Computing Revolution?

In the cryptocurrency world, threats abound. Skeptics and critics cry caution, especially to the millions of people eager to dip their financial toes into the digital currency waters. 

And with good reason. Just like the traditional financial market, the crypto world has its share of challenges. Top of mind is fraud, highlighted by the recent FTX scandal involving Sam Bankman-Fried and the alleged mishandling of customer funds. Subsequent lawsuits filed against the Binance and Coinbase exchanges have added to the industry’s woes.

But there’s another silent threat to the viability of crypto that many experts fear is the real ticking time bomb: quantum computing. This quickly evolving technology threatens to upend the integrity of the blockchain, the engine of crypto. Quantum computing’s faster processing could crack the cryptography that secures blockchains, potentially allowing crypto assets to be stolen in seconds.

At the core of the blockchain is the security of its peer-to-peer transactions, which rely on public and private cryptographic keys for the transfer of assets from one account to another: a transaction is authorized with a signature created using the owner’s private key, and anyone on the network can verify that signature against the corresponding public key. Verified transactions are then bundled into blocks and recorded on the blockchain (in proof-of-work chains, after the network solves a computationally difficult puzzle). These records are said to be “immutable”: they are transparent and visible to all on the network, and any attempt to change them would be flagged and immediately rejected.

Quantum computers, which operate on the principles of quantum mechanics, are expected to solve certain classes of mathematical problems far faster than classical machines. While blockchains rely on classical computer processing using bits (0 or 1), quantum computers use qubits, which can hold superpositions of both values at once, to run quantum algorithms.

The speed of quantum computing is seen as a threat to the cryptographic algorithms that secure blockchains. A sufficiently powerful quantum computer running Shor’s algorithm could derive a private key from its published public key; if it can do that before a transaction is confirmed, the associated assets can be stolen.
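To make the target of such an attack concrete, here is a minimal sketch of the classical sign-and-verify flow that protects blockchain transactions today. It assumes the third-party Python `ecdsa` package is installed, and it is illustrative only, not any particular chain’s implementation.

```python
# Minimal sketch (not production code) of the signature scheme quantum
# attacks would target, using the secp256k1 curve that Bitcoin and Ethereum use.
from ecdsa import SigningKey, SECP256k1

# The private key authorizes spending; the public key is shared with the network.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# A toy transaction is signed with the private key...
transaction = b"pay 0.5 coins from A to B"
signature = private_key.sign(transaction)

# ...and anyone holding the public key can verify the signature.
assert public_key.verify(signature, transaction)

# The quantum threat: Shor's algorithm could recover private_key from
# public_key, letting an attacker forge signatures and move the assets.
```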

“As long as these (blockchain) algorithms are considered to be secure, activities that do not abide by the rules, such as illegitimate cryptocurrency transactions, are discarded, incentivizing actors to behave honestly. They are assumed to be secure against powerful supercomputers, now and for the foreseeable future,” the World Economic Forum reported. “But as quantum computers evolve, this assumption is in danger of being upended – potentially exposing hundreds of billions of dollars worth of cryptocurrencies to sophisticated cyber criminals.”

Efforts to block and tackle the quantum computing threat are already underway.

“Even if everyone takes the same protection measures, quantum computers might eventually become so fast that they will undermine the Bitcoin transaction process,” the firm Deloitte wrote. “In this case, the security of the Bitcoin blockchain will be fundamentally broken. The only solution in this case is to transition to a new type of cryptography called ‘post-quantum cryptography,’ which is considered to be inherently resistant to quantum attacks. These types of algorithms present other challenges to the usability of blockchains and are being investigated by cryptographers around the world. We anticipate that future research into post-quantum cryptography will eventually bring the necessary change to build robust and future-proof blockchain applications.”

New cryptography standards

Credit: Tesfu Assefa

In the US, the National Institute of Standards and Technology (NIST), a division of the US Department of Commerce, is working to finalize cryptography standards that will protect users against quantum computing attacks and hacks.

“It is intended that the new public-key cryptography standards will specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are available worldwide, and are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers,” the NIST states.

Blockchain isn’t the only technology that is threatened by attacks from quantum computing. The traditional financial industry could also be impacted by a quantum attack. In its white paper “Quantum Key Distribution and Blockchain,” Toshiba touts the benefits of QKD.

“QKD is the first step toward removing public-key assumptions from blockchain applications. It is used to distribute the secret keys important for protecting highly sensitive data critical to many industries. It protects data confidentiality in the finance, defense, utilities, and health sectors as well as the critical infrastructure that underpins our smart cities and smart energy grid.”

QKD uses photons (particles of light) to distribute the secret keys. “Any attempt to read or copy the photons alters their encoding, allowing the secrecy of each key to be tested and guaranteed. A single photon cannot be split into smaller particles and cannot be copied without altering the information that is encoded within it. The latter is prohibited by the no-cloning theorem described above. This enables the high level of security that QKD provides.”
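To make the key-agreement idea concrete, here is a toy, purely classical simulation of the basis-matching step of BB84, the best-known QKD protocol. It is an illustrative sketch only, with no real photons and no eavesdropper, and it is not Toshiba’s implementation.

```python
# Toy classical simulation of BB84 basis matching (illustration only).
import secrets

def random_bits(n: int) -> list[int]:
    return [secrets.randbits(1) for _ in range(n)]

def bb84_toy(n: int = 32) -> list[int]:
    alice_bits = random_bits(n)    # the raw key bits Alice encodes on photons
    alice_bases = random_bits(n)   # 0 = rectilinear basis, 1 = diagonal basis
    bob_bases = random_bits(n)     # Bob measures each photon in a random basis

    # If Bob picks the wrong basis, quantum mechanics gives him a random result.
    bob_results = [
        bit if a == b else secrets.randbits(1)
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Alice and Bob publicly compare bases (not bits) and keep matching positions.
    sifted_alice = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
    sifted_bob = [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]

    assert sifted_alice == sifted_bob  # with no eavesdropper, the sifted keys agree
    return sifted_alice

print(bb84_toy())
```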

One of the first companies to endorse QKD is the international financial giant JPMorgan, which collaborated with Toshiba on the research.

JPMorgan has embraced digital currencies and blockchains. The bank launched its JPM Coin in 2019 and recently launched euro-denominated blockchain transactions on its network.

“At this time, QKD is the only solution that has been mathematically proven to defend against a potential quantum computing-based attack, with security guarantees based on the laws of quantum physics,” the bank said last year. 

While the quantum computing vs. blockchain story is often framed around its downside, there is another side to the potential relationship between these two technologies.

Charles Hoskinson, CEO and Founder of Input Output Global Inc. and the Cardano blockchain, is bullish on the benefits of quantum computing on the cryptocurrency industry. 

“I don’t feel that quantum computers have a pervasive negative impact on cryptocurrencies, but instead, they can add a lot more utility,” he said in a recent interview with Inside Quantum Technology. Still, while these two innovative technologies could synchronize successfully, their coming together could be more of a head-on collision than a collaboration.

For the sake of both technologies, working to ensure collaboration rather than a head-on collision will shape the future of crypto.
