SocialFi Primer: Why It’s SoFi So Good for Web3’s Creator Economies

Introduction

Social networks have returned to prominence over the course of 2023, with Elon Musk rebranding Twitter to X and headlines screaming about its battles with Meta’s Threads. Meta’s continued push into a VR-powered metaverse is also finally picking up steam, as this astonishing new Lex Fridman podcast demonstrates. 

With 4.7 billion users plugged into traditional social media for an average of 2.5 hours a day, creator economies are blossoming. It’s no surprise, then, that their Web3 counterpart, Social Finance (SocialFi or SoFi), is high on the priority list for crypto investors and builders looking for new decentralized ways to monetize users’ social network clout. 

Despite crypto VC funding hitting its lowest level since 2020, Q3 2023 saw multi-million-dollar investment rounds for SoFi projects such as RepubliK and Phaver. A lot of this can be attributed to the Base chain’s surprise hit Friend.tech, which allows social media accounts to be tokenized and traded. Its 2023 buzz is giving off early Axie Infinity tingles – this time for SocialFi rather than GameFi. 

Let’s take a closer look at what this emerging trend is all about, ahead of crypto’s potential 2024/2025 bull cycle. 

What is SocialFi?

Please note: ‘Social finance’ can also refer to another altruistic form of finance that supports real-world communities, which shouldn’t be confused with social network finance, the topic of this article.

SocialFi stands for “social finance” and combines the principles of social media and decentralized finance (DeFi) to offer a Web3 approach to social interactions through the power of blockchain technology. At its core, SocialFi is about empowering content creators, influencers, and participants who seek better control over their data and freedom of speech – in other words, data privacy and censorship resistance. 

These lofty ideals are admirable, but a network can only be successful if users are sufficiently incentivized to share their resources and time with it. This is where those quirky little digital assets come in – the ones we either love or hate, depending on where in the cycle we bought and sold them: 

• Cryptocurrencies (e.g. ETH or a governance token) provide avenues for monetizing social media engagement and rewarding infrastructure and security providers.

• Non-fungible tokens (NFTs) establish the management and digital ownership of assets.

The Three Core Principles of SocialFi

So what makes SocialFi platforms different from their giant Web2 equivalents like Facebook, X (Twitter), YouTube, and Instagram?

Three words: Decentralization. Tokenization. Governance.

Decentralization: Distributed control

Decentralization is the backbone of SoFi, setting it apart from social media platforms like Facebook and Twitter. Unlike these centralized platforms, where user data and content live on servers controlled by a single company, SocialFi is spread over a decentralized network. This shift in architecture gives users more control over their own data and enhances the platform’s resistance to censorship and major data breaches. 

The level of decentralization depends on the underlying blockchain; Bitcoin and Ethereum are highly decentralized, making them secure and resilient, while others are ehhh, not so much. Tools like DappRadar and DeFiLlama can help you gauge the health of a Social Finance project.

Tokenization: Quantifying social influence

Tokenization is another core principle in SocialFi. It transforms the fuzzy idea of social influence into a measurable asset. Users earn tokens based on their contributions to the community, and these tokens are multifunctional: they can be used for micro-payments, trading, or even voting on platform changes. This token-based economy is made possible through smart contracts, which autonomously distribute value to users, rewarding engagement and making the platform stickier.
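To make the reward mechanics concrete, here is a minimal, hypothetical sketch of engagement-weighted token distribution. It is written in Python purely for illustration (real SocialFi platforms encode logic like this in on-chain smart contracts), and every name, weight, and number in it is invented.

```python
# Hypothetical sketch: splitting a fixed reward pool pro rata by engagement.
# Real SocialFi platforms implement this kind of logic in on-chain smart
# contracts; Python is used here only to illustrate the idea.

from dataclasses import dataclass

@dataclass
class Engagement:
    user: str
    likes: int
    reposts: int
    comments: int

def engagement_score(e: Engagement) -> float:
    # Example weighting: reposts and comments count more than likes.
    return e.likes * 1.0 + e.reposts * 3.0 + e.comments * 2.0

def distribute_rewards(pool: float, activity: list[Engagement]) -> dict[str, float]:
    """Split a fixed token pool proportionally to each user's engagement score."""
    total = sum(engagement_score(e) for e in activity)
    if total == 0:
        return {e.user: 0.0 for e in activity}
    return {e.user: pool * engagement_score(e) / total for e in activity}

if __name__ == "__main__":
    epoch = [
        Engagement("alice", likes=120, reposts=10, comments=25),
        Engagement("bob", likes=40, reposts=2, comments=5),
    ]
    print(distribute_rewards(pool=1000.0, activity=epoch))
```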

Governance: Community decision-making

The third pillar of SocialFi is community governance, which puts decision-making power into the hands of the users. Unlike traditional platforms where changes are dictated by a managing company, SocialFi uses a DAO (Decentralized Autonomous Organization) to allow users to vote on significant changes or new features. This democratic approach fosters a sense of ownership and aligns the platform more closely with the needs and preferences of its community.

Credit: Tesfu Assefa

How SocialFi democratizes social media

1. Monetization and Data Ownership

The adages “There’s no such thing as a free lunch” and “If the product is free, then you are the product” ring especially true for social media platforms. These platforms exploited early Web2 users’ reluctance to pay for any online service or content during the 2000s and 2010s through a Faustian offering of free services. Years later, users learned their behavior had been recorded all along to build models that manipulate online behavior for commercial and other purposes. Don’t be evil, my ass.

Traditional Web2 social media platforms have been criticized for their centralized control over user data and monetization strategies. Platforms like YouTube, Twitter and TikTok milk their users’ content and engagement to generate billions of dollars in revenue but share only a fraction of the profits with the actual content creators. While this is starting to change, and some Web2 platforms are launching their own tokens and even NFTs, it’s still too lopsided.

It was reported in 2022 that creators still rely on brand partnerships for up to 77% of their income, whereas subscriptions and tips make up around 1–3%. SocialFi platforms instead use social tokens or in-app utility tokens to manage incentives fairly. These tokens can be created by either the application or the user (a fan token), allowing creators to manage their own micro-economies. 

2. Freedom of Speech

Another big bugbear with Web2 platforms, especially in these increasingly fractured and divisive political times, is the centralized decision-making process that often results in outright censorship. 

There is sometimes a need to stop harmful content from being disseminated across the internet, but the question is who gets to do this. It can be a very bad thing to have a centralized arbiter of truth that stifles opposing views from controversial public figures (read the prescient books Animal Farm, 1984, and Brave New World). A decentralized curation process, aligned with the Web3 ethos, could offer a fairer approach. 

Social media initially created new communities and united old ones. Unfortunately, a weekly post or two by an average user doesn’t pay the bills for platforms. Controversy stimulates emotion and magnetizes user eyeballs, and that brings in dollars. So, biased algorithms were created to herd users into digital echo chambers and online political fight clubs. Web2 platforms are as complicit in spreading hatred across social media as any Tate or Trump. 

SocialFi platforms counter censorship through decentralized curation. All publicly viewable posts can be swiftly tagged based on their topic and the nature of the language used, and individual nodes can be assigned control of, and responsibility for, filtering. 

Three Key Challenges for SocialFi

Scalability

One of the primary challenges for SocialFi is scalability. Traditional social media platforms like Facebook generate massive amounts of data daily. Blockchain solutions like DeSo claim to address this issue through techniques like warp sync and sharding, but these solutions are yet to be stress-tested at scale during bull market volatility.

Sustainability

Another challenge is creating sustainable economic models. While many platforms promise high incentives, these are often short-term growth hacks. The models need to be stress-tested through various market cycles to ensure long-term viability. Token emission schedules are another thorny issue: rewards that are too generous inflate the token’s value away, while rewards that are too meager fail to attract users.

Security 

Unfortunately, in blockchain a network is only as strong as its weakest link. The hacking incident on the Roll platform raised concerns about the safety of SocialFi platforms – and in a field where smart contracts and hot wallets play such a crucial role, the threats from malicious or faulty code and phishing scams must be overcome before we can expect mainstream adoption. This is where the concept of account abstraction (see my article on Vitalik’s three Ethereum transitions) should revolutionize user safety.

Ten Trending SocialFi Tokens for 2024

Below are some SoFi tokens that are currently trending.

1. Galxe: A Web3 social community project built on Ethereum with over 10 million active users.
2. Torum: A social media platform built on Binance Smart Chain that combines DeFi and NFTs.
3. Friend.tech: A decentralized social media platform built on the Base network that allows users to tokenize their social network.
4. NestedFi: A SocialFi project that supports building and managing portfolios, copying the trades of top traders, and one-click social trading.
5. STFX: A SocialFi protocol for short-term digital asset management.
6. Hooked Protocol: A launchpad project (recently invested in by Binance) that supports SocialFi.
7. Lens Protocol: A decentralized social graph protocol running on the Polygon network, developed by the founder of Aave.
8. Safereum: A decentralized memecoin project with decent performance.
9. BitClout: A social token project that gained significant attention by allowing brands, organizations, and individuals to create their own tokens.
10. Rally: A social token project that allows creators to monetize their content and engage with their audience.

Disclaimer: I do not hold any of these tokens, and be advised that you shouldn’t invest in any of them without doing proper research. They are very risky and require an advanced grasp of crypto. You’ll need to understand their tokenomics – vesting and unlock schedules, fully diluted valuation, the team behind them, and their supposed value proposition and use cases. Many of these projects will likely not be around in five years’ time.

SoFi So Good. What’s next? 

By combining the principles of social media and decentralized finance, SoFi can reshape the social media landscape and help everyday users reclaim rightful ownership of their data – and monetize it fairly and transparently if they choose to do so. 

SocialFi is an amalgamation of Web3 and social media trends, and thus perfectly geared towards boosting creator economy models. However, as we’ve seen with other trends such as Play-to-Earn (P2E), and even DeFi to some extent, early hype and traction mean nothing if they’re not built on something of substance. SoFi experiences will need to have engagement and staying power if they are to retain real users and the network effects that come with them. 

Therefore, SocialFi’s robustness can only be claimed after it has weathered a few market downturns and soldiered through them. With an evolving Web2 industry and new frontiers like the metaverse and artificial intelligence on the doorstep, the opportunities are endless. 

Just one last thing: Please, let’s ban infinite scroll.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The Alluring Turing Test: Mimicry and AI

The Turing test, originally called ‘the imitation game’, is one of the first AI benchmarks ever posed: if an artificial conversational agent can convince someone that it is human, then it can be supposed to be intelligent. It was proposed in 1950 by the father of computer science, Alan Turing, and is, in the collective imagination, the definitive milestone an AI has to pass to begin to be considered sentient.

But many AIs, for decades, have passed forms of the Turing test (think of a spambot or catfishing account sending someone a few messages), yet we don’t generally consider them sentient. Rather, we’ve decided people are easy to fool. The Turing test has been called obsolete. For John Searle, this was true on a philosophical level: his Chinese Room thought experiment argued that just because a computer can manipulate symbols does not make it sentient – just as ChatGPT guesses the next word in the sequence. It’s just good at simulating one effect of intelligence.

Fool Me Once

If an AI can fool you into believing it’s real, what else is an illusion? Sci-fi landmarks like The Matrix and The Lawnmower Man have long played with the idea of hallucinated reality. It’s part of life to question reality, to check that everything is as it seems. It was natural to apply this to proto-AI, to check that it could seem intelligent. Over time, Turing tests haven’t become obsolete; they’ve just become harder to pass and more rigorous. 

Rather than testing whether someone is sentient, the Turing test has evolved into a test of whether content was created by AI. Our civilisational consciousness is now attuned to the idea that what we are talking to might not be human, or what we are seeing might be made by a computer. We accept that generative AI can paint gorgeous pictures and write beautiful poems. We know it can create virtual environments and deepfaked videos – albeit not yet at a fidelity that fools us consistently.

Fool Me Twice

That fidelity might be close, however. And when the average deepfake fools more than 50% of the humans who see it, generative AI will suddenly have the ability to mount a 51% attack on our entire society. Scepticism, always a powerful human evolutionary tool, will become more essential than ever. We have already seen a damaging polarisation of society caused by social media, fueled by a lack of scepticism about its content. Add generative AI with plausible content, and the problem escalates. 

The Turing test, that rusted old monocle of AI inquiry, may become more vital to human thought than it has ever been. We, as a society, need to remain alert to the reality and unreality we are perceiving, and the daily life to which we attend. Generative AI will be a massive boon in so many sectors – gaming, financial services, healthcare, film, music – but a central need remains the same: knowing who we’re talking to, what they want, and whether they’re real. Critical thinking about what you’re being told in this new hyperverse of real and unreal information will be what makes you a human in an endless constellation of AI assistants.

Credit: Tesfu Assefa

A Turing Test for Humans

The Turing test may end up not being for the AI after all, but for the human. Corporate job examinations could test your ability to identify what content is from a bot and what is not, which film was made by AI, and which by a human. You’ll need to have your wits about you to stay Turing-certified – to prove that no false reality generated by AI could hoodwink you into revealing secrets. We saw this through the virtuality of dreams in Christopher Nolan’s film Inception – but with digital VR worlds coming soon, such espionage might be closer than we think.

Alan Turing’s test remains relevant. Judging what is a product of legacy humans and what is from our digital children will become a fascinating battleground in just about every human sector. Will people want to watch AI-made films? How close to fidelity can they get? Cheap AI-produced never-ending sitcoms based on classic series already exist – they just fail the Turing test, as do endless conversations between AI philosophers. These wonders would have fooled people 25 years ago – they would have been convinced that a machine could never have made them up – but now they come off as the playful fancies of a new tool.

You Can’t Get Fooled Again

But soon, these fancies will become fantasies, and more people will be fooled. A deepfake video of a world leader issuing a declaration of war need only convince so many people before it becomes an existential risk. AI will write dreamworlds that promise the most fantastic ways of productivity and play, but should too many of us become too intimate with the machine, and think, like the LaMDA engineer, that it truly is sentient, then the influence these AIs innocently exert could be dangerous.

And what if our pursuit of AGI and the utopian Singularity leads to us declaring that an AI we created was finally sentient, and that it was on our side? Would we put it in charge? Then would it really matter if it was faking it the whole time? Well, yes, but by then it will be too late. 

So run along and take the Turing test. Both of you.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Unreal Engines, Real Monsters: Generative AI in Videogames

You’re trapped in a demonic world filled with monsters. You, a single heroic supersoldier, crash through their alien world wreaking carnage. You are the one they fear. You stand ready to execute another but, instead of running away like usual, this one talks to you. 

You’re shocked – you didn’t even realise it could speak – and it begs for its life. Says that it is just trying to protect its demon wife and child. You spare it. It runs away. From that point on, every demon you talk to speaks, pleading to be spared – while they secretly plot their revenge.

Doom 2030, perhaps? With the rise of neural nets and generative AI, it’s possible.

Making Worlds Seem Real

AI has always been crucial to the player experience in videogames: making the inhabitants of the world react intelligently. Videogame AI has been basic for most of its existence, a bag of clever-but-simple developer tricks masking rote responses from the digital people and creatures you meet. NPCs, for the most part, speak a few stock phrases, and have only on-rails reactions to various player activities. 

Game devs have succeeded in creating believable NPC behaviour at the cost of thousands of hours of writing, voice acting, animation, and code. The labor poured into producing titles like Cyberpunk 2077, Grand Theft Auto, and Mass Effect is legendary.

But the illusion’s never quite perfect, despite clever tricks like ‘random’ pathing trees for behaviour and procedural generation of the gameworld. There’s only so much you can do. The shopkeeper will never leave the town, the farmer’s wife will never fall in love with you, and the demons will never beg for their lives – it’s simply not in the codebase. They were never told to act that way. 

How Generative AI Will Change Videogames

Generative AI in gaming has the ability to change all this. A well-trained neural net tasked with, say, producing the entire dialogue set for a dragon-themed fantasy game world is now entirely possible. 

NPCs could make free choices powered by their neural nets. Whether Gerald the Innkeeper chooses to give you a discount is, truly, up to him. The ironmonger Derek Longfang may change his objective and become Lord of the High Vale through a brutal reign of terror. Everyone may love you, everyone may hate you, and their opinions might change. It would, indeed, feel real.
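As a rough illustration of how an NPC’s lines could be generated at runtime rather than pre-written, here is a hedged Python sketch that calls an OpenAI-style chat-completion API. The model name, persona, and game state are invented for the example, and a shipping game would add caching, content filtering, and latency handling on top of this.

```python
# Hypothetical sketch: generating NPC dialogue at runtime with an LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name and persona below are invented examples.

from openai import OpenAI

client = OpenAI()

def npc_reply(persona: str, game_state: str, player_line: str) -> str:
    """Ask the model to answer in character, given the current game state."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Stay in character and answer in one or two sentences."},
            {"role": "user",
             "content": f"Game state: {game_state}\nPlayer says: {player_line}"},
        ],
        temperature=0.9,  # a little randomness keeps dialogue varied
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(npc_reply(
        persona="Gerald the Innkeeper, gruff but fair",
        game_state="The player helped defend the village from bandits last night.",
        player_line="Any chance of a discount on a room tonight?",
    ))
```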

Or grotesquely surreal. Generative AI could create truly unique nightmare dungeons, trained on a dataset of every dungeon ever created by a human designer. Intrepid adventurers would crawl through a unique dungeon every time, outstripping the strictly-defined procedural generation models that currently exist. Imagine stepping into the warped imagination of a devilish AI, replete with eldritch monsters who themselves generate their own behaviour. A true test for the continued relevance of human bravery and resourcefulness. It’s you versus the machine, and the effect is unsettling in the best possible way. 

Credit: Tesfu Assefa

Videogames: The Perfect Training Ground for AI

The world’s largest creative industry is gaming – bigger than movies and music combined. As AI continues to develop rapidly, gaming will be one of its first major use cases. Efforts have already begun. Videogame users are digital natives looking for a simulated experience, so the ‘uncanny’ barrier that AI faces in movies and music is not there. 

Gamers are used to fighting digital prometheans, ever since the first Cacodemon chased them into the next room in a ‘lifelike’ display of monstrous ingenuity. What if the first true AGI arises by accident, when the developers give Derek Longfang, Lord of High Vale (a popular current storyline), just a bit too much processing time and the ability to rewrite his own code? 

The willingness to engage in virtuality makes videogames fertile soil in which to experiment with technology’s latest razor edge – and it won’t be long before assets generated by neural nets appear in games. Writing and voice acting, both of which can be cheaply and effectively produced by common AI models, will likely become the norm. The bottleneck is the cost of running it – and who exactly has the resources to cover those costs. Running, training, and maintaining neural nets is fearsomely resource-intensive. An always-on online world overseen entirely by a generative AI would be an effort only the world’s wealthiest companies could even hope to pull off. 

All the Possible Worlds

Yet AI will get cheaper over time. Self-trained neural nets will be ever more common. And game developers will be some of the first users of the latest tools. OpenAI recently announced that ChatGPT can see, hear, and speak. It’s not a leap to imagine virtual friends and enemies reacting authentically to everything a player does, in worlds crafted and ruled by AI gods.

Humans love to play. AI will too. If done right, generative AI will revolutionise gaming, and create truly unique, immersive worlds for players to inhabit. With improvements in VR, graphics, and processing power, we might play for eternity in infinite worlds, and soar over the silicon demesnes of our dreams.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Become the Artist You Know You Can Be, Even If You Never Learned How to Be One

Have you ever found yourself daydreaming, your mind bursting with colors and scenes so vivid they could be real? It’s like there’s an entire world inside you, just waiting to be shared. But when you put pencil to paper, the realization sets in that you can’t even get close to what you want to create.

This was me. Then along came Stable Diffusion, Midjourney, and DALL-E, opening up a side of myself I wasn’t previously able to touch.

Join me here as we explore the world of diffusion models, in particular Stable Diffusion: revolutionary software capable of turning dreamers into artists with entire worlds to share.

The Rise of Generative AI in Creative Spaces

Stable Diffusion was released in August 2022. Since then, the explosion in creativity, creation, and dedication by AI artists, coders, and enthusiasts has been enormous. This open-source project has grown into a huge community contributing to a tool capable of generating high-quality images and videos using generative AI.

The best part? It’s free, and you can access and run it with relative ease. There are hardware requirements, but if you are a gamer or someone else with a powerful GPU, you may have everything you need to get started. You can also explore Stable Diffusion online for free at websites such as Clipdrop.co.

However, Stable Diffusion was not the first time generative AI entered the creative space. Earlier than that, many artists were using various forms of generative AI to enhance, guide, or expand their creative expression. Here are a few popular examples:

1. Scott Eaton is an amazing sculptor and early pioneer of generative AI art, who combines generative models with 3D printing and metal casting to produce fascinating sculptures. Here is a video of Scott sharing his process back in 2019: https://www.youtube.com/watch?v=TN7Ydx9ygPo&t

2. Alexander Reben is an MIT-trained artist and roboticist, exploring the fusion of humanity and technology. Using generative AI, he crafts art that challenges our relationship with machines, and has garnered global recognition for his groundbreaking installations and innovations.

3. Sofia Crespo merges biology and machine learning, highlighting the symbiosis between organic life and technology. Her standout piece, ‘Neural Zoo‘, challenges our understanding of reality, blending natural textures with the depth of AI computation.

All of these artists (and many more) incorporated machine learning into art before it was cool. They helped pioneer the technology, investing time, energy, and funds to make possible the applications that are available today.

Fortunately, we don’t have to repeat their process. We can dive straight into creation.

How does Stable Diffusion work?

Stable Diffusion is a diffusion-based model adept at transforming noisy inputs into clear, cohesive images. During training, the model is shown progressively noisier versions of dataset images and tasked with removing that noise and restoring the originals; at generation time, it starts from pure noise and iteratively denoises it into a new image. As a result, it becomes proficient at reproducing and uniquely combining what it has seen. With the aid of prompts, area selection, and other interactive tools, you can guide the model’s outputs in ways that are intuitive and straightforward.
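If you’d like to watch that denoising loop run before installing a full user interface, a minimal sketch using Hugging Face’s diffusers library looks roughly like this (the checkpoint name and prompt are just example choices, and a CUDA-capable GPU is assumed):

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate torch`
# and a CUDA-capable GPU; the model id and prompt are example choices.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.5 checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed for reproducibility

image = pipe(
    "a watercolor poster of a beach party at sunset, detailed, vibrant",
    num_inference_steps=30,   # more denoising steps: slower, often cleaner
    guidance_scale=7.5,       # how strongly the prompt steers the denoising
    generator=generator,
).images[0]

image.save("party_poster_draft.png")
```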

The best way to learn is to get hands-on experience, run generations, and compare results. So let’s skip talking about samplers, models, CFG scores, denoising strength, seeds, and other parameters, and get our hands dirty.

Credit: Tesfu Assefa

My personal experience

My personal experience with generative AI started with Midjourney, which was a revolutionary application of the technology. However, when Stable Diffusion was released, I was struck by its rapidly growing capabilities. It gave me the ability to guide the diffusion models in a way that makes sense, enabling me to create images as I want them to be, rather than settling for whatever my prompt happened to produce. It featured inpainting and, eventually, something called ControlNet, which further increased the ability to guide the models. 

One of my most recent projects was a party poster commemorating an event for the Portuguese DJ group Villager and Friends. We wanted to combine generative AI with scenery from the party location. We decided on a composition, generated hundreds of styles for it, and cherry-picked the four best, which were then voted on by the community. The winning style was upscaled to a massive format and will be made available in print for the partygoers. Let me show you the transformation –

The main composition

Credit: Zoran Spirkovski

The Four Selected Styles –

Credit: Zoran Spirkovski

The Winning Style by Community Vote –

Credit: Zoran Spirkovski

A few details to point out about this project:

1. Notice the number 6 present in the background of every image; this is only possible thanks to the ControlNet extension for Stable Diffusion.

2. Notice the increased level of detail in the final chosen image. This is the result of an ‘upscaling’ process. The final image is a whopping 8192 × 12288 pixels!

3. Due to the size of the image, a single generation of the final version took about four hours. We had to generate several times due to crashes or ugly results.

4. The final version is unedited. It is raw output directly from the Stable Diffusion Model.

How can you get started with Stable Diffusion?

Running Stable Diffusion locally is the way to go. However, in order to do that, you will need good hardware. The main resource Stable Diffusion uses is VRAM, which is provided by the GPU. The minimum starting point is a GPU with 4GB of VRAM. Unfortunately, the best optimizations (such as xformers) are available only for NVIDIA GPUs.
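Before installing anything, a quick way to confirm that your GPU is visible and to check its VRAM is a couple of lines of Python (this assumes PyTorch, which is also on the install list below, is already available):

```python
# Quick check that PyTorch can see a CUDA GPU and how much VRAM it has.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA GPU detected - generation will be very slow on CPU.")
```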

In order to run Stable Diffusion locally you will need to install some software –

1. A user interface
       a. Automatic1111 (balance between simple and flexible)
       b. ComfyUI (extremely flexible and difficult to use, resource efficient)
2. Stable Diffusion Model (choose one you prefer on https://civitai.com/)
3. Python (a programming language)
4. PyTorch (a machine learning framework based on Python)

Start with the user interface; it will help you download everything you need to run it. I use Automatic1111; it’s simple enough and flexible enough for me. ComfyUI is better, faster, and capable of using resources more effectively, but also more complicated and requires a lot more learning to use effectively.

The errors generated from both are verbose, so if anything goes wrong, you can copy the error message and search the web for a solution. Pretty much everything you can run into as an issue in your first month of using Stable Diffusion has been solved by someone somewhere on the web.

CivitAI is a great resource for finding new and interesting models. Stable Diffusion 1.5 has the most developed (i.e. trained) models. If you’re looking for a particular style, you can likely find it there – and if you’re not looking for a particular style, you’ll likely discover something new. That said, most models are flexible and receptive to your prompts, and you can increase the weights of your prompts to guide the generation where you want it to go.

Sometimes Stable Diffusion is stubborn. Getting the result you want can be difficult, and this is where ControlNet and other guidance methods come in, helping you create the compositions you want.
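As an aside, ControlNet can also be driven programmatically via the diffusers library rather than through a web UI. The sketch below is a hedged example: the checkpoints, filenames, and prompt are illustrative choices, and it uses a Canny edge map of a reference image to constrain the composition, much like the number 6 in the poster project above.

```python
# Sketch: guiding Stable Diffusion with a ControlNet (Canny edges).
# Assumes `pip install diffusers transformers accelerate torch opencv-python`
# and a CUDA GPU; the checkpoints and filenames below are example choices.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn a reference image (e.g. a photo of the party location) into an edge map.
reference = cv2.imread("composition.png")          # hypothetical input file
gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a vibrant synthwave poster, a large glowing number 6 in the background",
    image=control_image,        # the edge map constrains the composition
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("controlled_poster.png")
```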

This is just the beginning of your journey, but I’m glad you took the steps to learn how to get started. I’m looking forward to seeing your creations and explorations of latent space.

Is AI art, art?

Stable Diffusion enables people to create interesting art that they would otherwise never make. If you have imagination and some basic skills, you don’t need to be constrained by technique – you can guide Stable Diffusion to put your imagination onto the page.

Countless NFT artworks are being sold online, often by people who don’t necessarily have the skills to do everything on their own, but who have learned to guide the diffusion models to produce their desired outcome. Some people simply have a talent for picking winners from a big batch of generated images. 

Don’t get me wrong. There is joy in working with traditional art. Mastering the brush and paints of watercolor and oil, the strokes needed to create form on a blank canvas, human proportions, and composition techniques is beautiful, and one can definitely still pursue these alongside AI art. 

But they also involve a significant time investment, some pain and suffering, and a dedication most creatives are not willing to give. 

AI art is also difficult; it’s just a different kind of difficulty. Getting exactly what you want is challenging, similar to how it is with traditional art. The thing is, learning to make good AI art means learning art theory and applying it as guidance. So in a way, AI art can bring you closer to real art than real art ever could. Something to think about.

In the end it’s up to you to decide if this is art or not. If you are finding ways to express your views, emotions, and ideas through AI, who really cares what others think about it?

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Wen Web3? The Decentralized Internet’s Current State and 2024 Odds

Introduction

Since the start of the decade, Web3 has been touted as our salvation from the ills of Web2, as the fully decentralized foundation for a censorship-free new Internet that preserves privacy and restores the cypherpunk vision of the 1990s, with extra power-ups unlocked by distributed ledger and cryptocurrency technology (in particular AI cryptos this year) on blockchains like Ethereum and Cardano.

Web3’s hype train derailed in late 2021 and throughout 2022, as record inflation, rising interest rates, and a cascade of centralized custodial blowups deflated crypto markets. However, hope springs eternal in the world of digital assets. With 2024 just around the corner, filled with the promise of the Bitcoin halving, spot ETFs (for both BTC and Ethereum), a pivot in global macro-economic policy, and other wonderful things, the talk around Web3 is heating up again, especially in Asia, even if artificial intelligence has stolen a lot of the thunder and VC funding this year. 

Will 2024 or 2025 be the Year of Web3, or is it all just wishful thinking, bad investments, and the echo chambers of Crypto Twitter and YouTube? And what does the average person actually know about Web3?

An extensive survey conducted by major US cryptocurrency exchange Coinbase, called the “International Survey of Web3 adoption”, polled over 32,000 internet users across 16 countries. The countries span developed economies like the US and Japan as well as emerging markets like the Philippines and Vietnam. This provides a valuable snapshot of how Web3 technologies are being embraced on a global scale.

The 16 countries were: the United States, Canada, and Brazil in the Americas; the United Kingdom, Germany, Italy, France, Spain, and the Netherlands in Europe; and, in the Asia-Pacific region, Australia, the Philippines, Indonesia, India, Thailand, Vietnam, and Japan. 

Survey Broad Strokes: Demographics and Web3 Awareness

With a 50-50 gender split, the survey captures a balanced view from both men and women. The largest share of respondents are city dwellers (46%), with a good mix of suburbanites (32%) and rural folks (21%). Education and income levels are all over the place, from high school dropouts to advanced degree holders, and from low to high earners. 

A whopping 80% of people know about cryptocurrencies, and two-thirds have heard of at least one Web3 use case. Europe is leading the pack in awareness, while emerging countries and, oddly enough, Japan are lagging. 

The report suggests social media platforms play a vital role in driving awareness of Web3, especially in emerging market countries. Up to 2 in 5 cryptocurrency users rely on sources like YouTube, Facebook, Twitter, and cryptocurrency websites for information. Far fewer – only 16% to 26% – rely on mainstream news sources.

So what’s hot in Web3 right now? Trading on centralized exchanges (sadly) and blockchain gaming are the go-to activities. Nearly half of Web3 users have a software wallet, and about 30% are rocking a hardware wallet.

Looking ahead, the survey predicts a 50% surge in Web3 adoption by 2026. The future looks especially bright in developing countries, where crypto is becoming the new way to pay. When questioned about specific Web3 use cases, the applications most familiar to respondents were cryptocurrency payments, NFT trading, and trading on centralized cryptocurrency exchanges (CEXs). Despite the regulatory alarms, centralized exchanges are still the main gateway to the Web3 world.

33% were familiar with crypto payments, 24% with NFT trading, and 23% with CEX trading. Comparatively, awareness of more complex and risky decentralized finance (DeFi) activities like staking, decentralized exchange trading, and borrowing/lending was significantly lower; roughly 1 in 6 people were familiar with DeFi staking, trading, and lending or borrowing.

Web3 Services

One of the most commonly used Web3 services after centralized cryptocurrency exchanges (CEXs) is crypto gaming platforms, while self-hosted cryptocurrency wallets, both software and hardware, are gaining increased mainstream traction and adoption.

The report suggests CEXs currently serve as the primary entry point for most people into the Web3 ecosystem, providing a bridge to the mainstream finance world. Despite growing regulatory scrutiny, CEXs are expected to continue spearheading cryptocurrency adoption into the future.

When survey respondents who had used Web3 before were asked about their initial experiences, trading on CEX platforms emerged as the number one entry point, accounting for 21.1% of first interactions. This aligns logically with CEXs often being the first stop for users looking to convert fiat currency into cryptocurrency.

Interestingly, the report also highlights how initial entry points into Web3 differ significantly between countries. In developed nations, CEX trading was by far the most common gateway into Web3, likely because people in these regions are already familiar with using financial systems.

On the other hand, in emerging market countries like the Philippines and Vietnam, playing crypto games emerged as the most popular entry point. This may have been boosted by play-to-earn crypto games providing income opportunities for lower-income users during COVID-19.

Below are ten use cases ranked from most to least familiar:

  1. Crypto Payments (33%)
  2. NFT Minting/Trading (24%)
  3. Trading on Centralized Exchanges (23%)
  4. Overseas payments (23%)
  5. Playing crypto games (23%)
  6. Using P2P trading platforms  (19%)
  7. Use of crypto payment card (18%)
  8. Staking for returns (17%)
  9. DEX Trading (16%)
  10. Borrowing/Lending (13%)

International Survey of Web3 Adoption (Credit: Coinbase Institute)

Barriers to Entry

Among respondents who had not used Web3 before, around 46% cited a lack of knowledge about Web3 technology as a key barrier to adoption. Over 25% of non-users also noted they simply did not know where to begin exploring the space.

Beyond educational barriers, concerns around volatility, hacks, scams, and government regulation also deterred usage among some respondents. Regulation concerns were particularly acute in some countries, like India and Canada, where 26% of respondents cited this as a barrier.

Web3 in Asia

The Coinbase report notes that Asia contains countries at varying levels of technological adoption, and with diverse regulatory environments. During the pandemic, Asian crypto markets and innovations played a vital role in sustaining Web3 development globally.

Here are some insights:

  • Japan has comparatively low awareness of Web3, likely due to a challenging regulatory environment including high crypto taxes. This has led many crypto firms to choose alternative locations for their operations.
  • Emerging countries like the Philippines exhibit greater Web3 awareness than more developed nations. The Philippines has a young, tech-savvy population with high remittance flows that could benefit from blockchain technologies.
  • Vietnam has low Web3 awareness currently but a rapidly growing interest in blockchain and crypto, particularly for gaming.
  • Gaming and metaverse participation are more popular Web3 use cases in Asian emerging markets compared to developed countries.
  • Developed Asian countries focus more on crypto payments and financial services, whereas emerging markets prioritize remittances more.

The Promise and Potential of Web3

Web3 carries enormous potential across a wide range of use cases like supply chain management, digital identity, healthcare, and insurance. However, there are challenges between the potential and the actual. 

  • The decentralized nature of Web3 makes scalability difficult, as the network waits on nodes to validate transactions.
  • Interoperability issues arise from the multitude of different blockchains that don’t always work together seamlessly. 
  • Usability remains a barrier, with many Web3 applications having non-intuitive user experiences.

But Web3 momentum is clearly building. Monthly active developers have surged 297% since 2018. Gartner predicts 25% of enterprises will use Web3 applications by 2024. With solutions in areas like decentralized finance and play-to-earn gaming already demonstrating value, Web3’s possibilities are vast despite current limitations.

Credit: Tesfu Assefa

The Outlook for Web3 in 2024

The prognosis for Web3 adoption by 2024 looks positive, as developers work to address current challenges. Improved scalability, interoperability, and usability will likely emerge to make Web3 more accessible to mainstream audiences.

Familiar technological challenges remain though:

  • Scalability: Web3’s decentralized structure hampers easy scaling, limiting its adoption.
  • Interoperability: Multiple blockchains exist, but they don’t always sync well, affecting Web3 adoption.
  • Usability: Complex Web3 apps deter users, posing a barrier to widespread adoption.
  • Credibility: Blockchain projects often over-promise and under-deliver, eroding trust and adoption.
  • Onboarding process: Clunky onboarding experiences can kill user interest, hindering adoption.

On the plus side, as more and more people become more familiar with Web3 capabilities, adoption is expected to accelerate, across ever more diverse use cases. Custom-built and application-specific modular blockchains will simplify development, and the eventually ubiquitous implementation of zero-knowledge rollup proofs will enable greater security and privacy.

While the Coinbase report shows it’s still early days, Web3 is rapidly evolving. Awareness of it is reasonably high worldwide, but substantial barriers around education and regulation remain and will have to be dealt with. 

If you’re a business looking to ride the Web3 wave globally, you’ve got to be a bit of a cultural chameleon. Different places have their own vibes, especially in the diverse landscape of Asia. So, understanding the local scene – be it regulations, economics, or even just what people are into – is key.

Looking ahead, Web3 is set to graduate from being this edgy, niche thing to becoming part of our daily lives and how we do business, turbocharged by emerging tech like AI and machine learning. Buckle up for the future!

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

The End of Days: Crypto and the Apocalypse

Never has the end seemed quite so near. Climate change, war, a pandemic… and the birthing of a monstrous digital god that rewrites society in a few arcane flashes of code. The genesis of digital life could be humanity’s approach to The Great Filter – if we don’t nuke each other in a fit of overenthusiasm first. 

Life, however, always finds a way. And crypto, many argue, could too. Crypto has long been championed by doomsday prophets, preppers, and hedge funds as the ultimate and absolute hedge against complete societal breakdown, whatever form that takes. Should there be an apocalypse-level event, crypto’s properties do make it uniquely resilient against such a fall. Where has this narrative come from, and does it hold up to scrutiny?

Crypto as a Hedge Against Disaster

Crypto has historically boomed in times of distress. Much of the last bull run was driven by crypto’s role as a hedge against inflation, as money was printed at a breakneck pace to pump liquidity into the economy. In Turkey’s stricken economy, crypto ownership is at record levels. In Russia and Ukraine, where war rages, crypto offers a way of transferring value that can’t be blocked by a bank’s policy or collapse. Crypto’s consensus systems operate globally, not locally, so should any central banking system fail (and with it the society it oversees), crypto should still store value. 

Anarchists and preppers have long seen crypto’s value: no government control, pseudonymity, non-correlation with geopolitical borders, and a form of cash to use if traditional cash becomes worthless. That global consensus maintained by computers means any localised failure doesn’t bring down a given cryptocurrency except in the absolute worst cases (a meteor strike or something else affecting all global regions). The universality of crypto is unmatched by anything but gold, and its ability to cross borders and ignore financial firewalls is unparalleled. It’s no wonder crypto has carved out a place as the ‘apocalypse’ currency. 

This is particularly true of any manmade apocalypse, such as a powerful dictator running riot on the world stage, or any usurpation of the central financial system by a single overweening authority (maybe that last one has already happened). Pseudonymous and sanction-resistant, crypto can maintain a financial ecology governed by the people on the ground, and act as a counterpower to techno-dictatorships.

Can crypto be a medium of exchange in a truly global apocalypse? That is far more questionable. First, who would want it? As the ashes of the nuclear winter fall, will people care what the ledger says? People will be far more interested in food than tokens on a ledger. If you’re scavenging in the wastelands, a packet of noodles becomes more important than the contents of your Web3 wallet. 

Moreover, upkeep of these decentralised ledgers could be gravely compromised by mass outages of the internet, eradication of mining hubs, and more. It’s possible one large, intact mining farm could gain a 51% share of a blockchain’s hashrate, and this would break the blockchain’s status as trustless or decentralised. There are counters to this: it is possible to send Bitcoin over radio, and there are satellite networks which are likely to survive any terrestrial disaster – but it’s grasping at straws to think the priorities of society would drive towards upkeep of the ledger when the ice caps melt. 

Proof-of-stake coins – the majority of the top 100 cryptocurrencies – are even more under threat. Substantial parts of the validator set that governs these chains could be wiped out whatever the event, and 51%-ing these chains might become a whole lot more feasible as your competitors die off. The sad fact is that when everything goes wrong, humanity has two choices: order or violence. 1s and 0s on a ledger are unlikely to be what holds back our most ruthless instincts.

And then there is AI. The black box of the Singularity could have some unexpected outcomes, one of which is apocalyptic. A newly minted AGI may decide that crypto is the way forward, and immediately ensure its own seizure of the ledger. Such an AGI may require us to advance to quantum computing – already itself an existential threat to crypto.

Credit: Tesfu Assefa

Hold Until the End?

So, crypto, pemmican, and a gun? Is it all you need to survive the end of days? Well, maybe. Crypto will continue to serve as a hedge against social upheaval, and a ‘minor’ or localised apocalypse will probably lead to exponential uptake of crypto as a medium of exchange. But if the end of days is truly everywhere, it’s unlikely crypto will be part of any new world order. But keep your cold storage USB close just in case.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

TESCREALism: Has The Silicon Valley Ruling Class Gone To Crazy Town? Émile Torres in conversation with R.U. Sirius

TESCREALism
T = Transhumanism
E = Extropianism
S = Singularitarianism
C = Cosmism
R = Rationalism
EA = Effective Altruism
L = Longtermism

Émile Torres, a philosopher and historian who has focused recently on existential threats, developed what they refer to as a ‘bundle’ (we might call it a memeplex) that links the above series of -isms into a sort of singular force embraced by many of the super-wealthy and influential in the tech world. It is the influence of these tropes on the super-rich and powerful that, in Torres’ view, makes them very dangerous.

In an article for Truthdig, Torres writes, “At the heart of TESCREALism is a ‘techno-utopian’ vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling ‘post-human’ civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.”

In the same piece, Torres gets into wilder projections that I suspect even many techno-enthusiastic, transhumanism-oriented Mindplex readers would find fantastic (rooted in brilliant minds taking their fantasies for reality). Torres’ argument leans heavily on Oxford professor Nick Bostrom’s views: they write that Bostrom “argues that if there’s a mere 1% chance of 10^52 digital lifetimes existing in the future, then ‘the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.’ In other words, if you mitigate existential risk by this minuscule amount, then you’ve done the moral equivalent of saving billions and billions of existing human lives.” 

As they explained in their conversation with Douglas Rushkoff, Torres identifies TESCREALism as a philosophical ‘bundle’ that, in a sense, trivializes the lives and sufferings of currently existing humans by finding greater importance in the possibly trillions of posthumans that could exist in physical and/or virtual space in the future – ‘people’ having experiences that can be valued beyond our imagining. Some of those quoted tend to use statistics to value experience, which is about as alienated from experience as you can get.

I can assume you all know about transhumanism and the singularity. If you’re here, you probably know about Ben Goertzel’s project to build AGI. But are most of you familiar with the eccentricities and extremities that have attached themselves to Rationalism (as defined by LessWrong), Effective Altruism and Longtermism?

In the interview below, I mainly ask Torres to thrash out how real all this is. Do a lot of people buy into the whole philosophical bundle? My own attitude, even as a longtime associate of transhumanism, has always been kind of “are you for real?” when it comes to people taking their shit too seriously, particularly when they’ve deluded themselves into thinking they’re rational. 

In a follow-up poll, I will ask Mindplex readers and veterans of the transhumanist culture to weigh in on the TESCREAL bundle. 

RU Sirius:  In your book Human Extinction: A History of the Science and Ethics of Annihilation, you went from writing about existential threats as a historical phenomenon to various transhumanist tropes. As I was reading it, it was like suddenly we had gone from science and geology into science fiction. Then I was wondering if there was science fiction in older times. (I suppose there was the Bible and other myths.) How did you get into this? 

Émile Torres: Back in the mid-2000s, I encountered transhumanism for the first time. And I was initially really critical of it. The second paper I ever published was a critique of transhumanism. But then, certain considerations led me to believe that transhumanism is a defensible position, and I became a sort of transhumanist.

And one of the main considerations was that the development of these technologies is inevitable. So if you’re going to be against transhumanism, then maybe you need to be against the development of certain person-engineering technologies. But since they’re inevitable, there’s no point in opposing them just to hold back the tide. So the best thing to do is to join the transhumanists and do what you can to ensure that that project is realized in the most optimal way.

The notion of existential risk was tightly bound up with transhumanism from the start: existential risk was initially defined as ‘anything that might prevent us from creating a posthuman civilization’.

RUS:  I’m sure there must have been mention of existential risk before that in various intellectual circles… like related to nuclear war and so forth?

ÉT:  There was definitely talk of extinction and global catastrophe. But what’s new about this idea of existential risk — right there in the definition — is the idea of desirable future development. 

There were people, particularly in the second half of the 20th century, arguing that one reason human extinction would be bad is that it would foreclose the realization of all future progress, future happiness, and so on. But that lost-potential argument was never formalized. The focus was really on going extinct. Everybody on earth is going to die. You and me are going to die. Our families will die. That was the foreground. Lost potential was not prominent. 

The notion of existential risk, I think, flipped that around and foregrounded the lost potential: the argument became that the worst aspect of human extinction is the lost potential. It’s not the 8 billion people who are going to die. That’s very bad, but the badness of all the lost potential is orders of magnitude larger.

RUS:  I may be a bit out of touch with the transhumanist culture… to me this is a bizarre iteration of transhumanism. It’s not something I bumped into much when I was interacting with that world in 2007-2010 as editor of h+ magazine. At that time, you’d mostly hear about life extension and other enhancements. Or immortality, if you wanted to get really far out. The notion of uploaded mind children was around, but as something very speculative. But the idea of sacrificing just about everybody to imaginary future people as you’ve discussed in your writings about TESCREAL did not seem to be much in circulation back then.

ÉT: That sounds right to me. I think this notion of potential is really central to longtermism. The initial definition comes from 2002, with Bostrom discussing the transition dynamics from a human to a posthuman civilization, foregrounding the potential of becoming posthuman. This was also bound up with this notion that the creation of posthumanity isn’t just valuable because it’s a good way for cosmic history to unfold. But also, you and I might benefit, right?

So why is creating a posthuman civilization important (according to Bostrom and people like him)? Well, because if it happens within my lifetime, maybe I get to live forever. Or even if it happens within maybe a thousand years, I still get to live forever because I’ll sign up with Alcor and get resurrected. So I really see this moment where there is a sort of pivot towards thinking about the far future. I think initially, for the transhumanists, it was bound up with their own fate as individuals. 

RUS: I was thinking that maybe – for example – Eliezer Yudkowsky is being selfless when he talks about risking nuclear war and sacrificing most life on the planet to make sure AI doesn’t happen before he thinks we’re ready. Because it seems to me he could have at least a 50:50 chance of being a victim of the nuclear war that he is willing to risk to prevent the development of AI too soon. So I’m thinking he’s being selfless but he loves the idea of the blissful future humans so much that he’s willing to sacrifice himself.

ÉT: My understanding of the history is that it was really in the 2000s that people in this community became increasingly aware of just how huge the future could be. With that awareness came a corresponding shift in the moral emphasis.

Yudkowsky wants to live forever. On Lex Fridman’s podcast, he said that he grew up believing that he would live forever. And so part of the trauma for him, as he mentioned on that podcast, is being in this situation where AGI is so close, and he’s having to face his own mortality, maybe for the first time. It seems like his thinking exemplifies this pivot throughout the 2000s.

RUS: To me it sounds like it’s all fantasy. Some of this stuff that you’ve mentioned being part of this bundle – like the theoretical trillions of people, including digital people, having quantifiably great experience – sounds like dorm-room stoned nerd brainstorms that just never ended. They keep elaborating from the original premise, getting more and more isolated from real-world experiences turn by turn. Ideas used to mature – now they just seem to get crankier. I can’t prove it, but it could be the result of the attention economy. To misquote Niels Bohr, “Your idea is crazy, but it’s not crazy enough to get a following on social media.”

ÉT: With respect to the attention economy, my sense is that longtermists recognize that this vision of the future is kind of nuts. I mean, some of them have used the term ‘crazy town’. Consequently, I think they do their best to avoid mentioning what their actual goals are publicly. Crazy ideas do grab the public’s attention, but in this case, I think they feel that some of these ideas are not good PR. 

What About Useful AI?

Credit: Tesfu Assefa

RUS: Regarding your assertion that AI activity can only be explained by this ideological configuration. I don’t know whether you’re talking about practical AI for, say, tracking and responding to weather conditions, developing vaccines and other responses to pandemics, developing medicines, etc. Or if you’re referring only to AI that is performing what we consider intellectual or creative things.

ÉT: I don’t think AI in general is motivated by this ideology. The race to AGI is. And I think there are two factors. One that’s obvious is the profit motive. Microsoft and Google expect to make billions of dollars off of these large language models. But I think the other crucial component of the explanatory picture is TESCREALism. 

It’s like… why did DeepMind form in the first place? Why did Demis Hassabis – who was at a lot of these transhumanist conferences – found it? And why did Shane Legg co-found it with him – someone who received $100,000 from the Canadian Singularity Institute for Artificial Intelligence after completing his PhD thesis, and who gave talks at the Singularity Summit conferences?

RUS: If I went to all the boardrooms in Silicon Valley and talked to the owners and the programmers, do you think most of them would embrace this entire TESCREAL idea? My guess is they would tend to be transhumanists, and quite a few might be singularitarians, but they are probably not into the ‘trillions of future people’ aspect of that project. I mean, how ubiquitous are these ideas really?

ÉT: In terms of the ubiquity of TESCREAL beliefs, I think you’re right. A lot of them wouldn’t even use the word transhumanism. You could ask, “Would you support re-engineering the human organism?” Or ask, “Are you funding projects to try to re-engineer the human organism so we can merge AI into our brains?” I think a lot of them would say yes. And they’d be for aspects of the longtermist worldview like the imperative to colonize space and plunder the cosmos. My strong suspicion is that’s the water that these people swim in.

An article I want to write would be about the different terms and ideas that various authors use to describe the culture of Silicon Valley – using different terms, but ultimately describing the same phenomenon. So what I mean by TESCREALism is the same thing that far-right guy Richard Hanania calls the “tech right.”

There was a Huffington Post article about how he holds white supremacist views. And he said, “I hate the word TESCREALism.” So he called it the ‘tech right’. Douglas Rushkoff calls this ‘the mindset’ – he says it is everywhere in Silicon Valley among tech billionaires and so on; in talking to them about their views, he found that they all thought: “the future is digital. We’re going to upload our minds. We’re going to spread throughout space” and so on. What Rushkoff means by ‘the mindset’ is basically what I mean by TESCREALism. Would these people who embody ‘the mindset’ say, “yeah, I’m a longtermist, and I believe that the most morally important thing to do is to conquer space and create all these digital people in the future?” I don’t know. But their worldview looks a lot like longtermism.

RUS: Do you think a lack of concern for currently living people – a sort of political manifestation of the longtermist view – is driving some of the people of Silicon Valley towards right-wing extremism?

ÉT: I think that’s largely correct. I think some people, like Will MacAskill [a figure in ‘effective altruism’], really accept this very abstract philosophical position that what matters is that there are huge numbers of people in the future. And a lot of tech billionaires see this vision as bound up with their fate as individuals. So the thinking is like… “I want to build a bunker to survive the apocalypse so I can get to space, have my mind digitized” and so on. And that definitely can lead to this disregard for most human beings. A wild example of this is the news that broke that Sam Bankman-Fried’s brother and somebody else at FTX had discussed the possibility of buying the island nation of Nauru explicitly so that members of the ‘effective altruism’ movement could survive an apocalyptic event that kills up to – as they wrote in the document – 99.9% of human beings.

The Singularity is Probably Not Near

Credit: Tesfu Assefa

RUS: Ben Goertzel said that I should ask you if you think the Singularity will happen. And if it will happen, will it happen in 2035 or 2050?

ÉT: I guess it depends on what one means by the Singularity. There’s the intelligence explosion interpretation… there’s the Kurzweilian idea that just has to do with the rate of change.

RUS: I think of the Singularity as the point where the AIs get smarter than us, and beyond that, you can’t predict anything. You can’t predict who we’ll be, or if we’ll be around, or what the world will be like. The science fiction writer Vernor Vinge was the first person to suggest that idea of a Singularity. We would make intelligences that would become as incomprehensible to us as we are to worms.

ÉT: I’m sympathetic with that view of the Singularity. There’s just not much we can say beyond it. I’m very skeptical of the intelligence explosion idea. And the rate of change idea from Kurzweil seems to be in direct and significant tension with the fact that a climate catastrophe is almost inevitable unless there’s some new technology that, at scale, removes carbon dioxide from the atmosphere.

RUS: Kurzweil shows that the rising curve of human technological development survived two World Wars (actually, the world wars boosted technology development), and Mao and Pol Pot and all kinds of terrible events.

ÉT: I think climate change is different than that.

RUS: Yeah, I think so too.

ÉT: We’re talking about global civilization. Is it gonna survive? I don’t know. I mean, there are legit climatologists out there who don’t think it will unless there’s immediate action to avert catastrophic climate change.

I remember arguing, many years ago, with colleagues in the existential risk field, where I was claiming that climate change is a much bigger deal than they were suspecting. They thought: “We’ll invent AGI. And once we have AGI, it’ll…”

RUS: It’ll figure out what to do, yeah.

ÉT: Figure out what to do. But how are we gonna create AGI in a world that’s just crumbling and falling apart? How are we gonna host conferences on AI when the weather is so hostile that you can’t get there?

RUS: I guess the question becomes how radically weather catastrophes will intervene in the immediate future. People are thinking they might be able to accomplish AGI in the next 10-20 years or so. And we’re already dealing with all kinds of crappy weather and death and destruction. But to the naked eye, Western civilization seems to roll on. People get in their cars and go to the store. Food is still being distributed.

So we do seem to be carrying on, and maybe we will do that for 10 or 20 years. If the people making the AGI and related robotics and so forth manage to get to the lab and do their work, get in their cars and get enough food, etc., then maybe they can accomplish what they hope to. I guess that’s the idea.

ÉT: It’s just not my area of expertise. But my sense is that, in terms of the LLMs that we have, there’s no obvious path from those systems like ChatGPT to genuine AGI or superintelligence.

RUS: A lot of people are saying that ChatGPT and the like are not much to brag about. Michio Kaku, who generally tends to be a hyper-optimistic tech promoter, called it a glorified tape recorder.

ÉT: I think it was Gary Marcus who was laughing about the rise and fall in prestige, if you will, of ChatGPT. It became a joke line during a Republican debate.

RUS: It happens so fast these days.

ÉT: Yeah. So I don’t think the Singularity is going to happen, probably. And I would put money on it not happening soon – not happening in 2045, as Kurzweil predicts.

What About the Humane Transhumanists, Singularitarians and AI Enthusiasts?

RUS: Let me ask you about the varying ideologies and ideals within transhumanism and its spin-offs. You’ve mentioned Ben Goertzel — the captain of the ship here at Mindplex — in various places as having a central role in the ‘bundle’ because of his longtime pursuit of AGI. And I know Ben to be a humanist, and more or less a liberal or even better. I know he doesn’t want to exploit or enslave or kill off the current people on earth but wants to try to lift everybody. So I know from experience that there’s a lot of philosophical variation within transhumanism.

I can remember when they asked me to create the magazine for Humanity+, I had my own assumptions based on earlier experiences with the Extropians. So I confessed to these guys at a meeting. I said, “I’ve got to tell you right up front that I’m not a libertarian. I’m a leftist with a libertarian streak.” And one of the guys said, “Yeah, me too.” And the other guy said, “I’m not even sure about the libertarian streak.”

Generally, around that time – around 2007 – I learned that a lot of the people engaged with that official transhumanist organization thought of themselves as liberals, sort of conventional mainstream liberals. And there’s a lot of variation within that world.  

ÉT: I recognize and affirm that. The notion of TESCREALism is supposed to capture the techno-utopian vision that came out of some of these movements, and to gesture at the worst aspects of that. I think they’re the ones that have become most influential now. So, like the democratic socialist James Hughes — he was somewhat influential. But, compared to someone like Bostrom, his influence is minor. And I absolutely recognize that there are transhumanists who are anti-libertarian. Some of them are Mormons.

RUS: Yeah… the Mormon transhumanists! They’re adorable. I think when you had people like Peter Thiel and Elon Musk attaching themselves to these ideas, they probably became the main attractors to transhumanism, or to the ideas of human enhancement associated with it. More people who would be attracted to their ideologies have been pulled in, particularly in Silicon Valley culture. These ruling-class scary monsters and super creeps became sort of the main widely-available public voice for those kinds of views. Then you had the neoreactionary movement and the dark enlightenment. Most of the people talking about those tended to embrace transhumanist tropes. That became the alt-right; it fed into the spread of right-wing extremism.

You can see how the idea of the glorious future – stand up straight and tall and shoot yourself up into the glorious future – could attract a certain type of fascist sensibility.

ÉT: That’s my impression also. Obviously there’s a fascinating history involving futurism and fascism. Maybe it does tend to attract a certain type of person or lends itself to being interpreted or exploited by fascists. TESCREALism captures that aspect.

Is Less Wrong A Cult?

RUS: I remember being at a Singularity Conference and being approached by someone involved in Less Wrong. And it felt something like being approached by a cult. I wonder if you’ve run into any actual cult-like behavior in your studies – people gathering in communities and getting guns to defend themselves, or worshipping a leader, that sort of thing.

ÉT: There’s definitely that charismatic leader aspect to rationalism. There are these Less Wrong posts that are just lavishing praise on Yudkowsky. I remember seeing a list of one or two sentence statements about Yudkowsky. One of them was something about how “inside Eliezer Yudkowsky’s pineal gland is not an immortal soul, but another brain.” “In the history of Western thinkers, there was Plato, Immanuel Kant, Eliezer Yudkowsky.”
(Laughter)
Someone who I won’t name told me that the Bay Area rationalist scene is a full-grown apocalypse cult.

I think EA (Effective Altruism) is sort of a cult. There was an article published by Carla Cremer recently. She talked about a secret competitive ranking system in which participants get points subtracted if they have IQs of less than 120.

RUS: Oh! I was thinking I might ask people engaged in transhumanism if they even believe in IQ as a legitimate measurement of intelligence. 

ÉT: I’d be really curious to know. Because I do think that IQ realism is pretty widespread within this community. Bostrom has written that IQ is good but imperfect. So they sort of lean towards IQ realism.

Does Anyone Call Themselves a TESCREAList?

RUS: You noted that Marc Andreessen has identified himself with this bundle that you co-created. Have others directly embraced the term as a positive identity, that you’re aware of?

ÉT: No, not really. Hanania acknowledges it in arguing that the ‘tech right’ is a better term. He said we were basically right about what the streams are, what the bundle is, but that ‘tech right’ is a better term. I’m not surprised that there aren’t more people coming out and saying they identify as TESCREAL.

RUS: Maybe after this runs on Mindplex there’ll be a bunch of people deciding that is who they are. Oh dear. Whatever have we wrought?

Eugenics

Credit: Tesfu Assefa

RUS: Let me try a thorny issue: eugenics. What about intervening at the germline to prevent horrific diseases and stuff like that? Do you think there can be a legitimate use for that sort of thing?

ÉT: Yes. I do think that could be permissible under certain circumstances. I mean, I have worries about the limits of what that technology will be used for. Will it be used just for what we would intuitively call therapeutic purposes? My main concern is that it could easily open the door to an ‘enhancement’ approach. As soon as you’re talking about enhancements, there are questions like, “What criteria are you using to judge whether some modification is better?” That’s where you get into the issue of ‘super-classes’ which Bostrom has written about. 

A lot of that is probably ableist. What ‘enhancing’ means for somebody like Bostrom might be completely different than what I might mean. Right?

RUS: I must confess I had a knee-jerk reaction the first time I heard the term ableism. People should be able. Generally, we should be in favor of abilities and not get into a place where people are worshiping their broken parts, so to speak. At the same time, people should have the right to choose how they want to be. But I’m uncomfortable with the idea that people would want to maintain what most people would consider a kind of brokenness. And I wonder: where’s the line for that?

ÉT: My sense is that words like ‘broken’ are normative terms. I think disability rights activists or disability scholars have a point when they say, “there’s an infinite number of things that I can’t do.” I can’t fly. The notion of disability, some would argue, is just a social construct. We live in a society that is unaccommodating for somebody who can’t do something that a statistical majority of other people can do. That’s what a ‘disability’ is. So maybe, if you fix society, the disability goes away even if the inability remains.

RUS: How would you think about physical problems that make life more difficult for an individual, or for the people around them? 

ÉT: There are two aspects to that. One is the inability and the other is the society one lives in. So you can fix or eliminate disability by improving society. And then there’s a good argument that a lot of the inabilities that we classify as disabilities would not be seen as bad. It’s just different. There are people of different heights. There are people who can’t walk. I find my heart is filled with a lot of empathy for the disability scholars — some of whom are disabled themselves — arguing that they wouldn’t want to change. And their view that we shouldn’t aim for a world in which people like them no longer exist.

Techno Gloom

RUS: Do we need to worry about extreme forms of tech negativism? For example, the person who can’t walk on their own will rely on good technology to get around, and will probably hope for even better technology. And there’s a real move towards extreme tech negativism now, clearly provoked partly by the sort of TESCREAList people that you’re bringing to the surface. I wonder if you’re a little worried that there might be an overreaction – a tech-reactionary kind of move that is actually harmful?

ÉT: Not so much, because I haven’t seen a lot of evidence of it – at least not in my social media feed.

RUS: You don’t live near San Francisco…

ÉT: To put it simplistically, a lot of the people in my circle are for good technology, not bad technology. Maybe small-scale technologies, which doesn’t mean low-tech. So you can have advanced technologies that would enable somebody who can’t walk to get around – but without the megacorps, and all the control and all the risk that comes with that.


Uncanny Scammy Valley: Misinformation in the AI World

Whisk anyone from the turn of the millennium to the present day – before deepfakes and ChatGPT and LLMs were part of the world’s lexicon – and they would find themselves lacking an important set of survival skills: defences against their phone beeping to tell them, “You have a virus on your computer!” or “There is a security issue on your bank account, click here to fix it!”

Scamming is all too commonplace in our communication-overloaded world and, with AI, it’s getting scarier than ever – and ever more difficult to distinguish reality from fiction.

The History of Scamming

Scams are nothing new. Parlour tricks to deprive honest people of their money are as old as humanity itself. From thimblerig (the cups and ball trick) to Hegestratos to Ponzi, deceiving others for your own gain is one of humanity’s saddest skills. Homo sapiens’ ability to trick and to swindle has even been theorised as the reason for our ultimate ascension over the Neanderthals. Neanderthals were stronger, faster and arguably smarter than us, but they fell for it every time we cried, ‘Look, a mammoth!’ before burying our flint axe in their back.

Every new technology has been used as a new way to deceive. When mass telecommunication was first developed, it wasn’t long before auto-diallers began being used for fraud. Such scams still happen today, with the ‘Can You Hear Me?’ scam causing widespread misery as late as 2020.

Phishing emails appeared almost as soon as the internet went mainstream. Early online users often had to reformat their computers multiple times after a vicious virus ransacked the data on them – before we all gained the collective wisdom not to trust everything we read online, or click on anything whose source we don’t know. Scams were an irritant to most, but an unlucky few saw themselves bankrupted by scammers draining their accounts entirely.

Why We’re Not as Scam-Resistant as We Think

We like to think we’ve evolved beyond being deceived so easily, but the numbers tell a different story. In 2022, reported consumer losses to fraud totaled $8.8 billion, a 30% increase on the year before. Our online hygiene isn’t getting better, it’s getting worse. And this is before we even factor in crypto where, with transactions being irreversible, there are fewer safeguards in place to prevent losses to fraud. Even power users with burgeoning wallets have their wealth drained with just a few misguided clicks.

Why? Are we all just suckers? No, we are not. We’re wiser, more educated, more perceptive and more technologically integrated than at any previous time in history. So what is going on?

First, we are more lonely than ever, more distanced from each other in a more deracinated society. Our individual experience is more atomised and more online – and thus more prone to blindspots. Second, information overload and the increasingly simulated world make it harder to distinguish the real from the unreal – even for the sophisticated. The sheer amount of processing we have to do when traversing any public forum is beyond what previous societies ever had to do. Voices and opinions, truth and lies from all sides surround us like a fog, making it hard to see the straight-and-narrow road of consensus society (a fact illustrated by the bitter political polarisation we see today).

Credit: Tesfu Assefa

How AI Is Used to Scam Everyone

And thirdly, but most importantly, scams are getting really good. Scary good. A large professional industry has now formed around scamming people out of their money. It’s not just shysters and the morally bankrupt stealing from grandma. It’s state-sponsored aggression by malevolent actors targeting enemy citizenry. Certain countries have teams devoted to scamming the rich citizens of their rivals in order to keep their own economies afloat.

And they have the tools to do it. There is a notion in robotics of the ‘uncanny valley’ – the unheimlich of gothic literature – of things so close to real that they unnerve you. Yet as generative AI produces images, video, and speech ever closer to the real thing, and LLMs produce, orchestrate and distribute text with ever greater verisimilitude to a normal human’s writing, that uncanny valley fills in – and it becomes harder to take anyone on trust, even the ones you love.

Stories of scammers emulating loved ones’ voices to make desperate phone calls pleading for help and money are truly chilling. If your mother or brother or partner or child rang you screaming, would you think twice? And what if they FaceTimed you – a fully deepfaked visual imprint in distress? The horror would have you sending money before you analysed the render too closely.

Even if such depraved tactics are not used, AI tools mean a scammer no longer needs to spend hours, days, or weeks building trust with a target before finally getting the information they need. It’s point-and-click fraud. Stories of teenagers being blackmailed with deepfaked nudes of themselves are just the latest in a litany of horrors that some seem determined to create with our powerful new technological advances.

How to Stay Safe Online

We live, then, in scary times. It is the responsibility of everyone to understand what kind of dark valley our new AI tools can dig, and it’s the responsibility of governments to educate and inform their citizenry about the dangers they face. Despite this black mirror, there is hope. AI sentinels protecting each individual and helping them discern the real from the fake; more powerful forms of self-custody of wealth and better banking infrastructure to protect against fraud; and a wider understanding of the brave new world we live in can, hopefully, in time, stop scammers in their tracks.

Until then, stay safe out there. Trust me when I tell you to not trust anyone.


The Top 20 AI Cryptocurrencies For 2024 (Part 1)

Introduction 

AI cryptocurrencies, as we explained in our last article, really came to mainstream attention in 2023 – at least among cryptocurrency enthusiasts – following the breakthrough of popular large language models (LLMs) such as OpenAI’s ChatGPT and the image generator Midjourney. On top of that, 2023 also gave us the Summer of Trading Bots.

While some experts like DeFi pundit Andre Cronje believe that artificial intelligence and blockchain technology (presently) shouldn’t mix, there is a growing cohort of promising AI projects that integrate both blockchain and cryptocurrency technology on chains like Ethereum and Solana in order to sustain and grow their ecosystems. 

Mindplex Magazine is excited about the use of artificial intelligence in the world of Web3 and cryptocurrency, and this article is the first in a series that focuses on identifying the top AI cryptos.

With both AI and Web3 innovation expected to continue their ascent next year, here are five of the best AI cryptocurrency projects for 2024:

Credit: Tesfu Assefa

1. SingularityNET (AGIX): The AI Marketplace

Name
SingularityNET

Ticker
AGIX

Year Founded & Team

Founded in 2017 by Ben Goertzel (CEO)

Institutional Investors

Kosmos Ventures, Fundamental Labs, LDA Capital, Elizabeth Hunker, Zeroth.AI

What is SingularityNET?

SingularityNET is a decentralized platform that enables users to buy and sell artificial intelligence (AI) services. The platform uses blockchain technology to create a marketplace where AI service providers can offer their services to artificial intelligence and blockchain users.

Main Purpose

To provide a secure and transparent way for users to buy and sell high-quality AI services.

What Are Its Use Cases?

SingularityNET is basically a one-stop shop hub for all things AI. You can build new AI apps, carry out artificial intelligence research, and even launch your project into the real world. It’s like a marketplace where you can find, mix, and match various AI services to meet your unique needs.

How It Utilizes AI:

SingularityNET’s platform isn’t just about offering artificial intelligence services; it also uses machine learning to make your life easier. It helps you discover services that fit your needs, ensures you’re getting a fair price, and even checks the quality of the service you’re using.

Why Is SingularityNET Important?

SingularityNET democratizes AI usage for everyone. It does this by connecting developers to users to sell and buy AI services in a secure and transparent way. By making AI marketplaces more efficient and fair, it helps to bring the cost of machine learning down and raise its impact. 

What is the AGIX token used for? 

The AI crypto token AGIX helps SingularityNET manage transactions, govern its DAO and enable global access to artificial intelligence services. The token works across multiple chains like Cardano and Ethereum, and AGIX holders can stake their tokens with interest to provide platform liquidity.

Chains: Ethereum and Cardano
Fully diluted market cap: $358 million (Oct 2023)
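
On Ethereum, AGIX is issued as a standard ERC-20 token, so holdings can be inspected with ordinary Web3 tooling. Below is a minimal sketch using web3.py (v6-style API); the RPC endpoint and the contract and wallet addresses are placeholders rather than real values, so treat it as an illustration of the pattern, not copy-paste-ready code.

```python
# Minimal sketch: reading an ERC-20 balance (e.g. AGIX on Ethereum) with web3.py.
# The RPC endpoint, token contract address, and wallet address are placeholders.
from web3 import Web3

ERC20_ABI = [  # only the two read-only functions this sketch needs
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

def token_balance(rpc_url: str, token_address: str, wallet: str) -> float:
    """Return a wallet's token balance, adjusted for the token's decimals."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    token = w3.eth.contract(address=Web3.to_checksum_address(token_address),
                            abi=ERC20_ABI)
    raw = token.functions.balanceOf(Web3.to_checksum_address(wallet)).call()
    return raw / 10 ** token.functions.decimals().call()

if __name__ == "__main__":
    # Hypothetical placeholder values -- replace with a real RPC URL and addresses.
    print(token_balance("https://ethereum-rpc.example.com",
                        "0x0000000000000000000000000000000000000001",
                        "0x0000000000000000000000000000000000000002"))
```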

2. The Graph: The AI Librarian

Name
The Graph

Ticker
GRT

Year Founded & Team

The Graph was founded in 2018 by Yaniv Tal, Brandon Ramirez, and Jannis Pohlmann, and in December 2020 it launched its mainnet.

Institutional Investors

Early investors included Launch Code Capital, Ganesh Kompella and Future/Perfect Ventures.

Project Definition

Think of The Graph as a super-smart librarian for blockchain data. It sorts and organizes this data into neat categories called subgraphs. This helps developers and analysts find exactly what they need, whether they’re working on digital currency exchanges, virtual collectibles, or other blockchain-based projects.

Main Purpose

In simple terms, The Graph aims to make the messy world of blockchain data a whole lot easier to navigate through the use of artificial intelligence. If you’re looking to build a decentralized app (dApp) or just trying to analyze blockchain data, this platform is your go-to resource.

Use Cases

Primarily, The Graph is a big help in two areas: building dApps and researching blockchain data. Developers get a smoother way to integrate blockchain data into their apps, while researchers get a more streamlined method to sift through and analyze the data they need.
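
Concretely, ‘finding what you need’ on The Graph means sending a GraphQL query to a subgraph’s HTTP endpoint. The sketch below shows the general shape in Python; the endpoint URL and the entity and field names are illustrative assumptions, since every subgraph defines its own schema.

```python
# Sketch of querying a subgraph over GraphQL with a plain HTTP POST.
# The endpoint and the entity/field names below are illustrative placeholders;
# adapt the query to the schema of the subgraph you actually use.
import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example-org/example-subgraph"

QUERY = """
{
  tokens(first: 5, orderBy: tradeVolume, orderDirection: desc) {
    id
    symbol
    tradeVolume
  }
}
"""

def run_query(url: str, query: str) -> dict:
    """POST a GraphQL query and return the decoded 'data' payload."""
    response = requests.post(url, json={"query": query}, timeout=30)
    response.raise_for_status()
    payload = response.json()
    if "errors" in payload:  # GraphQL-level errors can come back with HTTP 200
        raise RuntimeError(payload["errors"])
    return payload["data"]

if __name__ == "__main__":
    for token in run_query(SUBGRAPH_URL, QUERY)["tokens"]:
        print(token["symbol"], token["tradeVolume"])
```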

How it Utilizes AI:

AI plays a handy role here. It’s like the assistant librarian, helping you find the most relevant and high-quality subgraphs. It’s also the pricing guru, making sure you’re not overpaying for the data you’re after. Overall, machine learning is used to enhance The Graph’s functionality, ensuring the info you get is both reliable and fairly priced.

Why should you care about The Graph?

Because it’s essentially making blockchain data more approachable and user-friendly. Thanks to its AI features, it’s an invaluable tool for anyone wanting to build dApps or analyze blockchain data. It’s smoothing the path toward wider adoption of blockchain technology, making life easier for developers and data nerds alike.

What is the GRT token used for?

The Graph’s GRT token is the network’s native cryptocurrency, used for staking, delegating, contributing to network governance, and payments to network participants.

Chains: Ethereum, Solana, Avalanche and more
Fully diluted market cap: $922 million, with an uncapped, inflationary token supply (Oct 2023)

3. Fetch.AI (FET): The AI Agency

Name
Fetch.AI

Ticker
FET

Year Founded & Team

Founded in 2017 by Humayun Sheikh (CEO), Toby Simpson (CTO), and AJ Gordon (COO)

Institutional Investors

Fetch.AI raised $75m through investors like Outlier Ventures, GDA Group, BitGet and DWF Labs.

Project Definition

Fetch.AI is a game-changer in a few areas. It’s making supply chains smarter by automating stuff like keeping track of inventory and filling orders through the use of autonomous agents. If you’re into the Internet of Things (IoT), it’s also useful there—managing how devices collect and share data. Plus, it gives people a way to monetize their data.

How it Utilizes AI:

First off, Fetch.AI gives you the tools you’d need to build one of these smart agents. Once you’ve built one, it helps you deploy it to do whatever job it was designed for. And it doesn’t stop there; it uses artificial intelligence to make sure a bunch of these agents can work together efficiently to reach common goals.
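
To make ‘smart agent’ less abstract, here is a toy observe-decide-act loop of the kind such an agent runs. This is not the Fetch.AI SDK – the class, the inventory scenario and the numbers are all hypothetical – but it shows the pattern the platform is designed to automate and coordinate at scale.

```python
# Toy autonomous-agent loop (not the Fetch.AI SDK): observe some state,
# decide, act, repeat on a fixed interval. All names here are hypothetical.
import time
from dataclasses import dataclass

@dataclass
class InventoryAgent:
    item: str
    reorder_threshold: int
    stock: int

    def observe(self) -> None:
        """Stand-in for reading real marketplace or IoT sensor data."""
        self.stock -= 3  # pretend three units were sold since the last check

    def act(self) -> None:
        """Decide and act: here, place a simulated restock order if stock is low."""
        if self.stock < self.reorder_threshold:
            print(f"[{self.item}] stock={self.stock}: placing restock order")
            self.stock += 20
        else:
            print(f"[{self.item}] stock={self.stock}: no action needed")

def run(agent: InventoryAgent, interval_seconds: float, cycles: int) -> None:
    """The core loop every autonomous agent shares: observe -> decide -> act."""
    for _ in range(cycles):
        agent.observe()
        agent.act()
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run(InventoryAgent(item="widgets", reorder_threshold=10, stock=25),
        interval_seconds=1.0, cycles=5)
```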

Why Is It Important?

Fetch.AI is making businesses run smoother by automating a lot of their tasks. Because it’s all built on AI, it’s leveling up the marketplace for these autonomous agents, making this cool tech more available and cheaper for everyone.

What is the FET token used for? 

Fetch.AI’s FET is an artificial intelligence cryptocurrency used as a utility token for the Fetch ecosystem and network transaction fees. FET is required to find, create, deploy, and train autonomous economic agents and is essential for smart contracts, oracles, and transactions.

Chains: Ethereum, BNB Chain, and Cosmos
Fully diluted market cap: $267 million (Oct 2023)

4. Ocean Protocol (OCEAN): The AI Data Merchant

Name
Ocean Protocol

Ticker
OCEAN

Year Founded & Team

Founded in 2017 by Trent McConaghy (CEO), Bruce Pon (CTO), and Brian Singer (COO)

Institutional Investors

Ocean Protocol has 20 institutional investors, including Outlier Ventures, Amino Capital and Blockchain Coinvestors.

Project Definition

Ocean Protocol is like an online marketplace, but specifically for data. The Singapore-based project uses the security of blockchain to help people buy, sell, and share data safely. It puts data owners in the driving seat, letting them decide who gets to access their data.
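
The mechanism behind that ‘driving seat’ is the datatoken: holding (or spending) a dataset’s token is what unlocks access to it. The snippet below is only a conceptual sketch of that gating idea – it is not the Ocean Protocol SDK, and the wallets and balance lookup are invented for illustration.

```python
# Conceptual sketch of datatoken gating (not the Ocean Protocol SDK):
# a dataset is served only to wallets holding at least one unit of its token.
from typing import Callable

def make_gate(balance_of: Callable[[str], float], min_tokens: float = 1.0):
    """Return a function deciding whether a wallet may access the dataset."""
    def can_access(wallet: str) -> bool:
        return balance_of(wallet) >= min_tokens
    return can_access

if __name__ == "__main__":
    # Fake balances for the demo; in practice this would be an on-chain
    # balanceOf call against the dataset's datatoken contract.
    balances = {"0xBUYER": 1.0, "0xSTRANGER": 0.0}
    can_access = make_gate(lambda wallet: balances.get(wallet, 0.0))
    for wallet in balances:
        print(wallet, "->", "access granted" if can_access(wallet) else "access denied")
```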

Main Purpose

The big idea here is to give people a safe and clear-cut way to deal with data. Whether you’re buying, selling, or just looking to share, it’s all above board and secure.

Use Cases

So what can you do with Ocean Protocol? A lot, actually. If you’re a business or a researcher looking to team up and share data, then this is your playground. Got valuable data? You can sell it here. If you’re on the hunt for very specific data, Ocean Protocol can help you find it.

How Ocean Protocol Wields AI:

The Ocean platform uses AI to make its user experience plain sailing. It weeds out low-quality data so you’re only dealing with the good stuff. Plus, it uses artificial intelligence to hook you up with the data that suits your needs, and even helps to set a fair price for the data you’re interested in.

Why Is It Important?

Why does any of this matter? Well, Ocean Protocol is making the data world a better place. It’s more secure and transparent, so people are more willing to share. And thanks to AI, it’s also super efficient. This is great news for everyone from researchers to everyday data owners who want to unlock the value of their data.

What is the OCEAN token used for? 

The OCEAN token allows users to buy and sell data tokens and services, participate in governance, or stake within the Ocean Protocol ecosystem. 

Chains: Ethereum ERC-20 token
Fully diluted market cap: $450 million (Oct 2023)

5. Cortex (CTXC): The AI Garden

Name
Cortex

Ticker
CTXC

Year Founded & Team

Founded in 2017 by Li Tian (CEO), Li Mu (CTO), and Jianping Chen (COO)

Institutional Investors

Cortex has 9 institutional investors including FundamentalLabs, IOSG Ventures, and Global Blockchain Innovative Capital.

Project Definition

Cortex uses smart contracts to facilitate transactions and ensure fairness and transparency in the AI marketplace. Think of Cortex as a communal garden for AI—anyone can come in, plant a seed (an AI model), nurture it, and sell the fruits. All of this happens on a decentralized blockchain, meaning it’s not controlled by any single entity.

Main Purpose

The core mission of Cortex is to make AI as accessible as Wi-Fi in a coffee shop. By offering a decentralized space to create, run, share, and even make money off artificial intelligence models and build AI-infused decentralized apps (dApps), it’s breaking down the ivory towers that often make AI seem unattainable for regular folks.

Use Cases

Cortex is a jack-of-all-trades when it comes to AI. Whether you’re a developer, a researcher, or someone looking to make a buck off your AI model, Cortex serves as your one-stop shop.

How it Utilizes AI:

Here’s where it gets meta—Cortex uses machine learning to make its AI marketplace better. If you’re looking for a specific AI model, its algorithms will match you up. And just like an auctioneer, it dynamically sets the prices based on what’s in demand. Plus, it makes sure whatever you’re choosing is up to snuff quality-wise.
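
As a toy illustration of that ‘auctioneer’ behaviour – emphatically not Cortex’s real pricing mechanism, just the general idea – a listing price can be nudged up when a model is in heavy demand and down when it sits idle:

```python
# Toy demand-based price adjustment (illustrative only, not Cortex's algorithm).
def adjust_price(current_price: float, requests_last_hour: int,
                 target_per_hour: int = 100, step: float = 0.05) -> float:
    """Nudge the price toward demand: raise it when busy, lower it when idle."""
    if requests_last_hour > target_per_hour:
        return current_price * (1 + step)
    if requests_last_hour < target_per_hour:
        return max(current_price * (1 - step), 0.0)
    return current_price

print(adjust_price(10.0, 250))  # heavy demand -> 10.5
print(adjust_price(10.0, 40))   # light demand -> 9.5
```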

Why Is It Important?

In a nutshell, Cortex is almost like the Robin Hood of AI. It’s making sure that AI technology isn’t just for the elite, but for everyone who wants in on it. By offering a decentralized, open platform for artificial intelligence, it’s tearing down the walls that have traditionally kept people out of this exciting field.

What is the CTXC token used for?

The CTXC token is an ERC-20 asset and utility token for the Cortex platform, which is focused on AI and machine learning. It is used to pay for AI services, incentivize developers, and participate in the governance of the platform.

Chains: Ethereum ERC-20 token
Fully diluted market cap: $36 million (Oct 2023)


Digital Realms: The War for Sovereignty in Cyberspace

The internet is the infrastructure that supports our economy and society. Whoever controls it controls the world. Whoever can censor it, deny access to it, and control its output controls society. The internet is a permissionless network with countless participants, but access to it has nevertheless agglomerated towards centralised entities, whose influence grows by the day. Privacy is now a relic, and your access to the internet is less assured than you might think. The war for cyberspace hasn’t just begun – it’s been raging for decades, and the battle over the digital realm is no less vital than those waged in the physical one.

Erecting Digital Walls

The Great Firewall of China – the tongue-in-cheek name given to China’s mass surveillance, restriction, and gatekeeping of the internet – has for decades now inhibited its citizens’ access to data. Russia recently followed suit. Societies on the totalitarian end of the spectrum want more than anything to keep the internet under their control, and to deny access to global information.

It’s easy to see why. The internet, like communication technologies before it, lets societies communicate and distribute information en masse without the oversight of the elite. Remember that the printing press was heavily censored almost as soon as it was created, and for centuries afterwards – although in the end that didn’t stop the Lutheran reformation, and the messages of the newly minted Protestant movement were distributed in secret, smuggled under the cowls of renegade preachers.

Yet corporate America has its own issues with free internet access, with net neutrality under siege from ISPs who would like to discriminate and levy fees based on who is accessing the network, or what they are accessing (although, in fairness to the USA, its surrendering of oversight over ICANN and the DNS system to a multi-stakeholder model was a major move towards ‘decentralisation’ of the internet).

Meanwhile, the EU panics about the US-led cartel in cloud computing, and the fact that the majority of the world’s data is held in massive data farms controlled by US techopolies and routed through Amazon, Google, and Microsoft’s services – data used by national governments to serve their own ends, or wielded by corporations who finally rip off the fig-leaf of social conscience (remember that Google stripped ‘Don’t Be Evil’ from its corporate manifesto).

How AI Data Scouring Leads to Dystopia

The advance of AI is central to the current hubbub of concern over all of this. Mass harvesting of data is useless without appropriate indexing and, as anyone who uses Windows can tell you, even searching a hard drive for a file can be a difficult task. No matter how many data crunchers you put to the task, and how powerful your indexing software is – there is simply too much data to reliably capture, store and output in any meaningful way. 

Command-and-control technologies like this are still in their infancy, despite decades of research. Yet neural nets trained on this harvest of innocuously-generated data could lead to a dystopian future – one where you can say, ‘Hi DataGPT, please look up [John Maguire], give me a three-paragraph profile on who he is, and a verdict on whether he is an enemy of the state’. To think governments won’t use it is naive. In a decade, getting caught speeding might involve the cop asking his AI about you, and what you’ve been up to, before he decides whether to wave you on or shoot you down.

Credit: Tesfu Assefa

A Return to the Original Internet

The internet was originally dreamed up as a fully decentralised network, built to withstand the possible infrastructure-annihilating shocks of war or catastrophe. Over time, commercialisation crept in, and centralisation with it. Rather than connecting to any given server directly, people instead came to access the network through a single ‘node’: that of their ISP.

That was Web 1.0. But, in some ways, Web3 is an attempted return to the prelapsarian state first envisaged by the creators of the early internet, where activities and services run on a decentralised set of nodes and are permissionless, trustless, and free (in an access sense) forever – with no one able to revoke access, no great firewalls being erected and, in an ideal world, pseudonymous or anonymous privacy maintained.

Of course, Web3 currently needs the infrastructure rails of the ‘old internet’ to function. Yet as decentralised scalability improves, there is perhaps a future in which an internet exists which no nation state can colonise, where privacy is retained, and which enshrines the rights of the individual. Excitement over crypto starts with the power of trustless decentralisation, with tokens that give you the right to wander these digital realms without fear.
