Runes Protocol: Did It Ruin Bitcoin or Save It?

The future of Bitcoin was at stake last week in two ways: with both the fourth halving and the launch of the Runes protocol, a new token standard for issuing fungible tokens directly on the Bitcoin blockchain. Runes lays a foundation that could shape the fate of the chain in the decades to come. Activated at block 840,000 on 20 April 2024, coinciding with the halving, Runes aims to provide a more efficient and responsible way of creating fungible tokens than existing options. Let’s dive into what Runes is all about, who created it, how it works, and what impact it could have on the Bitcoin ecosystem.

What is the Runes Protocol?

The Runes protocol is a new token standard that allows issuers to create fungible tokens on the Bitcoin blockchain in a more efficient way. It offers users a streamlined method for creating tokens that represent various assets, from stablecoins to governance tokens. Runes positions itself as a robust platform for token creation and management with all the security and immutability of Bitcoin. At least, that’s the official line. For Bitcoin maximalists, Runes and its predecessors Ordinals and BRC-20 are cynical money grabs that clutter and congest the world’s most important blockchain with their flood of transactions.

Rodarmor: The Mastermind Behind Runes

Bitcoin developer Casey Rodarmor, well-known as the creator of the Ordinals protocol, proposed Runes in September 2023. Building upon his experience with Ordinals, which opened the door to NFTs on Bitcoin, Rodarmor envisioned Runes as an improved token standard that addresses the limitations of existing solutions like the BRC-20 standard, which he felt required too many steps to complete and wasn’t built in accordance with Bitcoin’s ethos.

Rodarmor designed Runes to be a simple protocol with minimal on-chain footprint and responsible UTXO management. UTXOs, or Unspent Transaction Outputs, represent individual units of Bitcoin value that have not yet been spent. Unlike the BRC-20 standard, which is complex and produces junk UTXOs that congest the Bitcoin network, Runes aims to be more efficient and user-friendly.

Other fungible token protocols on Bitcoin, such as RGB and Taproot Assets, rely on off-chain data storage. Runes distinguishes itself by keeping all token information on-chain using OP_RETURN, a Bitcoin script opcode for storing data. In this way, Runes ensures that asset metadata remains tightly integrated with the base layer.

Under the Hood: How Runes Works

Runes adopts a UTXO-based model that aligns seamlessly with Bitcoin’s design. When a Rune token is created (‘etched’), minted, or transferred, a protocol message called a runestone is generated. Runestones contain all the necessary information, including the token ID, output index, and amount, encoded in an OP_RETURN output.
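For the curious, the shape of a runestone output can be sketched in a few lines. This is a simplified illustration based on the public ord specification, not a working encoder: real runestones tag each field and delta-encode rune IDs, and the field layout below is hypothetical.

```python
# A minimal sketch of a runestone payload (simplified: real runestones
# tag each field and delta-encode rune IDs, per the public ord spec).

OP_RETURN = 0x6a
OP_13 = 0x5d  # the opcode that marks an OP_RETURN output as a runestone

def encode_varint(n: int) -> bytes:
    """Unsigned LEB128: 7 bits per byte, high bit set on all but the last."""
    out = bytearray()
    while n >> 7:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

def runestone_script(payload: bytes) -> bytes:
    """OP_RETURN, OP_13, then the payload as a single direct data push."""
    assert len(payload) <= 75, "sketch handles only small direct pushes"
    return bytes([OP_RETURN, OP_13, len(payload)]) + payload

# Hypothetical message: a few integers standing in for tagged fields
# (block height, tx index, amount, output index) -- layout is illustrative.
payload = b"".join(encode_varint(v) for v in (840_000, 1, 10_000, 1))
script = runestone_script(payload)
assert script[:2] == bytes([OP_RETURN, OP_13])
```

The key point is that the message lives entirely in a provably unspendable OP_RETURN output, so runestones never bloat the UTXO set.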

The token supply of a Rune is stored within a single UTXO, with a maximum supply of approximately 340 undecillion (340 followed by 36 zeros). Each Rune has a divisibility parameter that determines the number of decimal places it can have, up to a maximum of 38.
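A quick sanity check on those numbers: the quoted maximum of roughly 340 undecillion matches the largest value a 128-bit unsigned integer can hold, consistent with Runes representing amounts as u128 values (an assumption based on the public protocol description).

```python
# The "~340 undecillion" cap matches the maximum of a 128-bit unsigned
# integer (assumption: Runes stores token amounts as u128 values).
MAX_SUPPLY = 2**128 - 1
assert MAX_SUPPLY == 340282366920938463463374607431768211455  # 39 digits

# A rune's divisibility sets how many decimal places are displayed:
# an on-chain amount of 150_000_000 with divisibility 8 shows as 1.5.
def display(amount: int, divisibility: int) -> str:
    return f"{amount / 10**divisibility:g}"

assert display(150_000_000, 8) == "1.5"
```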

New Runes are created in a process called etching, where the token’s properties, such as its name, divisibility, symbol, pre-mine amount, and minting terms, are defined. Once etched, the Rune can be minted according to the established terms, with the minter specifying the Rune ID and the desired quantity.

Transferring Runes is accomplished through ‘edicts’ – instructions that define how tokens move from inputs to outputs within a transaction. Edicts support batch transfers, airdrops, and a transfer of all remaining units of a specific Rune ID in a single transaction.

Runes vs. BRC-20 and Ordinals

Runes vs BRC-20

While both Runes and BRC-20 are token standards built on the Bitcoin blockchain, there are several key differences between the two.

BRC-20 is a meta-protocol that relies on the Ordinals protocol. This means that BRC-20 inherits the complexity of Ordinals, and requires multiple transactions for minting and transferring tokens. In contrast, Runes is a standalone protocol that operates independently of Ordinals, allowing it to create and manage tokens more efficiently.

Another significant advantage of Runes over BRC-20 is its simplified transaction structure. With Runes, minting and transferring tokens can be done in a single transaction, reducing the overall on-chain footprint and minimizing the creation of unnecessary UTXOs. This streamlined approach leads to improved scalability and a more user-friendly experience for token issuers and holders.

Runes vs Ordinals

Although both Runes and Ordinals are protocols built on top of the Bitcoin blockchain, they serve different purposes. Ordinals is primarily focused on creating and managing non-fungible tokens (NFTs) by inscribing data onto individual satoshis. These inscriptions are unique and can represent various types of digital assets, such as artworks, collectibles, or even text.

On the other hand, Runes is designed specifically for fungible tokens, which are interchangeable and divisible. 

The Potential Impact of Runes on Bitcoin

The Runes protocol could have far-reaching implications for the Bitcoin ecosystem, both good and bad. Developers can use Runes to create various types of fungible tokens, potentially attracting a wider user base and expanding Bitcoin’s utility beyond its primary function as a digital currency.

As more projects build on top of Runes, the increased transaction volume could generate additional revenue for miners in the form of transaction fees. This is particularly relevant in light of the halving of the Bitcoin block reward: added fee revenue would help offset the reduced block subsidy.

Moreover, Runes could spur innovation within the Bitcoin developer community. Projects like RSIC, a metaprotocol that combines Ordinals with yield-farming, have already emerged in anticipation of Runes’ launch. As developers explore new use-cases and build novel applications on top of Runes, the Bitcoin ecosystem could witness a surge in creativity and experimentation.

However, Runes has, in its short history, also attracted an avalanche of scam or low-quality projects that offer little to no chance of a return on investment.

Credit: Tesfu Assefa

The Road Ahead for Runes

Casey Rodarmor’s next plan is to introduce direct trading between users, potentially reducing reliance on centralized exchanges and mitigating issues like Replace-By-Fee (RBF) sniping. Additionally, the approval of the OP_CAT Bitcoin Improvement Proposal (BIP) could pave the way for bridging Runes tokens to Layer-2 networks, enhancing scalability and interoperability.

As the Bitcoin community prepared for the launch of Runes, excitement built around the potential for a new era of token innovation on the world’s most secure and decentralized blockchain. With its focus on simplicity, efficiency, and responsible UTXO management, Runes aims to address the limitations of existing token standards and to provide a solid foundation for the growth of the Bitcoin ecosystem.

Only time will tell how developers and users will receive and adopt Runes. However, one thing is certain: its activation at block 840,000 marks a significant milestone in Bitcoin’s ongoing evolution, opening up new possibilities for token creation, management, and exchange on the original and most secure blockchain.

The Runes protocol has the potential to bring numerous benefits to the Bitcoin ecosystem:

  • Firstly, Runes can attract a wider user-base by enabling various types of tokens, such as utility tokens, governance tokens, or even stablecoins. This increased diversity of use-cases can draw new users to the Bitcoin network, driving adoption and fostering a more vibrant and inclusive ecosystem.
  • Secondly, the increased activity generated by Runes can make the entire Bitcoin network more sustainable. As more users engage with Runes-based tokens, demand for block space will increase, leading to higher transaction fees. These fees can help keep miners securing the network, especially as block rewards diminish.
  • Lastly, Runes can serve as a catalyst for innovation and experimentation within the Bitcoin ecosystem. By providing a standardized and efficient platform for issuing tokens, Runes can lower the barriers to entry for developers and entrepreneurs who want to build new applications and services on top of Bitcoin. This can lead to a proliferation of novel use-cases, and a more dynamic, resilient, and interesting ecosystem.

Runes provides a platform for token-related activities directly on the Bitcoin blockchain, and can help drive transaction fees, nourishing a sustainable mining ecosystem. Even if some of the tokens created through Runes are shitcoins or memecoins, Rodarmor argues that the fees generated from these activities are still valuable for the network’s security.

Moreover, Rodarmor sees Runes as a way to bring more users and activity to the Bitcoin ecosystem. This increased adoption and engagement can further strengthen the Bitcoin network and its position as the world’s leading cryptocurrency.

How Runes Works

  • Etching is the process of creating a new Rune token and defining its properties. This is done through a special transaction that includes an OP_RETURN output containing the token’s metadata, such as its name, symbol, and any additional attributes.
  • Minting refers to the act of creating new units of a Rune token. The minting process involves specifying the token ID, which is derived from the etching transaction’s block height and transaction index. Minting can be done through an open process, allowing anyone to participate, or it can be restricted based on predefined terms set during the etching process.
  • Transferring Runes involves moving tokens from one UTXO to another. This is accomplished through a transaction that consumes the input UTXOs containing the tokens and creates new output UTXOs with the updated token balances. The transfer process is governed by a set of instructions called ‘edicts’. These edicts specify the token ID, amount, and destination UTXO.
  • In the event of an error during the etching, minting, or transferring process, a ‘cenotaph’ is created. Cenotaphs are runestones with invalid or unrecognized data, and they cause the associated tokens to be burnt. This mechanism encourages responsible UTXO management and helps maintain the integrity of the Runes protocol.
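The transfer-and-burn logic described above can be sketched as a toy balance-allocation function. This is an illustrative simplification, not the real implementation: real edicts use delta-encoded rune IDs and u128 amounts, and leftover units go to a designated pointer output rather than always output 0 as assumed here.

```python
# Toy model of edict-based transfers for a single rune (simplified).
def apply_edicts(input_balance: int, edicts, num_outputs: int):
    """Distribute `input_balance` units across transaction outputs
    according to (amount, output_index) edicts. An edict pointing at a
    nonexistent output makes the runestone a cenotaph: tokens burn."""
    outputs = [0] * num_outputs
    remaining = input_balance
    for amount, vout in edicts:
        if vout >= num_outputs:   # malformed edict -> cenotaph
            return None            # the rune balance is burned
        amount = min(amount, remaining)
        outputs[vout] += amount
        remaining -= amount
    outputs[0] += remaining        # leftover units (simplified default)
    return outputs

assert apply_edicts(100, [(60, 1), (30, 2)], 3) == [10, 60, 30]
assert apply_edicts(100, [(50, 9)], 3) is None  # burn on invalid edict
```

The cenotaph rule is what makes mistakes expensive, which in turn nudges wallets toward careful, responsible UTXO handling.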

Conclusion

Existing token standards, such as BRC-20, have certain limitations. Every time they are minted or transferred, multiple transactions have to pass through the Bitcoin blockchain, and this leads to increased complexity and network congestion.

In contrast, Runes offers a streamlined approach, allowing you to create and transfer tokens with minimal on-chain footprint and responsible UTXO management. Fewer transactions are needed, and Bitcoin’s limited block space is used more efficiently. The result is a more scalable solution for issuing tokens.

Conversely, though, the protocol is still young and has had to deal with some adversity. Proponents of BRC-20 feel that Runes projects are too centralized, while others feel Rodarmor’s design was nothing more than a cynical money grab. Only time will tell whether Runes tokens will survive, let alone thrive. As Samson Mow told me in an interview last year at Bitcoin Miami, “it’s just noise”.

It pays to zoom out and see where other chains like Ethereum and Cardano are heading, and what’s possible with new protocols and even Layer-2 chains for Bitcoin. When mining rewards become negligible in the next 10 or 20 years, the network will have to rely on transaction fees to keep miners from revolting and shutting down their machines. Innovations like Runes are asking the right questions about how to keep them on board.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Guide to Solana’s AI Cryptocurrencies 2024

Introduction

Solana is a rebellious, young, cutting-edge blockchain. It has weathered frequent outages, a price collapse, and industry disdain after being backed early on by FTX and Sam Bankman-Fried. Its technical quality has helped it bounce back from its 2022 nadir, with the SOL asset price jumping from $8 to over $200 as users poured in, drawn partly by some lucrative airdrops.

All this adversity has battle-tested Anatoly Yakovenko’s Proof-of-History network, which has drawn so much traffic that it rolled out a patch this week to combat the severe network congestion of the past few weeks.

It boasts an exploding Web3 ecosystem of DeFi, NFT and memecoin projects that take advantage of its high-speed, low-cost transactions and minimal energy impact. It also provides fertile ground for the intersection of artificial intelligence (AI) and blockchain technology.

Solana’s unique architecture utilizes a parallelized execution environment, making it an ideal platform for AI projects that require fast and efficient transaction processing. The blockchain’s ability to handle a high volume of transactions quickly has drawn the attention of projects like io.net, a decentralized network that provides global GPU resources for AI and machine learning purposes.

With io.net’s upcoming launch and impressive $1 billion valuation, it’s clear that Solana is poised to become a major player in the AI cryptocurrency space, currently dominated by big players like SingularityNET, which has close ties with Cardano, the heavily peer-reviewed blockchain that takes a more academic, stable, but slower approach to development.

In this article, we’ll dissect this in more detail and also briefly go over some of the hottest Solana AI crypto projects out there right now. 

Warning: Solana’s low-cost fees and gung-ho ‘degen’ culture have drawn in not only some of the hottest Web3 projects, but also many crypto scams and vaporware projects that claim to use AI but don’t. Users should exercise extreme caution when investing and always conduct thorough research, including on the content in this article. None of it should be considered financial advice.

Why is Solana a Promising Platform for Crypto AI?

Solana’s unique architecture offers several key advantages that make it an ideal platform for AI applications in the crypto space:

  1. Scalability: Solana’s combination of Proof-of-History (PoH) and Proof-of-Stake (PoS) consensus mechanisms enables it to process thousands of transactions per second, making it highly suitable for AI-related computations.
  2. Low Transaction Costs: Solana’s low fees make it an attractive choice for AI applications, allowing developers to execute complex algorithms and models without the high costs associated with traditional cloud computing services.
  3. Fast Confirmation Times: Solana’s high-speed network ensures fast confirmation times for transactions, which is essential for real-time data processing required by AI algorithms.
  4. Open and Transparent: Solana’s open-source technology makes AI algorithms deployed on the network auditable, supporting fairness and accountability.
  5. Developer-Friendly Tools: Solana provides a comprehensive set of tools, libraries, and APIs, simplifying the development process and enabling seamless integration of AI algorithms with the blockchain.
  6. Robust Community: A thriving and supportive community of developers and enthusiasts is actively collaborating to build innovative AI solutions and foster a vibrant ecosystem.

Real-world Applications of Solana Crypto AI

The potential applications of AI within the Solana ecosystem are vast and varied:

  1. Decentralized AI Marketplaces: Solana’s scalability and low transaction costs make it an excellent platform for building decentralized AI marketplaces, where individuals and organizations can buy and sell AI algorithms, datasets, and models.
  2. AI-powered Financial Services: Solana can be used to create AI-powered financial services, such as automated trading systems, risk assessment models, and fraud detection algorithms, enabling more accurate decision-making and enhanced efficiency.
  3. Smart Contracts and AI Integration: Solana’s smart contract capabilities allow developers to integrate AI algorithms directly into blockchain applications, and build self-executing AI contracts and decentralized autonomous organizations (DAOs).
  4. AI-driven Supply Chain Management: By combining real-time data from various stakeholders with AI analytics, businesses can optimize inventory levels, predict demand, and identify potential disruptions, improving overall supply chain management.

Credit: Tesfu Assefa

Top Crypto AI Projects on Solana

  1. io.net (GPU resources)

Crypto AI platform io.net is a highly anticipated project in the Solana ecosystem. It aims to provide a decentralized network for AI and machine learning purposes. The platform is designed to offer global GPU resources, enabling developers and researchers to access powerful computing capabilities for training and executing AI models.

With its launch and airdrop planned for this month, io.net has garnered significant attention within the crypto community. The project has already secured an impressive $1 billion valuation and has raised $30 million in funding, speaking to strong interest and support from investors. The airdrop is likely to generate substantial buzz and excitement, as it presents an opportunity for individuals to gain exposure to a promising project at an early stage.

  2. Grass (Solana Layer2)

Grass is a unique project that uses a decentralized network to gather users’ public web data for training AI models. By developing a zero-knowledge (ZK) Solana Layer-2 solution, Grass allows users to participate in the network by installing a browser extension, effectively turning their browsers into nodes. This innovative approach enables the network to harness spare internet bandwidth from users and collect data from public websites.

  3. gmAI (AI Dapp builder)

Developed by the creator of the points-trading exchange Whales Market, gmAI is an advanced AI platform designed to improve the functionality and user experience of dApps on Solana. gmAI is an operating layer of AI capable of analyzing on-chain data, identifying smart contract risks, prompting on-chain swaps, and automating yield farming without custody issues. While its functions are mostly related to DeFi, gmAI intends to support various use cases, including on-chain gaming, DAO automation, and SocialFi.

  4. Nosana (GPU marketplace)

Nosana, a project that has seen a staggering 24,000% appreciation in the past year, is creating a decentralized network specifically designed for AI inference workloads. By establishing a marketplace for GPU power, Nosana enables individuals and companies to contribute or access computational resources, making AI model training and execution more cost-effective and scalable.

  5. Synesis One (AI model trainer)

Synesis One is building a decentralized solution for training AI models on the Solana blockchain. The platform allows users to earn cryptocurrency by completing small tasks, such as providing data for models, or labeling data. Synesis One aims to democratize AI development by making it easy for ordinary people to get involved.

  6. DatorAI (GPU marketplace)

DatorAI strives for inclusivity and accessibility in the AI and GPU-sharing landscape, offering a decentralized platform for accessing AI technologies. With features like revenue-sharing, GPU node rental and lending, and on-demand nodes, DatorAI empowers users and fosters innovation across various sectors.

  7. Dither (AI trading bot)

Dither, often mistaken for a simple Telegram trading bot, has larger ambitions. It aims to be an AI tool that utilizes open-source historical data to create tools for trading applications within and outside the crypto space. With upcoming applications like a ‘semantic sniper’ for evaluating soon-to-launch tokens and a Fantasy Football Draft Player Analysis, Dither showcases the versatility of AI in the Solana ecosystem.

  8. Solana Trading Bot

Bitsgap’s Solana Trading Bot harnesses AI to automate trading and optimize strategies. It monitors markets 24/7, identifying profitable opportunities and making autonomous decisions based on predefined strategies. 

The bot offers customizable modifications, such as the GRID bot for sideways markets and the DCA bot for volatile conditions. These bots can be tailored to individual preferences and risk tolerances. The Solana Trading Bot manages risk with AI and automates away constant manual monitoring to help users maximize profits while minimizing loss in the dynamic cryptocurrency market.

  9. Render (GPU media rendering)

Render, the most popular Solana AI cryptocurrency, is a decentralized GPU rendering platform that harnesses the power of distributed computing. It utilizes AI algorithms to allocate rendering tasks across a distributed network of GPUs, ensuring efficient and cost-effective rendering for artists and studios.

Conclusion

As Solana continues to mature and attract more innovative projects, it has the potential to become a major hub for AI-focused cryptocurrencies which play to its strengths. However, as with any emerging technology, it’s essential for users to exercise caution and thoroughly research projects before investing, as scams are not uncommon in the crypto space. By conducting proper due diligence, users can make informed decisions and participate in the exciting growth of Solana’s AI blockchain ecosystem.


Layer-3 (L3) Networks: Climbing Crypto’s Next Frontier

Intermediate Level Introduction

As Web3 and DeFi continue to grow and mature, any established smart contract platform requires scaling and speed to keep up with user demand, and a new frontier has emerged: Layer-3 chains.

These cutting-edge solutions are designed to build upon the foundation of Layer-1 and Layer-2 technologies, bringing forth a new era of scalability, interoperability, and specialized functionality.

Understanding the Blockchain Layers

Credit: Werner V.

To understand why Layer-3 networks are being touted as integral to the future success of crypto, it’s essential to understand the role of each layer in the blockchain ecosystem.

Layer-1: The Foundation

Also called base blockchains, Layer-1 networks such as Bitcoin, Ethereum, Solana and Cardano form the bedrock of the blockchain world. They provide stability and battle-tested security to the projects that build on them and rely on them to keep their assets safe.

Bitcoin and Ethereum are the two biggest L1 blockchains in the world. L1 blockchains are the basic infrastructure and security layer that Layer 2 (L2) blockchains build on. These networks provide the core functionalities, consensus mechanisms, and security protocols that decentralized transactions and applications require.

Layer-2: Scaling Solutions

Layer-1 networks face scalability challenges; Ethereum’s base layer can process only around 15 transactions per second. Layer-2 solutions have emerged in the last two years to address these limitations.

A Layer-2 chain is a secondary protocol that is built on top of an existing Layer-1 blockchain. 

The primary purpose of Layer-2 chains is to improve the scalability and efficiency of the whole blockchain by handling transactions off the main ledger. This approach helps alleviate network congestion. It also reduces transaction costs, which are often high on Layer-1 blockchains due to their limited throughput.

Popular examples of Layer-2 solutions include the Lightning Network for Bitcoin and Optimism and Arbitrum for Ethereum. Layer-2 solutions employ various techniques like state channels, sidechains, rollups, and plasma, each with distinct mechanisms for moving transactions off the main blockchain.

As such, Layer-2 chains are critical in blockchain architecture, offering a balance between decentralized security and high scalability.

Optimistic vs ZK Proof Rollups

Rollups execute transactions outside the main chain, but post transaction data back to it. This setup enables higher transaction throughput while inheriting the robust security of the Layer-1 blockchain.

Ethereum rollups can broadly be divided into two camps, namely optimistic rollups (OR) and zero-knowledge proof (ZK) rollups. 

Optimistic rollups like Optimism (whose OP Stack Coinbase used to build its Base network) and Arbitrum assume transactions are valid by default and only run computations in the event of a dispute. This significantly reduces the burden on the main blockchain, but entails a waiting period of up to seven days for withdrawals to ensure security.

In contrast, ZK rollups like zkSync and Starknet use zero-knowledge proofs to validate all transactions off-chain before posting a compact proof back to the main chain, providing immediate finality and reducing wait times but requiring more complex computation upfront. Ethereum creator Vitalik Buterin is a big fan of ZK rollups, as zero-knowledge proofs let you prove data is valid without sharing the underlying private information.
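The contrast can be caricatured in a few lines. This is purely an illustrative toy, not any real rollup’s logic: both styles post only a compact commitment to the off-chain batch on Layer-1; what differs is when a withdrawal becomes final.

```python
import hashlib

def commit(batch: list[bytes]) -> bytes:
    """L1 stores only a hash commitment to the off-chain batch."""
    h = hashlib.sha256()
    for tx in batch:
        h.update(hashlib.sha256(tx).digest())
    return h.digest()

CHALLENGE_WINDOW = 7 * 24 * 3600  # optimistic: ~7 days for fraud proofs

def withdrawal_delay(kind: str) -> int:
    if kind == "optimistic":
        return CHALLENGE_WINDOW  # final only after the dispute window
    if kind == "zk":
        return 0                 # validity proof is checked at posting time
    raise ValueError(kind)

batch = [b"alice->bob:1", b"bob->carol:2"]
assert len(commit(batch)) == 32
assert withdrawal_delay("zk") < withdrawal_delay("optimistic")
```

The design trade-off in miniature: optimistic rollups pay with time (the dispute window), while ZK rollups pay with upfront proving computation.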

Layer-3: The use case-specific layer

However, if the whole world is to transact and send data on one chain like Ethereum or Solana, we need to go bigger and faster. This is where Layer-3 chains come in.

Layer-3 networks focus on enabling seamless interoperability between different blockchains while providing specialized functionality tailored to specific use cases. Think of them as specialized, custom-built chains created for specific use cases, such as Web3 gaming or DeFi trading. 

Key Features of Layer-3 Networks

Layer-3 networks offer distinct advantages that set them apart from their predecessors:

Enhanced Scalability and Efficiency

By optimizing consensus mechanisms and data structures, Layer-3 networks achieve higher transaction throughput and processing capabilities. This allows decentralized applications (dApps) to perform with extreme efficiency, minimizing network congestion and computational bottlenecks.

Improved Interoperability and Accessibility

One of the key benefits of Layer-3 networks is their ability to seamlessly communicate and transfer assets between different blockchains. This interoperability means the crypto ecosystem becomes more connected and more accessible, enabling users to navigate and bridge across various networks with ease and far less risk.

Customization and Security

Layer-3 networks usually host only one dApp per network. This allows developers to customize their chains to their satisfaction, and implement security features tailored specifically to their dApp’s requirements. By providing a dedicated environment for each application, Layer-3 networks ensure optimal performance and enhanced security.

Notable Layer-3 Projects

Several promising Layer-3 projects have emerged, each bringing its own set of innovative features and use cases to the table.

Orbs

Orbs positions itself as a Layer-3 infrastructure project, bridging the gap between Layer-1, Layer-2, and the application layer. By providing an intermediary execution layer, Orbs enhances smart contract capabilities and introduces groundbreaking DeFi protocols such as dLIMIT, dTWAP, and Liquidity Hub.

Degen Chain

Built on the Base blockchain, Degen Chain is a Layer-3 platform designed to efficiently handle payment and gaming transactions. With its thriving ecosystem of tokens and rapid growth, Degen Chain aims to tackle scalability issues while maintaining low transaction costs.

Social media influencers have relentlessly shilled the chain in recent weeks for potential airdrops, and attracted a lot of investment as a result. 

Arbitrum Orbit

Arbitrum Orbit enables developers to create customizable Layer-2 or Layer-3 chains within the Arbitrum ecosystem. These chains can settle transactions on Arbitrum One, providing developers with the flexibility to tailor their application’s features and governance to their specific needs.

Other notable Layer-3 projects include Cosmos IBC, Polkadot, Chainlink, Superchain, and zkHyperchains, each contributing to the evolution of the blockchain landscape in their own unique ways.

Credit: Tesfu Assefa

Potential Impact and Use Cases

Layer-3 networks hold immense potential for the future of blockchain technology. Here are a few of the biggest pluses.

Decongesting the Main Chain

By processing transactions off-chain, Layer-3 solutions help alleviate congestion on the main blockchain. This leads to reduced network congestion and lower transaction fees, improving the overall user experience.

Enabling Complex dApps

The specialized functionality offered by Layer-3 networks opens up new possibilities to develop sophisticated and user-friendly dApps across sectors like DeFi, gaming, and social media. By providing a tailored environment for each application, Layer-3 networks enable developers to create highly optimized and efficient dApps.

Driving Mainstream Adoption

The ability to create customized, high-performance applications lowers the entry barrier for businesses and individuals, fostering a more inclusive and diverse crypto ecosystem.

Challenges and Considerations

While Layer-3 networks present exciting opportunities, they also face certain challenges that need to be addressed.

Centralization Concerns

Some critics argue that Layer-3 networks, being built on top of potentially centralized Layer-2 solutions, may further compromise the decentralization principles that are the soul of blockchain technology. Striking the right balance between scalability and decentralization remains a crucial consideration for Layer-3 networks.

Competition and Fragmentation

As more Layer-3 networks enter the fray, competition for users and developers is likely to intensify. This could lead to fragmentation within the crypto ecosystem, with liquidity and resources being spread across multiple platforms. Ensuring a cohesive and interconnected ecosystem will be a key challenge for Layer-3 networks.

Conclusion

Layer-3 networks can make blockchain technology more scalable, interoperable, and specialized than ever before by building upon the foundations laid by Layer-1 and Layer-2 networks. As the crypto landscape continues to mature, Layer-3 networks are poised to play a crucial role in shaping its future.

For beginners navigating the complex world of cryptocurrencies, understanding the significance of Layer-3 networks is essential. By staying informed about these cutting-edge developments, individuals can position themselves to capitalize on the opportunities presented by this next-generation technology.

As the blockchain ecosystem continues to evolve, Layer-3 networks will undoubtedly face challenges and obstacles. However, the potential benefits they offer in terms of scalability, interoperability, and specialized functionality are too significant to ignore. As more projects emerge and mature, the true impact of Layer-3 networks will become increasingly apparent. 


Bitcoin Halving 2024: Final Countdown To 19 April

Please note: This article is for educational purposes only and doesn’t constitute financial advice of any kind. Please invest responsibly. 

Intro

Tick-tock, tick-tock, get closer one more block. Despite this weekend’s crypto jitters after Middle East tensions, everyone and their cat is now tuned in for this week’s 4th Bitcoin Halving, scheduled for 19 April, 2024 around 6pm UTC, and anticipation, nerves and speculation levels are off the charts. 

This momentous event, which occurs roughly every four years, will reduce the mining rewards from 6.25 BTC to 3.125 BTC per block, cutting the daily issuance of new Bitcoins in half. 

Bitcoin supporters are gunning for that $100,000 price milestone with laser-eyed focus. Coming in the year after the Spot ETF approvals, the 2024 halving is Bitcoin’s biggest event yet and a defining moment for the entire crypto industry, raising high hopes that the ensuing supply pinch will kick off another crazy bull run.

What is the Bitcoin Halving?

The Bitcoin Halving is a pre-programmed event that is hardcoded into the Bitcoin protocol. It is designed to control the supply of new Bitcoins entering circulation, ensuring that the total supply will never exceed 21 million BTC. By reducing the block reward for miners every 210,000 blocks (approximately four years), the Halving helps maintain Bitcoin’s scarcity and deflationary pressure.
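
The subsidy schedule described above can be sketched in a few lines of Python. This is a simplified model for illustration only; real nodes compute rewards in integer satoshis inside Bitcoin Core:

```python
def block_subsidy(height: int) -> float:
    """Block reward in BTC at the given block height.

    The subsidy started at 50 BTC and halves every 210,000 blocks.
    (Simplified: actual consensus code works in integer satoshis.)
    """
    halvings = height // 210_000
    if halvings >= 64:          # the subsidy eventually rounds down to zero
        return 0.0
    return 50.0 / (2 ** halvings)

# The 2024 halving at block 840,000 cuts the reward from 6.25 to 3.125 BTC:
print(block_subsidy(839_999))   # 6.25
print(block_subsidy(840_000))   # 3.125
```

Summing this subsidy over every block until it hits zero is what caps the total supply just below 21 million BTC.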

Historical Behavior of 2012, 2016, and 2020 Halvings

Credit: Werner V.

To better understand the reason for optimism surrounding the upcoming 2024 Halving, let’s take a closer look at the previous three Halving events and how they affected Bitcoin’s price, narratives, and overall market sentiment.

The 2012 Halving

The first Bitcoin Halving took place on 28 November, 2012, when the price of BTC was just $12.35. In the year leading up to the event, Bitcoin’s price had recovered from the fallout of the first Mt. Gox hack, rising from $2.55 to $12.35. Post-Halving, Bitcoin’s price surged by an astonishing 2000%, reaching $260 in April 2013. This period was characterized by growing interest from tech-savvy individuals and online communities, laying the groundwork for Bitcoin’s future growth.

The 2016 Halving: Leaving behind Mt. Gox 

By the time of the second Halving on 9 July, 2016, Bitcoin had faced several challenges, including the devastating Mt. Gox hack in 2014 and a reputation tarnished by Dark Web-related criminal activity and prosecutions such as that of Silk Road’s Ross Ulbricht. Despite these setbacks, Bitcoin’s price rose from $430 to $650 in the months leading up to the Halving. 

In the post-Halving period, Bitcoin entered a phase of runaway growth, reaching nearly $20,000 by December 2017 – a staggering 2984% increase from the Halving day price.

The 2020 Halving: Institutions arrive

The 2020 Halving occurred on 11 May, 2020, amidst the global turmoil caused by the COVID-19 pandemic. Governments worldwide pumped trillions of dollars into their economies, leading to increased interest in Bitcoin as a hedge against inflation.

In the months preceding the Halving, Bitcoin’s price recovered from a significant drop, rising from $5,000 to $8,600. Post-Halving, Bitcoin’s price rallied to an all-time high of approximately $64,000 in April 2021, driven by a surge in institutional adoption and growing interest in decentralized finance (DeFi Summer), and those silly crypto jpegs known as NFTs.

Bitcoin Halving: A Three-Act Play (Hype, Disillusionment and Accumulation)

According to Galaxy Research, Bitcoin’s halving events have historically unfolded in three distinct phases, each characterized by unique market dynamics and investor sentiment. 

  • Hype: The first act, dubbed the ‘Hype Phase’, sets the stage with a surge in prices leading up to the halving. Excitement and anticipation build as market participants speculate on the potential impact of the reduced supply.
  • Disillusionment: As the curtain rises on the second act, known as the ‘Disillusionment Phase’, the market wakes up to a post-halving hangover. Prices dip or go sideways, leaving some investors questioning the immediate effects of the event. However, this act is merely an intermission, setting the stage for the grand finale.
  • Accumulation: The third and final act, the ‘Accumulation Phase’, is where the magic happens. Prices recover and embark on a sustained upward trajectory, rewarding patient investors who held through the previous two phases and market participants who recognize the long-term implications of the halving and the growing maturity of the ecosystem.

Opinions on prices after the 2024 Halving

Uber-bullish Bitcoin predictions are a satoshi a dozen right now. 

Michael Novogratz, CEO of Galaxy Digital, Morgan Creek CEO Mark Yusko, and analyst Tom Lee have all predicted that Bitcoin’s price will hit $150,000, while Skybridge founder Anthony Scaramucci thinks Bitcoin will hit at least $170,000 in the 18 months after the Halving. Additionally, billionaire investor Tim Draper has predicted that Bitcoin will reach $250,000 in 2024, and Cathie Wood’s ARK Invest has projected that Bitcoin could surpass $1 million in the long term.

Other notable predictions include Plan B, a prominent Bitcoin analyst, who regularly shares price analyses and predictions on Twitter, ranging from $100,000 to $1 million. Fred Thiel, Chairman and CEO of Marathon Digital Holdings, also forecasts Bitcoin reaching $120,000 post-Halving.

Credit: Tesfu Assefa

What to Expect After the 2024 Bitcoin Halving

As Bitcoin’s daily emissions get slashed from 900 BTC to 450 BTC, the cost of mining each new Bitcoin will roughly double overnight. Several factors are expected to contribute to Bitcoin’s potential price appreciation, including increased institutional adoption, growing interest from younger generations, and the reduced supply of new Bitcoins entering circulation. Additionally, the launch of Bitcoin ETFs and the continued development of Bitcoin’s core infrastructure, such as the Lightning Network and Taproot upgrade, are expected to further bolster Bitcoin’s growth.

All this should also help other crypto networks such as Ethereum, Cardano and Solana, and their AI cryptocurrencies, memecoins and DePIN projects: Bitcoin’s rising tide has historically lifted all boats. 

The Impact of Bitcoin Spot ETFs

One of the most significant developments in the lead-up to the 2024 Halving has been the introduction of Bitcoin Spot ETFs. These investment vehicles hold actual Bitcoins rather than futures contracts, and provide institutional investors with a way to enter the crypto market. With major players like BlackRock and Fidelity now holding hundreds of thousands of Bitcoins in their ETFs, the institutional demand for Bitcoin is stronger than ever.

The Cost of Bitcoin Mining After the Halving

While the 2024 Halving is expected to have a positive impact on Bitcoin’s price, it will also present challenges for miners. As the block reward is reduced, the cost of mining new Bitcoins will effectively double. Some analysts, such as CryptoQuant CEO Ki Young Ju, predict that mining costs could rise from $40,000 to $80,000 per BTC for miners using the popular Antminer S19 XP.
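
The arithmetic behind that doubling is straightforward: a miner’s cost per coin is daily operating cost divided by daily BTC earned, so halving issuance doubles the cost per coin if expenses stay flat. The numbers below are hypothetical, chosen only to illustrate the relationship, not taken from any real mining operation:

```python
def cost_per_btc(daily_cost_usd: float, hashrate_share: float,
                 daily_issuance_btc: float) -> float:
    """Production cost per BTC for a miner earning a fixed share of issuance."""
    daily_btc_mined = hashrate_share * daily_issuance_btc
    return daily_cost_usd / daily_btc_mined

# A hypothetical miner paying $40,000/day for 0.1% of network hashrate:
before = cost_per_btc(40_000, 0.001, 900)   # pre-halving: 900 BTC issued daily
after = cost_per_btc(40_000, 0.001, 450)    # post-halving: 450 BTC issued daily
assert after == 2 * before                   # same bills, half the coins
```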

This increase in mining costs will likely lead to a consolidation of the mining industry, with smaller, less efficient miners being forced out of the market. However, as the difficulty of mining adjusts to the reduced hash rate, the remaining miners will become more profitable, potentially leading to a more stable and secure network.

Conclusion

As the crypto world counts down the days to the 2024 Bitcoin Halving, it’s clear that this event has the potential to be a watershed moment for the world’s first cryptocurrency. With institutional adoption at an all-time high, a dramatically reduced supply of new Bitcoins, and a range of technical upgrades in the works, Bitcoin is poised for significant growth in the post-Halving period.

While it’s impossible to predict the exact price of Bitcoin in the coming years, the historical precedent set by previous Halvings suggests that we could be on the cusp of another bull run. As always, it’s essential for investors to conduct their own research, manage risk appropriately, and stay informed about the latest developments in the ever-evolving world of cryptocurrencies.

In the meantime, sit back and count down with the entire crypto space here:

https://www.nicehash.com/countdown/btc-halving-2024-05-10-12-00


Why is Data Availability (DA) in Crypto and Blockchain Important? 

Introduction

The digital world is driven by data, whether in Web2 or Web3, and the cryptocurrency and blockchain sector is no different. Gigantic new decentralized networks are spinning up and adding new layers of chain – and data availability is crucial to their integrity, security, and functionality. 

With the technology evolving at a breakneck pace, understanding data availability and its implications is key to understanding the future of cryptocurrency applications. New innovations, like data sharding and sampling, are making it cheaper and more effective to ensure reliable DA and data storage than ever before. And the DA space is only going to get more competitive from here on, with ‘modular’ chains like Celestia, which are divided into specific layers dedicated to specific tasks.

In this blog post, we will explore the concept of data availability, its challenges, and the innovative solutions being developed to address them.

What is Data Availability (DA)?

Data availability can be defined as the state when all transaction-related data is accessible to nodes on the blockchain network. For a trustless network to function, it is essential that nodes can verify transactions and compute the blockchain’s state independently. When block producers propose new blocks, they must make all data within the block, including transaction data, available for others to see and verify. 

It can get quite complicated. Let’s look at the top two chains in the world and how they handle DA. 
First up is Bitcoin:

Now, let’s take a look at Ethereum, and how its shard chains distribute data:

Credit: Shardeum

The Importance of Data Availability

Data availability is crucial for several reasons:

1. Immutability: once data is recorded by all nodes, it is challenging to alter, ensuring the integrity of the ledger.

2. Decentralization: with multiple nodes storing copies of the ledger, data availability ensures that all nodes have access to the same data, maintaining consensus and preventing central points of failure.

3. Transparency and auditability: data availability means all participants can verify data and transactions stored on the blockchain, fostering trust among users.

4. Resilience: by distributing data across multiple nodes, the blockchain becomes more resilient to attacks or failures.

Challenges of Data Availability

While data availability is essential for the proper functioning of blockchains, it also presents some challenges:

1. Reduced throughput: requiring nodes to download and verify data reduces the overall throughput of the blockchain.

2. Increased storage requirements: as the blockchain grows, the amount of data that needs to be stored by nodes increases, leading to higher hardware requirements.

3. Centralization risk: as hardware requirements increase, fewer individuals may be willing to run nodes, potentially pushing out small operations and leaving only large orgs running nodes.

Data Availability in Blockchain Scaling Solutions

To address the challenges of scaling while maintaining data availability, various solutions have been proposed:

Rollups and Data Availability

Rollups are a layer-2 scaling solution that executes transactions off-chain before compressing and posting them in batches to the base layer. There are two main types of rollups:

1. Optimistic rollups: These use economic incentives to guarantee data availability, relying on fraud-proofs to prevent invalid state transitions.

2. Zero-knowledge rollups (ZKR): ZKRs guarantee data availability using cryptographic proofs to check transactions are valid without revealing sensitive information.

Sharding and Data Availability

Sharding involves splitting the blockchain network into multiple sub-chains (shards) that operate in parallel. Ethereum’s future scalability plans include data sharding, where nodes only store data posted in their assigned shard. This reduces the storage requirements for individual nodes while maintaining data availability across the network.

Monolithic vs. Modular Blockchains for Data Availability

As networks scale, architecture is becoming increasingly important, and the arrival of modular chains like Celestia is making even Vitalik Buterin nervous.

Monolithic blockchains handle all aspects of the blockchain in one layer – including execution, consensus, and data availability. While this approach ensures high data availability, it can limit scalability and decentralization due to the increased storage and computational requirements for nodes.

In contrast, modular blockchains separate the blockchain’s functions into distinct layers, allowing for specialization and optimization. In this architecture, a dedicated data availability layer focuses on storing and providing access to data, enabling other layers to scale more efficiently.

Innovations in Data Availability

Several innovations have been proposed to improve data availability and overcome its challenges:

Data Availability Sampling (DAS)

DAS involves randomly selecting nodes to store a subset of the data on the blockchain. This reduces the resources required to store data while maintaining its availability. DAS is often used in conjunction with erasure coding, where data is encoded with redundant data pieces, and stored across different nodes, to ensure recoverability even if some data is lost.
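
To make the idea concrete, here is a toy single-erasure code using XOR parity. Production systems typically use Reed-Solomon codes, which tolerate many more losses, but the principle is the same: redundant pieces let you rebuild missing data.

```python
def xor_chunks(chunks):
    """XOR a list of equal-length byte chunks together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def encode(data: bytes, k: int = 4):
    """Split data into k equal chunks and append one XOR parity chunk."""
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [xor_chunks(chunks)]

def recover(chunks):
    """Recover the single missing chunk (marked None) by XOR-ing the rest."""
    present = [c for c in chunks if c is not None]
    return xor_chunks(present)

data = b"blockchain data!"          # 16 bytes -> four 4-byte chunks + parity
chunks = encode(data)
lost = chunks[2]
chunks[2] = None                    # simulate a node withholding one chunk
assert recover(chunks) == lost      # any single chunk remains recoverable
```

With data encoded this way, a light node can sample a few random chunks: if every sample it requests is served, it gains statistical confidence that enough of the block is available to reconstruct the whole thing.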

Data Availability Layers

In modular blockchain architectures, data availability layers are dedicated to the task of ensuring data availability. These layers can be on-chain or off-chain and focus solely on storing and providing access to data, leaving other layers free to specialize in tasks like execution or consensus.

Danksharding and Proto-Danksharding

Danksharding is a sharding architecture that utilizes binary large objects (BLOBs) for efficient data storage. It aims to increase decentralization, provide additional security, and mitigate the risks of MEV (Maximal Extractable Value). Proto-danksharding arrived on Ethereum when the recent Dencun upgrade implemented EIP-4844. It is a step on Ethereum’s roadmap towards complete danksharding, introducing a new transaction format called blob-carrying transactions.

Credit: Tesfu Assefa

Five Projects Utilizing Data Availability Solutions

Ethereum is actively implementing sharding as part of the Ethereum 2.0 roadmap. This sharding will split the Ethereum network into 64 shard chains for processing transactions and storing data, reducing storage requirements for nodes while prioritizing and ensuring data availability. However, it’s getting competition from others. Here are five leading projects that incorporate data availability solutions in their architectures.

  1. Celestia:
  • Modular blockchain architecture separating consensus, execution, and data availability layers
  • Focuses on providing a decentralized data availability layer for other blockchains to build on top of
  • Enables scalable and interoperable blockchain solutions without compromising security or decentralization
  2. Filecoin:
  • Decentralized storage network using blockchain for secure and efficient data storage
  • Utilizes a ‘Proof-of-Spacetime’ consensus mechanism to incentivize storage providers
  • Ensures data availability with cryptographic proofs and a decentralized network of storage providers, allowing users to retrieve data on-demand
  3. NEAR Protocol:
  • Scalable blockchain platform using sharding to increase throughput and decrease latency
  • Ensures data availability through erasure coding and the Doomslug consensus mechanism
  • Enables parallel processing of transactions while maintaining data availability
  • Introduces ‘chunks’ for better load balancing and adaptability
  4. EigenDA:
  • Data availability service for high-throughput decentralized operation on Ethereum rollups
  • Uses EigenLayer restaking primitives for secure and scalable infrastructure
  • Employs erasure coding and KZG commitments to efficiently store and retrieve data
  • Aims to reduce costs through a shared security model and minimized storage requirements
  5. Avail:
  • Data availability layer designed to improve scaling and interoperability in Web3
  • Serves as a base layer for trust-minimized applications and sovereign rollups
  • Utilizes validity proofs, erasure coding, and KZG polynomial commitments
  • Ensures immediate and reliable data availability for efficient rollup operation

Conclusion

Data availability is a fundamental aspect of blockchain technology. Without it, we can’t trust in the integrity, security, and functionality of decentralized systems. As the demand for scalability and efficiency grows, innovative solutions such as rollups, sharding, data availability sampling, and dedicated data availability layers are being developed to address the unique challenges associated with data availability. It is likely that the best blockchains for DA will thrive in the coming years. 


Grandmaster-Level Chess Without Search

Artificial Intelligence (AI) has been a significant player in the world of chess for decades, with systems like IBM’s Deep Blue making headlines in the late 90s for defeating world champion Garry Kasparov. More recently, AI advancements have led to the development of systems like AlphaZero and Stockfish 16, which use machine learning techniques to improve their gameplay. 

Research in this area continues apace, as exemplified by a recent paper from Google DeepMind. The DeepMind researchers have trained a transformer model with 270 million parameters using supervised learning on a dataset of 10 million chess games. Each game in the dataset was annotated with action-values provided by the powerful Stockfish 16 engine, which led to approximately 15 billion data points.

In the world of chess, a player’s skill level is often measured using the Elo rating system. An average club player might have an Elo rating of around 1500, while a world champion’s rating is typically over 2800. A Lichess blitz Elo rating of 2895, as mentioned in this paper, indicates a very high level of skill, comparable to the top human players in the world.
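
The Elo model makes these numbers concrete: a player’s expected score against an opponent follows a logistic curve in the rating difference. A quick sketch of the standard formula:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win = 1, draw = 0.5) of player A vs. player B
    under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Equal ratings give each player an expected score of 0.5, while a
# 2895-rated player is overwhelmingly favored against a 1500 club player:
print(expected_score(1500, 1500))   # 0.5
print(expected_score(2895, 1500))   # ~0.9997
```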

The model was able to achieve a Lichess blitz Elo rating of 2895 when playing against human opponents, and it was also successful in solving a series of challenging chess puzzles. Remarkably, these achievements were made without any domain-specific tweaks or explicit search algorithms.

Credit: Tesfu Assefa

In terms of performance, the model outperformed AlphaZero’s policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. The researchers found that strong chess performance only arises at sufficient scale. They also conducted an extensive series of ablations of design choices and hyperparameters to validate their results.

The researchers concluded that it is possible to distill a good approximation of Stockfish 16 into a feed-forward neural network via standard supervised learning at sufficient scale. This work contributes to the growing body of literature showing that complex and sophisticated algorithms can be distilled into feed-forward transformers. This implies a paradigm shift away from viewing large transformers as mere statistical pattern recognizers to viewing them as a powerful technique for general algorithm approximation.

The paper also discusses the limitations of the model. While the largest model achieves very good performance, it does not completely close the gap to Stockfish 16. All scaling experiments point towards closing this gap eventually with a large enough model trained on enough data. However, the current results do not allow the researchers to claim that the gap can certainly be closed.

Another limitation discussed is that the predictors see the current state but not the complete game history. This leads to some fundamental technical limitations that cannot be overcome without small domain-specific heuristics or augmenting the training data and observable information.

Finally, when using a state-value predictor to construct a policy, the researchers consider all possible subsequent states that are reachable via legal actions. This requires having a transition model T(s, a), and may be considered a version of 1-step search. While the main point is that the predictors do not explicitly search over action sequences, the researchers limit the claim of ‘without search’ to their action-value policy and behavioral cloning policy.
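
To see why a state-value predictor implies a 1-step search, here is a toy illustration on a trivial number game (not chess, and not the paper’s actual model): the policy must apply the transition model to every legal action and score each successor state.

```python
def transition(state: int, action: int) -> int:
    """Toy transition model T(s, a): the action is added to the state."""
    return state + action

def value(state: int) -> float:
    """Stand-in for a learned state-value predictor (closer to 0 is better)."""
    return -abs(state)

def policy(state: int, legal_actions):
    """1-step search: score each successor state reachable via a legal
    action, then pick the action leading to the highest-valued state."""
    return max(legal_actions, key=lambda a: value(transition(state, a)))

print(policy(3, [-1, +1]))   # -1: moving toward 0 scores higher
```

An action-value predictor, by contrast, maps (state, action) pairs to scores directly, so it can choose a move without ever consulting a transition model.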

In conclusion, the paper presents a significant advancement in the field of AI and chess, demonstrating that a complex, search-based algorithm, such as Stockfish 16, can be well approximated with a feed-forward neural network via standard supervised learning. This has implications for the broader field of AI, suggesting that complex and sophisticated algorithms can be distilled into feed-forward transformers, leading to a paradigm shift in how we view and utilize large transformers.


Deep fakes: What’s next? Anticipating new twists and turns in humanity’s oldest struggle

Fake news that the Pope endorsed Donald Trump (a story that was shared more widely than any legitimate news story that year). A fake picture of former US VP Michael Pence in his youth seemingly as a gay porn star. Fake audio of UK political leader Keir Starmer apparently viciously berating a young volunteer assistant. Another fake audio of London mayor Sadiq Khan apparently giving priority to a pro-Palestinian march over the annual Remembrance Day walk-past by military veterans. Fake videos of apparent war atrocities. Fake pornographic videos of megastar pop celebrities.

What’s next? And how much does it really matter?

Some observers declare that there’s nothing new under the sun, and that there’s no special need to anticipate worse to come. Society, they say, already knows how to deal with fake news. Fake news may be unpleasant – and it’s sometimes hilarious – but we just have to keep calm and carry on.

I strongly disagree, as I’ll explain below. I’ll review ten reasons why fake news is likely to become worse in the months ahead. Then I’ll suggest ten steps that can be taken to regain our collective sanity.

It remains to be determined whether these ten steps will be sufficient, or whether we’ll all sink into a post-truth swamp, in which sneering suspicion displaces diligent understanding, fake science displaces trustworthy science, fake journalism displaces trustworthy journalism, and fake politicians seize power and impose their dictatorial whims.

Credit: David Wood via Midjourney

Deception: the back story

It’s not flattering to say it, but we humans have been liars since before the dawn of history. And, just as important, we have been self-deceivers as well: we deceive ourselves in order to be more successful in deceiving others.

In case that idea offends you, I invite you to delve into the evidence and analysis offered in, for example:

Credit: Book publishers’ websites (links above)

We implore our children to be truthful but also guide them to know when to tell white lies – “thank you for this lovely present, it’s just what I wanted!” And the same ancient books of the Bible that command us “do not bear false witness” appear to celebrate deceit when practiced by figures such as Jacob, Rachel, Rebekah, and Tamar.

I could tell you, as well, that the ancient Greek dramatist Aeschylus, known as ‘the father of tragedy’, made this pithy observation two and a half millennia ago: “Truth is the first casualty in war”. One tragedy – war – births another – deception.

As it happens, it seems likely that this quotation is a misattribution. I’ll come back to that point later, when talking, not about deception, but about solutions to deception. But regardless of whoever first uttered that saying, we can appreciate the insight it contains. In times of bitter conflict, there are special incentives to mislead observers – about the casualties we have suffered, about the casualties we have inflicted on opposing forces, about our military plans for the future, and much more.

It’s not just war that provides an incentive to deceive. It’s the same with politics: opposing parties compete to set the narrative, and individual politicians seek to climb past each other on what Benjamin Disraeli dubbed “the greasy pole” of political intrigue. It’s the same with commerce, with companies ready to spread misleading ‘FUD’ (fear, uncertainty, and doubt) regarding the comparative strengths of various forthcoming products and services. And it’s the same in private life, as we seek to portray ourselves in a favorable light in the eyes of family and friends, hiding our physical and psychological warts.

In this sense, deception is old news. We’ve had ‘fake news’ for as long as there has been ‘news’.

It’s tempting, therefore, to yawn when people draw attention to more recent examples of fake news and deception.

But that would be a giant mistake.

It’s technology that’s making the difference. Technology ramps up the possibilities for fake news to be even more deceptive, more credible, more ubiquitous, more personal, and more effective. Led by leaps in capabilities of AI systems, technology is enabling dramatic new twists in the struggle between truth and lies. It’s becoming even harder to distinguish between trustworthy and untrustworthy information.

The joy of misinformation. What harm could it cause? (Credit: David Wood via Midjourney)

If we fail to anticipate these developments, we’re likely to succumb to new waves of deception. The consequences may be catastrophic.

But forewarned is forearmed. By drawing on insights from humanity’s better experiences, we should be able to create technologies, processes, and institutions that help us to block these oncoming waves.

Ten twists

1. Fake news at scale

If at first you fail, why not try again?

You tried to deceive your target audience, but they were not swayed. This time, they saw through your lies. Or perhaps they didn’t even pay attention.

But if trying is cheap and quick, you can try again, this time with a different false narrative, expressed in a different voice.

What’s changed is that it’s much cheaper to try again. You can take advantage of automation, always-on networks, social media, and generative AI, to create and distribute new pieces of fake news. It’s mass-production for lies.

You’re not constrained by only creating one bot on social media. You can create armies of them.

You’re not constrained by having to write text yourself, or create suitably misleading images. You can obtain good results from a few clicks of a mouse.

The result is that discussion is being flooded with deliberately false narratives.

2. Fake news that earns money

Some false narratives are designed to try to change people’s minds. They want to change voting decisions, purchasing decisions, relationship decisions, and so on.

But other false narratives have a different purpose: to earn money via advertising clicks or affiliate marketing revenue share.

Viewers are attracted to websites by content that is outrageous, inflammatory, intriguing, or funny. They spend more time on these sites to explore the other content there, enjoying being outraged, inflamed, intrigued, or simply humored. And while on these sites, they may click on other links that generate revenue for the owners of the site.

In this case, the content creators have no special interest in whether the content matches their own political or philosophical outlooks. They produce whatever earns them the most clicks. Indeed, some clickbait merchants set up websites posting contradictory stories, to catch traffic from both sides of the political spectrum.

As a sad side-effect, people’s minds become increasingly confused. Being misled by fake content, they become less able to distinguish fantasy from reality.

3. Fake news with a personal appeal

It’s not just that fake news is being created on a greater scale than ever before. It’s being created with a greater variety than ever before.

Technology makes it easier to create different variants of the same false narrative. Some variants can be sent to people who are supporters of Candidate A within Party P. A different variant can be sent to people who support Candidate B within Party P. Yet other different variants target people whose favored candidates are from Party Q, Party R, and so on.

More than that: once software has learned which kind of pretty face each person is likely to look at – or which kinds of music each person wants to listen to – these variants can easily be generated too, and directed at each target.

4. Fake news based on footprints

You might wonder: how does software know that I am likely to be distracted by particular kinds of pretty faces, or particular kinds of music?

That’s where extensive data gathering and analysis come to the fore. We are each constantly generating online footprints.

For example, Facebook notices that when it places a chess puzzle in my timeline, I tend to click on that conversation, to consider the position in more detail. Facebook observes my interest in these puzzles. Soon, more chess puzzles are being shown to me.

That particular inference is relatively straightforward. Other inferences depend on a wider review of my online activity – which posts I ‘like’, which posts I ‘hide’, and so on.

Astute robots can learn more from our footprints than we expected (Credit: David Wood via Midjourney)

The algorithms make all kinds of deductions from such reviews. They’re not always correct, not even close. But the AI systems that create personalized fake news have greater numbers of positive hits than those that don’t.

5. Fake news that builds on top of truth

The best lies mix truth with untruth. These lies are especially effective if the truth in question is one that much of society likes to suppress.

Consider a simple example. A leaked document here, a whistleblower there – a few hints suggest something fishy is going on: there is bureaucratic corruption and nepotism within a political state. Then the news-faker adds the unjustified conclusion: the government in question is irretrievably corrupt. Hence the conclusion: kick all these politicians out of power!

Again: a narrative might give a number of examples of people experiencing remission from long-standing diseases, despite forecasts from white-coated doctors that the disease was fatal. Then it adds the lie: what matters most in healthcare is your personal attitude, rather than expensive drugs that Big Pharma are trying to sell. Therefore: stop listening to your doctor, and instead purchase my course in positive thinking for $29.99 a month!

Again: members of some minorities suffered appalling abuses in trials of various medical procedures, where there was no informed consent, and where there was an apparent casual disregard for the suffering entailed. And then the lie: present-day society is incorrigibly racist and irredeemably exploitative. Therefore: it’s time to wield pitchforks!

The cleverest fake news combines this principle with the previous one. It works out our belief-systems from our online footprints – it figures out what we already suspect to be true, or hope to be true, even though the rest of society tends to think differently. Then it whips up a fake narrative from beliefs we support plus the new message it’s trying to inject into our minds.

In this way, it flatters us, in order to better mislead us.

No wonder that we often fall for that kind of deception.

6. Fake news that weaponizes friendships

Each of us is more likely to pay attention to a message if it comes from a person that we think we like – someone we perceive as one of our special friends.

If our friend is concerned about a topic, it makes us more likely to be concerned about it too – even if, previously, we might not have given that topic a second thought.

This is where the sinister power of the systems that manufacture fake news reaches higher levels. These systems invest time to create fake personas – people who we welcome as our ‘friends’ on social media.

At first, these friends say nothing out of the ordinary. We forget whether or not we met them in real life. Their names become increasingly familiar to us. We imagine we know lots about them – even though their entire backstory is fictitious.

And that’s when the poisonous messages start seeping into your conversations and then into your thoughts. And without you realizing what has happened, a fake friend has led you into a fake idea.

7. Fake news with amplification support

If we hear the same opinion from multiple sources, we may at first resist the idea, but then start to accept it.

That’s especially true if the opinion receives apparent support from seemingly credentialed experts.

Thus when some fake audio is posted to social media, other fake posts soon accompany it. “I’m an expert in audio authentication”, a bot declares. “I’ve studied the clip carefully, and I assure you it’s genuine”.

If we don’t look closely, we’ll fail to spot that the credentials are bogus, and that there’s no real-world audio expert behind these claims.

The greater the number (and the greater the variety) of the apparent endorsements, the easier it becomes for some of these fake endorsements to bypass our critical faculties and to change our minds.

8. Fake news that exploits our pride

We all like to tell ourselves: we’re not the kind of person who falls for a simple conjuring trick.

Other people – those not so smart as us, we think – might be misled by dubious claims in advertisements or social media memes. Not us!

This has been called the bias blind spot – the cognitive bias that says “other people have cognitive biases, but not me!”

But recall that our ability to deceive ourselves is key to our ability to deceive others. If we are conscious of our own lies, astute listeners will notice. That’s why our subconscious needs to mislead our conscious mind before we in turn can mislead other people.

In the same way, inflated confidence in our own reasoning and powers of observation can set us up for the biggest failures.

Couple a misplaced pride in our own critical faculties with the warm feelings that we have developed for friends (either fake online personas, as covered above, or real-world friends who have already fallen into social media rabbit holes), and we are set up to be suckered.

9. Fake news that exploits alienation

Pride isn’t the only emotion that can tempt us into the pit of fake news. Sometimes it can be a sense of grievance or of alienation that we cling to.

Unfortunately, although some aspects of the modern world feature greater human flourishing than ever before, other aspects increase the chances of people nurturing grievances:

  • The inability of large segments of the population to afford good healthcare, good education, or good accommodation
  • The constant barrage of bad news stories from media, 24 hours a day
  • A matching barrage of stories that seem to show the “elites” of society as being out-of-touch, decadent, uncaring, and frivolous, wallowing in undeserved luxury.

As a result, fake news narratives can more easily reach fertile soil – unhappy minds skip any careful assessment of the validity of the claims made.

When you’re fed up with the world, it’s easier to lead you astray (Credit: David Wood via Midjourney)

10. Fake news with a lower barrier to entry

Perhaps you’re still thinking: none of the above is truly novel.

In a way, you would be correct. In past times, clever operators with sufficient resources could devise falsehoods that misled lots of people. Traditional media – including radio and newspapers – were spreading destructive propaganda long before the birth of the Internet.

But the biggest difference, nowadays, is how easy it is for people to access the tools that can help them achieve all the effects listed above.

The barrier to entry for purveyors of far-reaching fake news is lower than ever before. This is an age of ‘malware as a service’, dark net tutorials on guerrilla information warfare, and turnkey tools and databases.

It’s an age where powerful AI systems can increasingly be deployed in service of all the above methods.

Happily, as I’ll discuss shortly, these same AI systems can provide part of the solution to the problem of ubiquitous fake news. But only part of the solution.

Interlude: a world without trust

First, a quick reminder of the bad consequences of fake news.

It’s not just that people are deceived into thinking that dangerous politicians are actually good people, and, contrariwise, that decent men and women are actually deplorable – so that electors are fooled into voting the dangerous ones into power.

It’s not just that people are deceived into hating an entire section of society, seeing everyone in that grouping as somehow subhuman.

It’s not just that people are deceived into investing their life savings into bogus schemes in which they lose everything.

It’s not just that people are deceived into rejecting the sound advice of meticulous medical researchers, and instead adopt unsafe hyped-up treatments that have fearful health consequences.

All of these examples of unsound adoption of dangerous false beliefs are, indeed, serious.

But there’s another problem. When people see that much of the public discourse is filled with untrustworthy fake news, they are prone to jump to the conclusion that all news is equally untrustworthy.

As noted by Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab,

A pernicious harm of fake news is the doubt it sows about the reliability of all news.

Thus the frequent lies and distortions of fringe news sites like InfoWars, Natural News, and Breitbart News lead many people to conclude that all media frequently publish lies. Therefore nothing should be trusted. And the phrase “mainstream media” becomes a sneer.

(They find some justification for this conclusion in the observation that all media make some mistakes from time to time. The problem, of course, is in extrapolating from individual instances of mistakes to applying hostile doubt to all news.)

Baroness Onora O’Neill of the Faculty of Philosophy at the University of Cambridge commenced her series of Reith Lectures in 2002 by quoting Confucius:

Confucius told his disciple Tsze-kung that three things are needed for government: weapons, food, and trust. If a ruler can’t hold on to all three, he should give up the weapons first and the food next. Trust should be guarded to the end: ‘without trust we cannot stand’.

Sadly, if there is no trust, we’re likely to end up being governed by the sort of regimes that are the furthest from deserving trust.

It’s as the German-born political theorist Hannah Arendt warned us in her 1951 book The Origins of Totalitarianism:

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.

However, the technologies of the 2020s put fearsome possibilities into our grasp that writers in 1951 (like Arendt) and in 2002 (like O’Neill) could hardly have imagined.

Big Brother will be watching, from every angle (Credit: David Wood via Midjourney)

In previous generations, people could keep their inner thoughts to themselves, whilst outwardly kowtowing to the totalitarian regimes in which they found themselves. But with fake news twisted in the ten ways described above, even our inner minds will be hounded and subverted. Any internal refuge of independent thinking is likely to be squelched. Unless, that is, we are wise enough to take action now to prevent that downward spiral.

Regaining trust

What can be done to re-establish trust in society?

Having anticipated, above, ten ways in which the problem of fake news is becoming worse, I now offer an equal number of possible steps forward.

1. Education, education, education

Part of growing up is to learn not to trust so-called 419 scam emails. (The number 419 refers to the section of the Nigerian Criminal Code that deals with fraud.) If someone emails us to say they are a prince of a remote country and they wish to pass their inheritance to us – provided we forward them some hard cash first – this is almost certainly too good to be true.

We also learn that seeing is not believing: our eyes can deceive us, due to optical illusions. If we see water ahead of us on a desert road, that doesn’t mean the water is there.

Similarly, we all need to learn the ways in which fake news stories can mislead us – and about the risks involved in thoughtlessly spreading such news further.

These mechanisms and risks should be covered in educational materials for people of all ages.

It’s like becoming vaccinated and developing resistance to biological pathogens. If we see at first hand the problems caused by over-credulous acceptance of false narratives, it can make us more careful on the next occasion. 

But this educational initiative needs to do more than alert people to the ways in which fake news operates. It also needs to counter the insidious view that all news is equally untrustworthy – the insidious view that there’s no such thing as an expert opinion.

This means more than teaching people the facts of science. It means teaching the methods science uses to test hypotheses, and the reasons why science assesses specific hypotheses as plausible. Finally, it means explaining the grounds for assigning a higher level of trust to specific media organizations.

That takes us to the second potential step forward.

2. Upholding trustworthy sources

Earlier, I mentioned that a quote often attributed to the fifth century BC writer Aeschylus was almost certainly not actually said by him.

What gives me confidence in that conclusion?

It’s because of the reliance I place on one online organization, namely Quote Investigator. In turn, that reliance arises from:

  • The careful way in which pages on that site reference the sources they use
  • The regular updates the site makes to its pages, as readers find additional relevant information
  • The fact that, for all the years I’ve been using that site, I can’t remember ever being misled by it
  • The lack of any profit motivation for the site
  • Its focus on a particular area of research, rather than spreading its attention to wider topics
  • Positive commendations for the site from other researchers who have gained and maintained good reputations of their own.

Other organizations have similar aspirations. Rather than “quote checking”, some of them specialize in “fact checking”.

Credit: Fact-checking websites

These sites have their critics, who make various allegations of partisan bias, overreliance on supposed experts with questionable credentials, subjective evaluations, and unclear sources of funding.

My own judgment is that these criticisms are mainly misplaced, but that constant vigilance is needed.

I’ll go further: these sites are among the most important projects taking place on the planet. To the extent that they fall short, we should all be trying to help out, rather than denigrating them.

3. Real-time fact-checking

Fact checking websites are often impressively quick in updating their pages to address new narratives. However, this still leaves a number of problems:

  • People may be swayed by a false narrative before that narrative is added to a fact-checking site
  • Even though a piece of fake news is soundly debunked on a fact-checking site, someone may not be aware of that debunking
  • Even if someone subsequently reads an article on a fact-checking site that points out the flaws of a particular false narrative, that narrative may already have rewired the person’s belief system at a subconscious level – and that rewiring may persist even after they learn the story was flawed
  • The personalization problem: false narratives tailored to individual targets won’t be picked up by centralized fact-checking sites.

AI could hold part of the answer. Imagine if our digital media systems included real-time fact-checking analyses – that’s part of the potential of AI. Such real-time notifications would catch false information before it has a chance to penetrate deeply into our minds.

Our email applications already do a version of this: flagging suspicious content. The application warns us: this email claims to come from your bank, but it probably doesn’t, so take care with it. Or: the attachment to this email purports to be a PDF, but it’s actually an executable file that will likely cause damage.

Likewise, automated real-time fact-checking could display messages on the screen, on top of the content that is being communicated to us, saying things like:

  • “The claim has been refuted”
  • “Note that the graph presented is misleading”
  • “This video has been doctored from its original version”
  • “This audio has no reliable evidence as to its authenticity”
  • “There is no indication of a cause-and-effect relationship between the facts mentioned”

In each case, ideally the warning message will contain a link to where more information can be found.
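The email-warning analogy can be made concrete. The sketch below is a toy illustration, not any real fact-checking API: it matches incoming content against a local database of already-debunked claims and returns the kind of overlay warnings listed above. The claim database, warning text, threshold, and link are all invented for the example.

```python
# Toy sketch of real-time fact-check flagging (illustrative only).
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class DebunkedClaim:
    text: str        # canonical wording of the false claim
    warning: str     # message to overlay on the screen
    more_info: str   # link to a fuller debunking

def flag_content(content: str, database: list[DebunkedClaim],
                 threshold: float = 0.75) -> list[str]:
    """Return warning messages for any debunked claims the content resembles."""
    warnings = []
    for claim in database:
        similarity = SequenceMatcher(None, content.lower(),
                                     claim.text.lower()).ratio()
        if similarity >= threshold:
            warnings.append(f"{claim.warning} (see {claim.more_info})")
    return warnings

db = [DebunkedClaim("the audio clip of the minister is genuine",
                    "This audio has no reliable evidence as to its authenticity",
                    "factcheck.example/audio-123")]
print(flag_content("The audio clip of the minister is genuine!", db))
```

A production system would need far more robust matching (paraphrase detection, not string similarity), but the shape of the interaction – content in, contextual warnings with links out – is the point.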

4. Decentralized fact-checking

The next question that arises is: how can people be confident in relying on specific real-time fact-checkers?

We can already imagine their complaints:

  • “This fact-checker is wokism gone mad”
  • “This fact-checker serves Google, not me”
  • “This fact-checker serves the government, not me”
  • “I prefer to turn off the fact-checker, to receive my news free from censorship”

There’s no one easy answer to these objections. Each step I describe in this list of ten is designed to reduce some of the apprehension.

But an important step forward would be to separate the provision of content from the fact-checking layer. The fact-checking layer, rather than being owned and operated by the commercial entity that delivers the media, would ideally transcend individual corporations. For example, it could operate akin to Wikipedia, although it would likely need more funding than Wikipedia currently receives.

Further developing this model, the fact-checking software could have various settings that users adjust, reflecting their own judgment about which independent sources should be used for cross-checking.

Maybe the task is too dangerous to leave to just one organization: then another model would involve the existence of more than one option in the fact-checking field, with users being able to select one – or a bunch – to run on their devices.
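That multi-checker model can be sketched in a few lines. In this illustration (all checker names and verdict labels are invented), the user selects which independent fact-checkers run on their device, and the verdicts are combined by simple majority:

```python
# Toy sketch of decentralized fact-checking: user-selected checkers,
# verdicts combined by majority vote (illustrative only).
from typing import Callable

Verdict = str  # "true" | "false" | "unverified"

def aggregate(claim: str, checkers: dict[str, Callable[[str], Verdict]],
              selected: list[str]) -> Verdict:
    """Combine verdicts from the user's chosen checkers by majority vote."""
    votes = [checkers[name](claim) for name in selected if name in checkers]
    if not votes:
        return "unverified"
    return max(set(votes), key=votes.count)

# Invented checkers standing in for independent fact-checking services.
checkers = {
    "wiki_style":  lambda claim: "false",
    "academic":    lambda claim: "false",
    "independent": lambda claim: "unverified",
}
print(aggregate("the moon landing was staged", checkers,
                ["wiki_style", "academic", "independent"]))
```

The design choice worth noting is that the aggregation logic lives on the user’s side, not inside any one content provider – which is exactly what defuses the “this fact-checker serves Google, not me” objection.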

5. Penalties for dangerous fakes

As well as trying to improve the identification of fake news, it’s important to change the incentives under which fake news is created and distributed. There are roles for ‘sticks’ (penalties) as well as ‘carrots’ (rewards).

Regarding the sticks, society already imposes penalties:

  • When advertisements make misleading or unfounded claims
  • When companies make misleading or unfounded claims in their financial statements
  • When people make libelous claims about each other.

Fines or other punishments could be used in cases where people knowingly distribute misleading narratives, when the consequences involve clear harm (for example, a riot).

This proposal makes some people nervous, as they see it as an intrusion on freedom of expression, or a block on satire. They fear that governments would use these punishments to clamp down on statements that are embarrassing to them.

That’s why monitoring and prosecuting such cases needs to be done independently – by a police force and judiciary that operate at arm’s length from the government of the day.

This principle of separation of powers already applies to many other legal regulations, and could surely work for policing fake news.

Relatedly, there’s a case for wider collection and publication of reliability statistics. Just as hospitals, schools, and many other parts of society have statistics published about their performance, media organizations should receive the same kind of scorecard.

In this way, it would be easy to know which media channels have a casual relationship with the truth, and which behave more cautiously. Investment funds or other sources of financing could then deny support to organizations whose trustworthiness ratings drop too low. This kind of market punishment would operate alongside the legal punishment reserved for more egregious cases.
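As a toy illustration of such a scorecard (the fields, weights, and clamping rule here are invented, not any established rating methodology), an outlet could be rated by the share of its checked stories that held up, with a penalty per retraction:

```python
# Toy "reliability scorecard" for media outlets (illustrative only).
from dataclasses import dataclass

@dataclass
class OutletRecord:
    stories_checked: int   # stories independently fact-checked
    stories_accurate: int  # of those, stories that held up
    retractions: int       # formal retractions issued

def trust_score(record: OutletRecord) -> float:
    """Score in [0, 1]: accuracy rate minus a small penalty per retraction."""
    if record.stories_checked == 0:
        return 0.0
    accuracy = record.stories_accurate / record.stories_checked
    penalty = 0.05 * record.retractions
    return max(0.0, min(1.0, accuracy - penalty))

careful = OutletRecord(stories_checked=200, stories_accurate=190, retractions=2)
casual = OutletRecord(stories_checked=200, stories_accurate=120, retractions=15)
print(round(trust_score(careful), 2))
print(round(trust_score(casual), 2))
```

Even this crude formula separates the careful outlet from the casual one – which is all a financing decision or a reader’s quick check needs.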

6. A coalition for integrity

Some of the creators of fake news won’t be deterred by threats of legal punishment. They already operate beyond the reaches of the law, in overseas jurisdictions, or anonymously and secretly.

Nevertheless, there are still points of crossover, where new content is added into media channels. It is at these points where sanctions can be applied. Media organizations that are lax in monitoring the material they receive would then become liable for damage arising.

This will be hard to apply for communications systems such as Telegram, WhatsApp, and Signal, where content is encrypted from one end of a communication to the other. In such cases, the communications company doesn’t know what is being transmitted.

Indeed, it is via such closed communications systems that fake news often spreads these days, with Telegram a particularly bad offender.

There’s a case to be made for a coalition of every organization that values truthfulness and trustworthiness over the local benefits of spreading false information.

Forming a Coalition for Integrity (Credit: David Wood via Midjourney)

People who support this ‘coalition for integrity’ would share information about:

  • Entry points used by fake news providers to try to evade detection
  • Identification of fake news providers
  • Ways in which providers of fake news are changing their methods – and how these new methods can be combated.

Regardless of differences in political or philosophical outlook among members of this coalition, they have a common interest in defending truthfulness versus deception. They should not allow their differences to hinder effective collaboration in support of that common purpose.

7. Making trust everyone’s business

In recent decades, a variety of new job titles have been created at the highest levels within companies and organizations, such as:

  • Chief Design Officer
  • Chief Innovation Officer
  • Chief Quality Officer
  • Chief Security Officer

None of these posts frees other members of the company from their responsibility for design, innovation, quality, or security. These values remain the duty of everyone in the organization as they go about their work. Nevertheless, the new ‘chief’ provides a high-level focus on the topic.

It should be the same with a new set of ‘Chief Trust Officers’. These executives would find ways to keep reminding personnel about:

  • The perils arising if the organization gains a reputation for being untrustworthy
  • Methods and procedures to follow to build and maintain a trustworthy reputation for the organization
  • Types of error that could result in dangerous false narratives being unwittingly transmitted

My assessment is that the organizations that appoint and support Chief Trust Officers (or equivalent) are the ones most likely to succeed in the turbulent times ahead.

8. Encouraging openness

To be clear, education often fails: people resist believing that they can be taken in by false information.

We like to think of ourselves as rational people, but a more accurate description is that we are a rationalizing species. We delight in finding ways to convince ourselves that it is fine to believe the things that we want to believe (even in the face of apparent evidence against these beliefs).

That’s why bombarding people with education often backfires. Rather than listening to these points, people can erect a strong shield of skepticism, as they prepare to lash out at would-be educators.

Indeed, we all know people who are remarkably clever, yet deploy their cleverness in support of profoundly unwise endeavors.

This state of affairs cannot be solved merely by pumping in more facts and arguments. Instead, different approaches are required, to encourage a greater openness of spirit.

One approach relies on the principle mentioned earlier: people pay more attention to suggestions from their close friends. The most effective warning that someone is about to fall for dangerous information therefore comes from people they already trust and respect.

Another approach is to find ways to put people in a better mood all round. When they have a compassionate, optimistic mindset, they’re more likely to listen carefully to warnings being raised – and less likely to swat away these warnings as an unwelcome annoyance.

It’s not enough to try to raise rational intelligence – rather, we must raise compassionate intelligence: an intelligence that seeks wisdom and value in interactions even with people previously regarded as a threat or enemy.

This is a different kind of education. Not an education in rationality, but rather an education in openness and compassion. It may involve music, meditation, time in nature, biofeedback, and selected mind-transforming substances. These have potential drawbacks as well as potential benefits, but since the potential upsides are so large, the options urgently need to be explored.

9. A shared positive vision

Another factor that can predispose people to openness and collaboration, over closed-mindedness and stubborn tribal loyalties, is a credible path forward to a world with profound shared benefits.

When people anticipate an ongoing struggle, with zero-sum outcomes and continual scarcity of vital resources, it makes them mentally hostile and rigid.

Indeed, if they foresee such an ongoing conflict, they’ll be inclined to highlight any available information – true or fake – that shows their presumed enemies in a bad light. What matters to them in that moment is anything that might annoy, demoralize, or inflame those enemies. They seize on fake news that does this, and that also rallies their side: the people who share their sense of alienation and frustration.

That is why the education campaign that I anticipate needs a roadmap to what I call a sustainable superabundance, in which everyone benefits. If this vision permeates both hearts and minds, it can inspire people to set and respect a higher standard of trustworthiness. Peddlers of fake news will discover, at that point, that people have lost interest in their untruths.

10. Collaborative intelligence

I do not claim that the nine steps above are likely to be sufficient to head off the coming wave of dangerous fake news.

Instead, I see them as a starting point, to at least buy us some time before the ravages of cleverer deep fakes run wild.

That extra time allows us to build a stronger collaborative intelligence, which draws on the insights and ideas of people throughout the coalition for integrity. These insights and ideas need time to be evolved and molded into practical solutions.

However, I anticipate not just a collaboration between human minds, but also a rich collaboration involving AI minds too.

A collaboration of minds – humans and AIs (Credit: David Wood via Midjourney)

Critically, AI systems aren’t just tools for ill-intentioned people to make their deep fakes more treacherous. Nor are they just the engine of real-time fact-checking, important though that is. They are also tools to help us expand our thinking in multiple dimensions. Used with care, these systems can learn about our concerns regarding the worst kinds of deep fakes, consider multiple possibilities, and then offer us new suggestions – approaches probably different from any I’ve listed above.

That would be a striking example of beneficial artificial intelligence. It would see deep fakes defeated by deep benevolence – and by a coalition that integrates the best values of humans with the best insights of AIs.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter