Bitcoin Halving 2024: Final Countdown To 19 April

Please note: This article is for educational purposes only and doesn’t constitute financial advice of any kind. Please invest responsibly. 

Intro

Tick-tock, tick-tock, one block closer with every block. Despite this weekend’s crypto jitters after Middle East tensions, everyone and their cat is now tuned in for this week’s fourth Bitcoin Halving, scheduled for 19 April 2024 at around 6pm UTC. Anticipation, nerves and speculation levels are off the charts. 

This momentous event, which occurs roughly every four years, will reduce the mining rewards from 6.25 BTC to 3.125 BTC per block, cutting the daily issuance of new Bitcoins in half. 

Bitcoin supporters are gunning for the $100,000 price milestone with laser-eyed focus. The 2024 Halving is Bitcoin’s biggest event since January’s Spot ETF approvals, and a defining moment for the entire crypto industry, fuelling high hopes that the ensuing supply pinch will kick off another crazy bull run.

What is the Bitcoin Halving?

The Bitcoin Halving is a pre-programmed event that is hardcoded into the Bitcoin protocol. It is designed to control the supply of new Bitcoins entering circulation, ensuring that the total supply will never exceed 21 million BTC. By reducing the block reward for miners every 210,000 blocks (approximately four years), the Halving helps maintain Bitcoin’s scarcity and deflationary pressure.
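
To make the schedule concrete, here is a minimal Python sketch of the issuance rules described above (the 50 BTC starting subsidy, the 210,000-block halving interval, and the roughly 144 blocks mined per day are protocol constants; the function names are our own):

```python
INITIAL_SUBSIDY = 50 * 10**8   # genesis-era reward: 50 BTC, in satoshis
HALVING_INTERVAL = 210_000     # blocks between halvings (~four years)
BLOCKS_PER_DAY = 144           # one block every ~10 minutes

def block_subsidy(height: int) -> int:
    """Mining reward in satoshis at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:         # mirrors Bitcoin Core's guard: the reward is zero by then
        return 0
    return INITIAL_SUBSIDY >> halvings  # integer halving, era by era

def total_supply_cap_btc() -> float:
    """Sum every era's issuance to approximate the 21 million cap."""
    return sum(HALVING_INTERVAL * (INITIAL_SUBSIDY >> era) for era in range(64)) / 10**8

# The 4th Halving occurs at block 840,000:
print(block_subsidy(839_999) / 10**8)                   # 6.25 BTC
print(block_subsidy(840_000) / 10**8)                   # 3.125 BTC
print(block_subsidy(840_000) * BLOCKS_PER_DAY / 10**8)  # daily issuance: 450.0 BTC
print(round(total_supply_cap_btc()))                    # 21000000 (just under, in fact)
```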

Historical Behavior of 2012, 2016, and 2020 Halvings

Credit: Werner V.

To better understand the reason for optimism surrounding the upcoming 2024 Halving, let’s take a closer look at the previous three Halving events and how they affected Bitcoin’s price, narratives, and overall market sentiment.

The 2012 Halving

The first Bitcoin Halving took place on 28 November, 2012, when the price of BTC was just $12.35. In the year leading up to the event, Bitcoin’s price had recovered from the fallout of the first Mt. Gox hack, rising from $2.55 to $12.35. Post-Halving, Bitcoin’s price surged by an astonishing 2000%, reaching $260 in April 2013. This period was characterized by growing interest from tech-savvy individuals and online communities, laying the groundwork for Bitcoin’s future growth.

The 2016 Halving: Leaving behind Mt. Gox 

By the time of the second Halving on July 9, 2016, Bitcoin had faced several challenges, including the devastating Mt. Gox hack in 2014 and a tarnished reputation due to Dark Web-related criminal activity and prosecutions such as that of Silk Road founder Ross Ulbricht. Despite these setbacks, Bitcoin’s price rose from $430 to $650 in the months leading up to the Halving. 

In the post-Halving period, Bitcoin entered a phase of runaway growth, reaching nearly $20,000 by December 2017 – a staggering 2984% increase from the Halving day price.

The 2020 Halving: Institutions arrive

The 2020 Halving occurred on 11 May, 2020, amidst the global turmoil caused by the COVID-19 pandemic. Governments worldwide pumped trillions of dollars into their economies, leading to increased interest in Bitcoin as a hedge against inflation.

In the months preceding the Halving, Bitcoin’s price recovered from a significant drop, rising from $5,000 to $8,600. Post-Halving, Bitcoin’s price rallied to an all-time high of approximately $64,000 in April 2021, driven by a surge in institutional adoption and growing interest in decentralized finance (DeFi Summer), and those silly crypto jpegs known as NFTs.

Bitcoin Halving: A Three-Act Play (Hype, Disillusion and Accumulation)

According to Galaxy Research, Bitcoin’s halving events have historically unfolded in three distinct phases, each characterized by unique market dynamics and investor sentiment. 

  • Hype: The first act, dubbed the ‘Hype Phase’, sets the stage with a surge in prices leading up to the halving. Excitement and anticipation build as market participants speculate on the potential impact of the reduced supply.
  • Disillusionment: As the curtain rises on the second act, known as the ‘Disillusionment Phase’, the market wakes up to a post-halving hangover. Prices dip or go sideways, leaving some investors questioning the immediate effects of the event. However, this act is merely an intermission, setting the stage for the grand finale.
  • Accumulation: The third and final act, the ‘Accumulation Phase’, is where the magic happens. Prices recover and embark on a sustained upward trajectory, rewarding patient investors who held through the previous two phases and market participants who recognize the long-term implications of the halving and the growing maturity of the ecosystem.

Opinions on prices after the 2024 Halving

Uber-bullish Bitcoin predictions are a satoshi a dozen right now. 

Michael Novogratz, CEO of Galaxy Digital, Morgan Creek CEO Mark Yusko, and analyst Tom Lee have all predicted that Bitcoin’s price will hit $150,000, while Skybridge founder Anthony Scaramucci thinks Bitcoin will hit at least $170,000 in the 18 months after the Halving. Additionally, billionaire investor Tim Draper has predicted that Bitcoin will reach $250,000 in 2024, and Cathie Wood’s ARK Invest has projected that Bitcoin could surpass $1 million in the long term.

Other notable predictions include those of PlanB, a prominent pseudonymous Bitcoin analyst who regularly shares price analyses and predictions on Twitter, with targets ranging from $100,000 to $1 million. Fred Thiel, Chairman and CEO of Marathon Digital Holdings, also forecasts Bitcoin reaching $120,000 post-Halving.

Credit: Tesfu Assefa

What to Expect After the 2024 Bitcoin Halving

As Bitcoin’s daily issuance gets slashed from 900 BTC to 450 BTC, the effective cost of mining each new Bitcoin will roughly double overnight and keep climbing in the years ahead. Several factors are expected to contribute to Bitcoin’s potential price appreciation, including increased institutional adoption, growing interest from younger generations, and the reduced supply of new Bitcoins entering circulation. Additionally, the launch of Bitcoin ETFs and the continued development of Bitcoin’s core infrastructure, such as the Lightning Network and the Taproot upgrade, are expected to further bolster Bitcoin’s growth.

All this is expected to lift other crypto networks such as Ethereum, Cardano and Solana, along with their AI cryptocurrencies, memecoins and DePIN projects: Bitcoin’s rising tide has historically lifted all boats. 

The Impact of Bitcoin Spot ETFs

One of the most significant developments in the lead-up to the 2024 Halving has been the introduction of Bitcoin Spot ETFs. These investment vehicles hold actual Bitcoins rather than futures contracts, and provide institutional investors with a way to enter the crypto market. With major players like BlackRock and Fidelity now holding hundreds of thousands of Bitcoins in their ETFs, the institutional demand for Bitcoin is stronger than ever.

The Cost of Bitcoin Mining After the Halving

While the 2024 Halving is expected to have a positive impact on Bitcoin’s price, it will also present challenges for miners. As the block reward is reduced, the cost of mining new Bitcoins will effectively double. Some analysts, such as CryptoQuant CEO Ki Young Ju, predict that mining costs could rise from $40,000 to $80,000 per BTC for miners using the popular Antminer S19 XP.
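
The doubling is mechanical: the same electricity now buys half as many new coins. A back-of-the-envelope sketch makes this visible (the rig specs, power price, and network hash rate below are illustrative assumptions, not CryptoQuant’s figures):

```python
# Illustrative assumptions: an S19 XP-class rig hashing 140 TH/s at 3.0 kW,
# electricity at $0.06/kWh, and a network hash rate of 600 EH/s.
RIG_HASHRATE_THS = 140.0
RIG_POWER_KW = 3.0
POWER_PRICE_PER_KWH = 0.06
NETWORK_HASHRATE_THS = 600_000_000.0  # 600 EH/s expressed in TH/s
BLOCKS_PER_DAY = 144

def electricity_cost_per_btc(block_reward_btc: float) -> float:
    """Daily electricity spend divided by the rig's expected daily BTC output."""
    daily_cost = RIG_POWER_KW * 24 * POWER_PRICE_PER_KWH
    rig_share = RIG_HASHRATE_THS / NETWORK_HASHRATE_THS   # fraction of blocks won
    daily_btc = BLOCKS_PER_DAY * block_reward_btc * rig_share
    return daily_cost / daily_btc

before = electricity_cost_per_btc(6.25)   # pre-halving block reward
after = electricity_cost_per_btc(3.125)   # post-halving block reward
print(round(before), round(after), round(after / before, 2))  # cost ratio: 2.0
```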

This increase in mining costs will likely lead to a consolidation of the mining industry, with smaller, less efficient miners being forced out of the market. However, as the difficulty of mining adjusts to the reduced hash rate, the remaining miners will become more profitable, potentially leading to a more stable and secure network.

Conclusion

As the crypto world counts down the days to the 2024 Bitcoin Halving, it’s clear that this event has the potential to be a watershed moment for the world’s first cryptocurrency. With institutional adoption at an all-time high, a dramatically reduced supply of new Bitcoins, and a range of technical upgrades in the works, Bitcoin is poised for significant growth in the post-Halving period.

While it’s impossible to predict the exact price of Bitcoin in the coming years, the historical precedent set by previous Halvings suggests that we could be on the cusp of another bull run. As always, it’s essential for investors to conduct their own research, manage risk appropriately, and stay informed about the latest developments in the ever-evolving world of cryptocurrencies.

In the meantime, sit back and count down with the entire crypto space here:

https://www.nicehash.com/countdown/btc-halving-2024-05-10-12-00

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Why is Data Availability (DA) in Crypto and Blockchain Important? 

Introduction

The digital world is driven by data, whether in Web2 or Web3, and the cryptocurrency and blockchain sector is no different. Gigantic new decentralized networks are spinning up, and new layers are being added to existing chains – and data availability is crucial to their integrity, security, and functionality. 

With the technology evolving at a breakneck pace, understanding data availability and its implications is key to understanding the future of cryptocurrency applications. Innovations like data sharding and data availability sampling are making reliable DA and data storage cheaper and more effective than ever before. And the DA space is only going to get more competitive from here on, with ‘modular’ chains like Celestia dividing the blockchain into layers dedicated to specific tasks.

In this blog post, we will explore the concept of data availability, its challenges, and the innovative solutions being developed to address them.

What is Data Availability (DA)?

Data availability can be defined as the state when all transaction-related data is accessible to nodes on the blockchain network. For a trustless network to function, it is essential that nodes can verify transactions and compute the blockchain’s state independently. When block producers propose new blocks, they must make all data within the block, including transaction data, available for others to see and verify. 

It can get quite complicated. Let’s look at the top two chains in the world and how they handle DA. 
First up is Bitcoin:

Now, let’s take a look at Ethereum, and how its shard chains distribute data:

Credit: Shardeum

The Importance of Data Availability

Data availability is crucial for several reasons:

1. Ledger integrity: once data is recorded by all nodes, it is challenging to alter, ensuring the ledger’s immutability.

2. Decentralization: with multiple nodes storing copies of the ledger, data availability ensures that all nodes have access to the same data, maintaining consensus and preventing central points of failure.

3. Transparency and auditability: data availability means all participants can verify data and transactions stored on the blockchain, fostering trust among users.

4. Resilience: by distributing data across multiple nodes, the blockchain becomes more resilient to attacks or failures.

Challenges of Data Availability

While data availability is essential for the proper functioning of blockchains, it also presents some challenges:

1. Reduced throughput: requiring nodes to download and verify data reduces the overall throughput of the blockchain.

2. Increased storage requirements: as the blockchain grows, the amount of data that needs to be stored by nodes increases, leading to higher hardware requirements.

3. Centralization risk: as hardware requirements increase, fewer individuals may be willing to run nodes, potentially pushing out small operations and leaving only large orgs running nodes.

Data Availability in Blockchain Scaling Solutions

To address the challenges of scaling while maintaining data availability, various solutions have been proposed:

Rollups and Data Availability

Rollups are a layer-2 scaling solution that executes transactions off-chain before compressing and posting them in batches to the base layer. There are two main types of rollups:

1. Optimistic rollups: these assume transactions are valid by default, relying on economic incentives (bonded operators who can be slashed) and fraud proofs submitted during a challenge window to catch invalid state transitions.

2. Zero-knowledge rollups (ZKR): ZKRs post validity proofs – succinct cryptographic proofs that every transaction in a batch is valid – allowing correctness to be verified without re-executing the transactions or revealing sensitive information.
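
To make ‘compressing and posting in batches’ concrete, here is a toy sketch of a sequencer committing a batch to the base layer. Real rollups post the full transaction data (as calldata or blobs) alongside fraud or validity proofs; everything here, from the transaction format to the Merkle construction, is simplified for illustration:

```python
import hashlib
import json

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle root: hash each leaf, then hash pairs upward to a single node."""
    nodes = [hashlib.sha256(leaf).digest() for leaf in leaves] or [b"\x00" * 32]
    while len(nodes) > 1:
        if len(nodes) % 2:                 # duplicate the last node on odd levels
            nodes.append(nodes[-1])
        nodes = [hashlib.sha256(nodes[i] + nodes[i + 1]).digest()
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

# A batch of transactions executed off-chain...
batch = [{"from": "alice", "to": "bob", "value": 5},
         {"from": "carol", "to": "dave", "value": 2}]
leaves = [json.dumps(tx, sort_keys=True).encode() for tx in batch]

# ...is summarized on-chain by one small commitment. Data availability means
# the raw batch bytes are also published, so anyone can recompute the root and
# dispute it (optimistic) or check a validity proof against it (ZK).
print(merkle_root(leaves).hex())
```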

Sharding and Data Availability

Sharding involves splitting the blockchain network into multiple sub-chains (shards) that operate in parallel. Ethereum’s future scalability plans include data sharding, where nodes only store data posted in their assigned shard. This reduces the storage requirements for individual nodes while maintaining data availability across the network.

Monolithic vs. Modular Blockchains for Data Availability

As networks scale, their architecture becomes increasingly important – and the arrival of modular chains like Celestia is making even Vitalik Buterin nervous.

Monolithic blockchains handle all aspects of the blockchain in one layer – including execution, consensus, and data availability. While this approach ensures high data availability, it can limit scalability and decentralization due to the increased storage and computational requirements for nodes.

In contrast, modular blockchains separate the blockchain’s functions into distinct layers, allowing for specialization and optimization. In this architecture, a dedicated data availability layer focuses on storing and providing access to data, enabling other layers to scale more efficiently.

Innovations in Data Availability

Several innovations have been proposed to improve data availability and overcome its challenges:

Data Availability Sampling (DAS)

DAS involves randomly selecting nodes to store a subset of the data on the blockchain. This reduces the resources required to store data while maintaining its availability. DAS is often used in conjunction with erasure coding, where data is encoded with redundant data pieces, and stored across different nodes, to ensure recoverability even if some data is lost.
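
The statistical intuition behind DAS fits in a few lines: if a block producer withholds a fraction of the chunks, a light client sampling k random chunks fails to notice only when every sample lands on an available chunk, a probability that shrinks exponentially with k. A simplified model (which ignores the erasure-coding threshold that makes real withholding attacks even harder):

```python
def detection_probability(withheld_fraction: float, samples: int) -> float:
    """Chance that at least one of `samples` uniform random chunks is withheld."""
    available = 1.0 - withheld_fraction
    return 1.0 - available ** samples

# With 25% of chunks withheld, 30 samples catch the attack ~99.98% of the time.
for k in (5, 10, 30):
    print(f"{k:>2} samples -> detection {detection_probability(0.25, k):.4%}")
```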

Data Availability Layers

In modular blockchain architectures, data availability layers are dedicated to the task of ensuring data availability. These layers can be on-chain or off-chain and focus solely on storing and providing access to data, leaving other layers free to specialize in tasks like execution or consensus.

Danksharding and Proto-Danksharding

Danksharding is a sharding architecture that utilizes binary large objects (BLOBs) for efficient data storage. It aims to increase decentralization, provide additional security, and mitigate the risks of MEV (Maximal Extractable Value). Proto-danksharding, a step on Ethereum’s roadmap towards complete danksharding, was added to Ethereum when the recent Dencun upgrade implemented EIP-4844, introducing a new transaction format called blob-carrying transactions.
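
The blob arithmetic is straightforward. Under EIP-4844, a blob is 4096 field elements of 32 bytes each (128 KiB), with a target of three and a maximum of six blobs per ~12-second slot; a quick sketch of the data-availability bandwidth that adds:

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
TARGET_BLOBS_PER_BLOCK = 3          # protocol target under EIP-4844 (max is 6)
SECONDS_PER_SLOT = 12

blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT   # 131,072 B = 128 KiB
target_bytes_per_block = TARGET_BLOBS_PER_BLOCK * blob_bytes     # 384 KiB per block
blocks_per_day = 24 * 3600 // SECONDS_PER_SLOT                   # 7,200 slots

daily_gib = target_bytes_per_block * blocks_per_day / 2**30
print(f"{blob_bytes} bytes per blob, ~{daily_gib:.2f} GiB of blob space per day")
# Note: consensus nodes prune blobs after roughly 18 days, so this is bandwidth
# for rollups to inherit Ethereum's data availability, not permanent storage.
```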

Credit: Tesfu Assefa

Five Projects Utilizing Data Availability Solutions

Ethereum is actively pursuing data sharding as part of its scaling roadmap. Earlier designs proposed splitting the network into 64 shard chains for processing transactions and storing data; the current danksharding-centered plan pursues the same goal of reducing storage requirements for nodes while prioritizing and ensuring data availability. However, Ethereum is getting competition from others. Here are a few leading projects that incorporate data availability solutions in their architectures.

  1. Celestia:
    • Modular blockchain architecture separating consensus, execution, and data availability layers
    • Focuses on providing a decentralized data availability layer for other blockchains to build on top of
    • Enables scalable and interoperable blockchain solutions without compromising security or decentralization
  2. Filecoin:
    • Decentralized storage network using blockchain for secure and efficient data storage
    • Utilizes a ‘Proof-of-Spacetime’ consensus mechanism to incentivize storage providers
    • Ensures data availability with cryptographic proofs and a decentralized network of storage providers, allowing users to retrieve data on demand
  3. NEAR Protocol:
    • Scalable blockchain platform using sharding to increase throughput and decrease latency
    • Ensures data availability through erasure coding and the Doomslug consensus mechanism
    • Enables parallel processing of transactions while maintaining data availability
    • Introduces ‘chunks’ for better load balancing and adaptability
  4. EigenDA:
    • Data availability service for high-throughput decentralized operation on Ethereum rollups
    • Uses EigenLayer restaking primitives for secure and scalable infrastructure
    • Employs erasure coding and KZG commitments to efficiently store and retrieve data
    • Aims to reduce costs through a shared security model and minimized storage requirements
  5. Avail:
    • Data availability layer designed to improve scaling and interoperability in Web3
    • Serves as a base layer for trust-minimized applications and sovereign rollups
    • Utilizes validity proofs, erasure coding, and KZG polynomial commitments
    • Ensures immediate and reliable data availability for efficient rollup operation

Conclusion

Data availability is a fundamental aspect of blockchain technology. Without it, we can’t trust in the integrity, security, and functionality of decentralized systems. As the demand for scalability and efficiency grows, innovative solutions such as rollups, sharding, data availability sampling, and dedicated data availability layers are being developed to address the unique challenges associated with data availability. It is likely that the best blockchains for DA will thrive in the coming years. 

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Grandmaster-Level Chess Without Search

Artificial Intelligence (AI) has been a significant player in the world of chess for decades, with systems like IBM’s Deep Blue making headlines in the late 90s for defeating world champion Garry Kasparov. More recently, AI advancements have led to the development of systems like AlphaZero and Stockfish 16, which use machine learning techniques to improve their gameplay. 

Research in the area continues robustly, as exemplified by a recent paper from Google DeepMind. The DeepMind researchers trained a transformer model with 270 million parameters using supervised learning on a dataset of 10 million chess games. Each game in the dataset was annotated with action-values provided by the powerful Stockfish 16 engine, yielding approximately 15 billion data points.

In the world of chess, a player’s skill level is often measured using the Elo rating system. An average club player might have an Elo rating of around 1500, while a world champion’s rating is typically over 2800. A Lichess blitz Elo rating of 2895, as mentioned in this paper, indicates a very high level of skill, comparable to the top human players in the world.

The model was able to achieve a Lichess blitz Elo rating of 2895 when playing against human opponents, and it was also successful in solving a series of challenging chess puzzles. Remarkably, these achievements were made without any domain-specific tweaks or explicit search algorithms.
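
The resulting policy is remarkably simple: score every legal move with the network’s predicted action-value and play the argmax, with no search tree at all. Here is a minimal sketch using the python-chess library, where a random stub stands in for the trained 270M-parameter transformer:

```python
import random
import chess  # pip install python-chess

def predict_action_value(fen: str, move_uci: str) -> float:
    """Placeholder for the paper's transformer, which maps a (board, move)
    pair to an estimated win probability. Random noise keeps this runnable."""
    return random.random()

def pick_move(board: chess.Board) -> chess.Move:
    """Action-value policy: argmax over legal moves, no tree search."""
    fen = board.fen()
    return max(board.legal_moves,
               key=lambda move: predict_action_value(fen, move.uci()))

board = chess.Board()
print(board.san(pick_move(board)))  # whichever opening move scores highest
```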

Credit: Tesfu Assefa

In terms of performance, the model outperformed AlphaZero’s policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. The researchers found that strong chess performance only arises at sufficient scale. They also conducted an extensive series of ablations of design choices and hyperparameters to validate their results.

The researchers concluded that it is possible to distill a good approximation of Stockfish 16 into a feed-forward neural network via standard supervised learning at sufficient scale. This work contributes to the growing body of literature showing that complex and sophisticated algorithms can be distilled into feed-forward transformers. This implies a paradigm shift away from viewing large transformers as mere statistical pattern recognizers to viewing them as a powerful technique for general algorithm approximation.

The paper also discusses the limitations of the model. While the largest model achieves very good performance, it does not completely close the gap to Stockfish 16. All scaling experiments point towards closing this gap eventually with a large enough model trained on enough data. However, the current results do not allow the researchers to claim that the gap can certainly be closed.

Another limitation discussed is that the predictors see the current state but not the complete game history. This leads to some fundamental technical limitations that cannot be overcome without small domain-specific heuristics or augmenting the training data and observable information.

Finally, when using a state-value predictor to construct a policy, the researchers consider all possible subsequent states that are reachable via legal actions. This requires having a transition model T(s, a), and may be considered a version of 1-step search. While the main point is that the predictors do not explicitly search over action sequences, the researchers limit the claim of ‘without search’ to their action-value policy and behavioral cloning policy.

In conclusion, the paper presents a significant advancement in the field of AI and chess, demonstrating that a complex, search-based algorithm, such as Stockfish 16, can be well approximated with a feed-forward neural network via standard supervised learning. This has implications for the broader field of AI, suggesting that complex and sophisticated algorithms can be distilled into feed-forward transformers, leading to a paradigm shift in how we view and utilize large transformers.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Deep fakes: What’s next? Anticipating new twists and turns in humanity’s oldest struggle

Fake news that the Pope endorsed Donald Trump (a story that was shared more widely than any legitimate news story that year). A fake picture of former US Vice-President Mike Pence in his youth, seemingly as a gay porn star. Fake audio of UK political leader Keir Starmer apparently viciously berating a young volunteer assistant. Another fake audio of London mayor Sadiq Khan apparently giving priority to a pro-Palestinian march over the annual Remembrance Day walk-past by military veterans. Fake videos of apparent war atrocities. Fake pornographic videos of megastar pop celebrities.

What’s next? And how much does it really matter?

Some observers declare that there’s nothing new under the sun, and that there’s no special need to anticipate worse to come. Society, they say, already knows how to deal with fake news. Fake news may be unpleasant – and it’s sometimes hilarious – but we just have to keep calm and carry on.

I strongly disagree, as I’ll explain below. I’ll review ten reasons why fake news is likely to become worse in the months ahead. Then I’ll suggest ten steps that can be taken to regain our collective sanity.

It remains to be determined whether these ten steps will be sufficient, or whether we’ll all sink into a post-truth swamp, in which sneering suspicion displaces diligent understanding, fake science displaces trustworthy science, fake journalism displaces trustworthy journalism, and fake politicians seize power and impose their dictatorial whims.

Credit: David Wood via Midjourney

Deception: the back story

It’s not flattering to say it, but we humans have been liars since before the dawn of history. And, just as important, we have been self-deceivers as well: we deceive ourselves in order to be more successful in deceiving others.

In case that idea offends you, I invite you to delve into the evidence and analysis offered in, for example:

Credit: Book publishers’ websites (links above)

We implore our children to be truthful but also guide them to know when to tell white lies – “thank you for this lovely present, it’s just what I wanted!” And the same ancient books of the Bible that command us “do not bear false witness” appear to celebrate deceit when practiced by figures such as Jacob, Rachel, Rebekah, and Tamar.

I could tell you, as well, that the ancient Greek dramatist Aeschylus, known as ‘the father of tragedy’, made this pithy observation two and a half millennia ago: “Truth is the first casualty in war”. One tragedy – war – births another – deception.

As it happens, it seems likely that this quotation is a misattribution. I’ll come back to that point later, when talking, not about deception, but about solutions to deception. But regardless of whoever first uttered that saying, we can appreciate the insight it contains. In times of bitter conflict, there are special incentives to mislead observers – about the casualties we have suffered, about the casualties we have inflicted on opposing forces, about our military plans for the future, and much more.

It’s not just war that provides an incentive to deceive. It’s the same with politics: opposing parties compete to set the narrative, and individual politicians seek to climb past each other on what Benjamin Disraeli dubbed “the greasy pole” of political intrigue. It’s the same with commerce, with companies ready to spread misleading ‘FUD’ (fear, uncertainty, and doubt) regarding the comparative strengths of various forthcoming products and services. And it’s the same in private life, as we seek to portray ourselves in a favorable light in the eyes of family and friends, hiding our physical and psychological warts.

In this sense, deception is old news. We’ve had ‘fake news’ for as long as there has been ‘news’.

It’s tempting, therefore, to yawn when people draw attention to more recent examples of fake news and deception.

But that would be a giant mistake.

It’s technology that’s making the difference. Technology ramps up the possibilities for fake news to be even more deceptive, more credible, more ubiquitous, more personal, and more effective. Led by leaps in capabilities of AI systems, technology is enabling dramatic new twists in the struggle between truth and lies. It’s becoming even harder to distinguish between trustworthy and untrustworthy information.

The joy of misinformation. What harm could it cause? (Credit: David Wood via Midjourney)

If we fail to anticipate these developments, we’re likely to succumb to new waves of deception. The consequences may be catastrophic.

But forewarned is forearmed. By drawing on insights from humanity’s better experiences, we should be able to create technologies, processes, and institutions that help us to block these oncoming waves.

Ten twists

1. Fake news at scale

If at first you fail, why not try again?

You tried to deceive your target audience, but they were not swayed. This time, they saw through your lies. Or perhaps they didn’t even pay attention.

But if trying is cheap and quick, you can try again, this time with a different false narrative, expressed in a different voice.

What’s changed is that it’s much cheaper to try again. You can take advantage of automation, always-on networks, social media, and generative AI, to create and distribute new pieces of fake news. It’s mass-production for lies.

You’re not constrained by only creating one bot on social media. You can create armies of them.

You’re not constrained by having to write text yourself, or create suitably misleading images. You can obtain good results from a few clicks of a mouse.

The result is that discussion is being flooded with deliberately false narratives.

2. Fake news that earns money

Some false narratives are designed to try to change people’s minds. They want to change voting decisions, purchasing decisions, relationship decisions, and so on.

But other false narratives have a different purpose: to earn money via advertising clicks or affiliate marketing revenue share.

Viewers are attracted to websites by content that is outrageous, inflammatory, intriguing, or funny. They spend more time on these sites to explore the other content there, enjoying being outraged, inflamed, intrigued, or simply humored. And while on these sites, they may click on other links that generate revenue for the owners of the site.

In this case, the content creators have no special interest in whether the content matches their own political or philosophical outlooks. They produce whatever earns them the most clicks. Indeed, some clickbait merchants set up websites posting contradictory stories, to catch traffic from both sides of the political spectrum.

As a sad side-effect, people’s minds become increasingly confused. Being misled by fake content, they become less able to distinguish fantasy from reality.

3. Fake news with a personal appeal

It’s not just that fake news is being created on a greater scale than ever before. It’s being created with a greater variety than ever before.

Technology makes it easier to create different variants of the same false narrative. Some variants can be sent to people who are supporters of Candidate A within Party P. A different variant can be sent to people who support Candidate B within Party P. Yet other different variants target people whose favored candidates are from Party Q, Party R, and so on.

More than that: once software has learned which kinds of pretty faces each person is likely to look at – or which kinds of music each person wants to listen to – these variants can easily be generated too, and directed at each target.

4. Fake news based on footprints

You might wonder: how does software know that I am likely to be distracted by particular kinds of pretty faces, or particular kinds of music?

That’s where extensive data gathering and analysis come to the fore. We are each constantly generating online footprints.

For example, Facebook notices that when it places a chess puzzle in my timeline, I tend to click on that conversation, to consider the position in more detail. Facebook observes my interest in these puzzles. Soon, more chess puzzles are being shown to me.

That particular inference is relatively straightforward. Other inferences depend on a wider review of my online activity – which posts I ‘like’, which posts I ‘hide’, and so on.

Astute robots can learn more from our footprints than we expected (Credit: David Wood via Midjourney)

The algorithms make all kinds of deductions from such reviews. They’re not always correct, not even close. But the AI systems that create personalized fake news have greater numbers of positive hits than those that don’t.

5. Fake news that builds on top of truth

The best lies mix truth with untruth. These lies are especially effective if the truth in question is one that much of society likes to suppress.

Consider a simple example. A leaked document here, a whistleblower there – a few hints suggest something fishy is going on: there is bureaucratic corruption and nepotism within a political state. Then the news-faker adds the unjustified conclusion: the government in question is irretrievably corrupt. Hence the conclusion: kick all these politicians out of power!

Again: a narrative might give a number of examples of people experiencing remission from long-standing diseases, despite forecasts from white-coated doctors that the disease was fatal. Then it adds the lie: what matters most in healthcare is your personal attitude, rather than expensive drugs that Big Pharma are trying to sell. Therefore: stop listening to your doctor, and instead purchase my course in positive thinking for $29.99 a month!

Again: members of some minorities suffered appalling abuses in trials of various medical procedures, where there was no informed consent, and where there was an apparent casual disregard for the suffering entailed. And then the lie: present-day society is incorrigibly racist and irredeemably exploitative. Therefore: it’s time to wield pitchforks!

The cleverest fake news combines this principle with the previous one. It works out our belief-systems from our online footprints – it figures out what we already suspect to be true, or hope to be true, even though the rest of society tends to think differently. Then it whips up a fake narrative from beliefs we support plus the new message it’s trying to inject into our minds.

In this way, it flatters us, in order to better mislead us.

No wonder that we often fall for that kind of deception.

6. Fake news that weaponizes friendships

Each of us is more likely to pay attention to a message if it comes from a person that we think we like – someone we perceive as one of our special friends.

If our friend is concerned about a topic, it makes us more likely to be concerned about it too – even if, previously, we might not have given that topic a second thought.

This is where the sinister power of the systems that manufacture fake news reaches higher levels. These systems invest time to create fake personas – people who we welcome as our ‘friends’ on social media.

At first, these friends say nothing out of the ordinary. We forget whether or not we met them in real life. Their names become increasingly familiar to us. We imagine we know lots about them – even though their entire backstory is fictitious.

And that’s when the poisonous messages start seeping into your conversations and then into your thoughts. And without you realizing what has happened, a fake friend has led you into a fake idea.

7. Fake news with amplification support

If we hear the same opinion from multiple sources, we may at first resist the idea, but then start to accept it.

That’s especially true if the opinion receives apparent support from apparent credentialed experts.

Thus when some fake audio is posted to social media, other fake posts soon accompany it. “I’m an expert in audio authentication”, a bot declares. “I’ve studied the clip carefully, and I assure you it’s genuine”.

If we don’t look closely, we’ll fail to spot that the credentials are bogus, and that there’s no real-world audio expert behind these claims.

The greater the number (and the greater the variety) of the apparent endorsements, the easier it becomes for some of these fake endorsements to bypass our critical faculties and to change our minds.

8. Fake news that exploits our pride

We all like to tell ourselves: we’re not the kind of person who falls for a simple conjuring trick.

Other people – those not so smart as us, we think – might be misled by dubious claims in advertisements or social media memes. Not us!

This has been called the bias blind spot – the cognitive bias that says “other people have cognitive biases, but not me!”

But recall that our ability to deceive ourselves is key to our ability to deceive others. If we are conscious of our lies, astute listeners will notice it. That’s why our subconscious needs to mislead our conscious mind before we in turn can mislead other people.

In the same way, it is an inflated self-confidence that we are good reasoners and good observers that can set us up for the biggest failures.

Couple a misplaced pride in our own critical faculties with the warm feelings that we have developed for friends (either fake online personas, as covered above, or real-world friends who have already fallen into social media rabbit holes), and we are set up to be suckered.

9. Fake news that exploits alienation

Pride isn’t the only emotion that can tempt us into the pit of fake news. Sometimes it can be a sense of grievance or of alienation that we cling to.

Unfortunately, although some aspects of the modern world feature greater human flourishing than ever before, other aspects increase the chances of people nurturing grievances:

  • The inability of large segments of the population to afford good healthcare, good education, or good accommodation
  • The constant barrage of bad news stories from media, 24 hours a day
  • A matching barrage of stories that seem to show the “elites” of society as being out-of-touch, decadent, uncaring, and frivolous, wallowing in undeserved luxury.

As a result, fake news narratives can more easily reach fertile soil – unhappy minds skip any careful assessment of the validity of the claims made.

When you’re fed up with the world, it’s easier to lead you astray (Credit: David Wood via Midjourney)

10. Fake news with a lower barrier to entry

Perhaps you’re still thinking: none of the above is truly novel.

In a way, you would be correct. In past times, clever operators with sufficient resources could devise falsehoods that misled lots of people. Traditional media – including radio and newspapers – were spreading destructive propaganda long before the birth of the Internet.

But the biggest difference, nowadays, is how easy it is for people to access the tools that can help them achieve all the effects listed above.

The barrier to entry for purveyors of far-reaching fake news is lower than ever before. This is an age of ‘malware as a service’, dark net tutorials on guerrilla information warfare, and turnkey tools and databases.

It’s an age where powerful AI systems can increasingly be deployed in service of all the above methods.

Happily, as I’ll discuss shortly, these same AI systems can provide part of the solution to the problem of ubiquitous fake news. But only part of the solution.

Interlude: a world without trust

First, a quick reminder of the bad consequences of fake news.

It’s not just that people are deceived into thinking that dangerous politicians are actually good people, and, contrariwise, that decent men and women are actually deplorable – so that electors are fooled into voting the dangerous ones into power.

It’s not just that people are deceived into hating an entire section of society, seeing everyone in that grouping as somehow subhuman.

It’s not just that people are deceived into investing their life savings into bogus schemes in which they lose everything.

It’s not just that people are deceived into rejecting the sound advice of meticulous medical researchers, and instead adopt unsafe hyped-up treatments that have fearful health consequences.

All of these examples of unsound adoption of dangerous false beliefs are, indeed, serious.

But there’s another problem. When people see that much of the public discourse is filled with untrustworthy fake news, they are prone to jump to the conclusion that all news is equally untrustworthy.

As noted by Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab,

A pernicious harm of fake news is the doubt it sows about the reliability of all news.

Thus the frequent lies and distortions of fringe news sites like InfoWars, Natural News, and Breitbart News lead many people to conclude that all media frequently publish lies. Therefore nothing should be trusted. And the phrase “mainstream media” becomes a sneer.

(They find some justification for this conclusion in the observation that all media make some mistakes from time to time. The problem, of course, is in extrapolating from individual instances of mistakes to applying hostile doubt to all news.)

Baroness Onora O’Neill of the Faculty of Philosophy at the University of Cambridge commenced her series of Reith Lectures in 2002 by quoting Confucius:

Confucius told his disciple Tsze-kung that three things are needed for government: weapons, food, and trust. If a ruler can’t hold on to all three, he should give up the weapons first and the food next. Trust should be guarded to the end: ‘without trust we cannot stand’.

Sadly, if there is no trust, we’re likely to end up being governed by the sort of regimes that are the furthest from deserving trust.

It’s as the German-born political theorist Hannah Arendt warned us in her 1951 book The Origins of Totalitarianism:

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, in other words, the reality of experience, and the distinction between true and false… people for whom those distinctions no longer exist.

However, the technologies of the 2020s put fearsome possibilities into our grasp that writers in 1951 (like Arendt) and in 2002 (like O’Neill) could hardly have imagined.

Big Brother will be watching, from every angle (Credit: David Wood via Midjourney)

In previous generations, people could keep their inner thoughts to themselves, whilst outwardly kowtowing to the totalitarian regimes in which they found themselves. But with ten-fold twisted fake news, even our inner minds will be hounded and subverted. Any internal refuge of independent thinking is likely to be squelched. Unless, that is, we are wise enough to take action now to prevent that downward spiral.

Regaining trust

What can be done to re-establish trust in society?

Having anticipated, above, ten ways in which the problem of fake news is becoming worse, I now offer an equal number of possible steps forward.

1. Education, education, education

Part of growing up is to learn not to trust so-called 419 scam emails. (The number 419 refers to the section of the Nigerian Criminal Code that deals with fraud.) If someone emails us to say they are a prince of a remote country and they wish to pass their inheritance to us – provided we forward them some hard cash first – this is almost certainly too good to be true.

We also learn that seeing is not believing: our eyes can deceive us, due to optical illusions. If we see water ahead of us on a desert road, that doesn’t mean the water is there.

Similarly, we all need to learn the ways in which fake news stories can mislead us – and about the risks involved in thoughtlessly spreading such news further.

These mechanisms and risks should be covered in educational materials for people of all ages.

It’s like becoming vaccinated and developing resistance to biological pathogens. If we see at first hand the problems caused by over-credulous acceptance of false narratives, it can make us more careful on the next occasion. 

But this educational initiative needs to do more than alert people to the ways in which fake news operates. It also needs to counter the insidious view that all news is equally untrustworthy – the insidious view that there’s no such thing as an expert opinion.

This means more than teaching people the facts of science. It means teaching people the methods used by science to test hypotheses, the reasons why science assesses various specific hypotheses as being plausible. Finally, it means teaching people, “here are the reasons to assign a higher level of trust to specific media organizations”.

That takes us to the second potential step forward.

2. Upholding trustworthy sources

Earlier, I mentioned that a quote often attributed to the fifth century BC writer Aeschylus was almost certainly not actually said by him.

What gives me confidence in that conclusion?

It’s because of the reliance I place on one online organization, namely Quote Investigator. In turn, that reliance arises from:

  • The careful way in which pages on that site reference the sources they use
  • The regular updates the site makes to its pages, as readers find additional relevant information
  • The fact that, for all the years I’ve been using that site, I can’t remember ever being misled by it
  • The lack of any profit motivation for the site
  • Its focus on a particular area of research, rather than spreading its attention to wider topics
  • Positive commendations for the site from other researchers that have gained and maintained a good reputation.

Other organizations have similar aspirations. Rather than “quote checking”, some of them specialize in “fact checking”. Examples include:

Credit: Fact-checking websites (links above)

These sites have their critics, who make various allegations of partisan bias, overreliance on supposed experts with questionable credentials, subjective evaluations, and unclear sources of funding.

My own judgment is that these criticisms are mainly misplaced, but that constant vigilance is needed.

I’ll go further: these sites are among the most important projects taking place on the planet. To the extent that they fall short, we should all be trying to help out, rather than denigrating them.

3. Real-time fact-checking

Fact checking websites are often impressively quick in updating their pages to address new narratives. However, this still leaves a number of problems:

  • People may be swayed by a false narrative before that narrative is added to a fact-checking site
  • Even though a piece of fake news is soundly debunked on a fact-checking site, someone may not be aware of that debunking
  • Even if someone subsequently reads an article on a fact-checking site that points out the flaws of a particular false narrative, that narrative may already have caused a rewiring of the person’s belief systems at a subconscious level – and that rewiring may persist even though the person learns about the flaws in the story that triggered these subconscious changes
  • The personalization problem: false narratives tailored to individual targets won’t be picked up by centralized fact-checking sites.

AI could hold part of the answer. Imagine if our digital media systems included real-time fact-checking analyses. That’s part of the potential of AI systems. These real-time notifications would catch the false information before it has a chance to penetrate deeply into our brain.

Our email applications already do a version of this: flagging suspicious content. The application warns us: this email claims to come from your bank, but it probably doesn’t, so take care with it. Or: the attachment to this email purports to be a PDF, but it’s actually an executable file that will likely cause damage.

Likewise, automated real-time fact-checking could display messages on the screen, on top of the content that is being communicated to us, saying things like:

  • “The claim has been refuted”
  • “Note that the graph presented is misleading”
  • “This video has been doctored from its original version”
  • “This audio has no reliable evidence as to its authenticity”
  • “There is no indication of a cause-and-effect relationship between the facts mentioned”

In each case, ideally the warning message will contain a link to where more information can be found.
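
As a thought experiment, the plumbing for such overlays is not exotic, even if the hard part – reliable claim matching – very much is. A deliberately naive sketch follows; the claim database, the substring matching, and the example URLs are all hypothetical:

```python
# Hypothetical feed of debunked claims, as syndicated by a fact-checking layer.
# A real system would match paraphrases with NLP, not exact substrings.
DEBUNKED = {
    "the graph shows crime tripled": (
        "Note that the graph presented is misleading",
        "https://example.org/checks/crime-graph"),
    "audio of the mayor": (
        "This audio has no reliable evidence as to its authenticity",
        "https://example.org/checks/mayor-audio"),
}

def annotate(post_text: str) -> list[tuple[str, str]]:
    """Return (warning, link-to-more-information) pairs to overlay on a post."""
    text = post_text.lower()
    return [verdict for claim, verdict in DEBUNKED.items() if claim in text]

for warning, link in annotate("Shocking! This audio of the mayor says it all."):
    print(f"WARNING: {warning} - more at {link}")
```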

4. Decentralized fact-checking

The next question that arises is: how can people be confident in relying on specific real-time fact-checkers?

We can already imagine their complaints:

  • “This fact-checker is wokism gone mad”
  • “This fact-checker serves Google, not me”
  • “This fact-checker serves the government, not me”
  • “I prefer to turn off the fact-checker, to receive my news free from censorship”

There’s no one easy answer to these objections. Each step I describe in this list of ten is designed to reduce some of the apprehension.

But an important step forward would be to separate the provision of content from the fact-checking layer. The fact-checking layer, rather than being owned and operated by the commercial entity that delivers the media, would ideally transcend individual corporations. For example, it could operate akin to Wikipedia, although it would likely need more funding than Wikipedia currently receives.

Further developing this model, the fact-checking software could have various settings that users adjust, reflecting their own judgment about which independent sources should be used for cross-checking.

Maybe the task is too dangerous to leave to just one organization: then another model would involve the existence of more than one option in the fact-checking field, with users being able to select one – or a bunch – to run on their devices.

5. Penalties for dangerous fakes

As well as trying to improve the identification of fake news, it’s important to change the incentives under which fake news is created and distributed. There are roles for ‘sticks’ (penalties) as well as ‘carrots’ (rewards).

Regarding penalties, society already imposes penalties:

  • When advertisements make misleading or unfounded claims
  • When companies make misleading or unfounded claims in their financial statements
  • When people make libelous claims about each other.

Fines or other punishments could be used in cases where people knowingly distribute misleading narratives, when the consequences involve clear harm (for example, a riot).

This proposal makes some people nervous, as they see it as an intrusion on freedom of expression, or a block on satire. They fear that governments would use these punishments to clamp down on statements that are embarrassing to them.

That’s why monitoring and prosecuting such cases needs to be done independently – by a police force and judiciary that operate at arm’s length from the government of the day.

This principle of separation of powers already applies to many other legal regulations, and could surely work for policing fake news.

Relatedly, there’s a case for wider collection and publication of reliability statistics. Just as hospitals, schools, and many other parts of society have statistics published about their performance, media organizations should receive the same scorecard.

In this way, it would be easy to know which media channels have a casual relationship with the truth, and which behave more cautiously. In this way, investment funds or other sources of financing could deny support to organizations whose trustworthiness ratings drop too low. This kind of market punishment would operate alongside the legal punishment that applies to more egregious cases.

6. A coalition for integrity

Some of the creators of fake news won’t be deterred by threats of legal punishment. They already operate beyond the reaches of the law, in overseas jurisdictions, or anonymously and secretly.

Nevertheless, there are still points of crossover, where new content is added into media channels. It is at these points where sanctions can be applied. Media organizations that are lax in monitoring the material they receive would then become liable for damage arising.

This will be hard to apply for communications systems such as Telegram, WhatsApp, and Signal, where content is encrypted from one end of a communication to the other. In such cases, the communications company doesn’t know what is being transmitted.

Indeed, it is via such closed communications systems that fake news often spreads these days, with Telegram a particularly bad offender.

There’s a case to be made for a coalition of every organization that values truthfulness and trustworthiness over the local benefits of spreading false information.

Forming a Coalition for Integrity (Credit: David Wood via Midjourney)

People who support this ‘coalition for integrity’ would share information about:

  • Entry points used by fake news providers to try to evade detection
  • Identification of fake news providers
  • Ways in which providers of fake news are changing their methods – and how these new methods can be combated.

Regardless of differences in political or philosophical outlook among members of this coalition, they have a common interest in defending truthfulness versus deception. They should not allow their differences to hinder effective collaboration in support of that common purpose.

7. Making trust everyone’s business

In recent decades, a variety of new job titles have been created at the highest levels within companies and organizations, such as:

  • Chief Design Officer
  • Chief Innovation Officer
  • Chief Quality Officer
  • Chief Security Officer

None of these posts free other members of the company from their responsibility for design, innovation, quality or security. These values are universal to everyone in the organization as they go about their duties. Nevertheless, the new ‘chief’ provides a high-level focus on the topic.

It should be the same with a new set of ‘Chief Trust Officers’. These executives would find ways to keep reminding personnel about:

  • The perils arising if the organization gains a reputation for being untrustworthy
  • Methods and procedures to follow to build and maintain a trustworthy reputation for the organization
  • Types of error that could result in dangerous false narratives being unwittingly transmitted

My assessment is that the organizations who appoint and support Chief Trust Officers (or equivalent) are the ones most likely to succeed in the turbulent times ahead.

8. Encouraging openness

To be clear, education often fails: people resist believing that they can be taken in by false information.

We like to think of ourselves as rational people, but a more accurate description is that we are a rationalizing species. We delight in finding ways to convince ourselves that it is fine to believe the things that we want to believe (even in the face of apparent evidence against these beliefs).

That’s why bombarding people with education often backfires. Rather than listening to these points, people can erect a strong shield of skepticism, as they prepare to lash out at would-be educators.

Indeed, we all know people who are remarkably clever, but they deploy their cleverness in support of profoundly unwise endeavors.

This state of affairs cannot be solved merely by pumping in more facts and arguments. Instead, different approaches are required, to encourage a greater openness of spirit.

One approach relies on the principle mentioned earlier, in which people pay more attention to suggestions from their close friends. Therefore, the best way to warn people they are about to fall for dangerous information is for them to be warned by people they already trust and respect.

Another approach is to find ways to put people in a better mood all round. When they have a compassionate, optimistic mindset, they’re more likely to listen carefully to warnings being raised – and less likely to swat away these warnings as an unwelcome annoyance.

It’s not enough to try to raise rational intelligence – rather, we must raise compassionate intelligence: an intelligence that seeks wisdom and value in interactions even with people previously regarded as a threat or enemy.

This is a different kind of education. Not an education in rationality, but rather an education in openness and compassion. It may involve music, meditation, spending time in nature, biofeedback, and selected mind-transforming substances. Of course, these have potential drawbacks as well as potential benefits, but since the upsides are so high, options need to be urgently explored.

9. A shared positive vision

Another factor that can predispose people to openness and collaboration, over closed-mindedness and stubborn tribal loyalties, is a credible path forward to a world with profound shared benefits.

When people anticipate an ongoing struggle, with zero-sum outcomes and continual scarcity of vital resources, it makes them mentally hostile and rigid.

Indeed, if they foresee such an ongoing conflict, they’ll be inclined to highlight any available information – true or fake – that shows their presumed enemies in a bad light. What matters to them in that moment is anything that might annoy, demoralize, or inflame these presumed enemies. They seize on fake news that does this, and also brings together their side: the set of people who share their sense of alienation and frustration with their enemies.

That is why the education campaign that I anticipate needs a roadmap to what I call a sustainable superabundance, in which everyone benefits. If this vision permeates both hearts and minds, it can inspire people to set and respect a higher standard of trustworthiness. Peddlers of fake news will discover, at that point, that people have lost interest in their untruths.

10. Collaborative intelligence

I do not claim that the nine steps above are likely to be sufficient to head off the coming wave of dangerous fake news.

Instead, I see them as a starting point, to at least buy us some time before the ravages of cleverer deep fakes run wild.

That extra time allows us to build a stronger collaborative intelligence, which draws on the insights and ideas of people throughout the coalition for integrity. These insights and ideas need time to be evolved and molded into practical solutions.

However, I anticipate not just a collaboration between human minds, but also a rich collaboration involving AI minds too.

A collaboration of minds – humans and AIs (Credit: David Wood via Midjourney)

Critically, AI systems aren’t just for ill-intentioned people to use to make their deep fakes more treacherous. Nor are they just something that can power real-time fact-checking, important though that is. Instead, they are tools to help us expand our thinking in multiple dimensions. When we use them with care, these systems can learn about our concerns regarding worse cases of deep fakes. They can consider multiple possibilities. Then they can offer us new suggestions to consider – ways probably different from any I’ve listed above.

That would be a striking example of beneficial artificial intelligence. It would see deep fakes defeated by deep benevolence – and by a coalition that integrates the best values of humans with the best insights of AIs.

The Biggest Crypto Narratives of Q2 2024

As we enter the second quarter of 2024, the cryptocurrency market is gearing up for what is expected to be an explosive phase of the 2024 bull run.

The narratives that drove the market in the previous months have evolved, and new trends are emerging, which will force crypto holders to take a long hard look at their portfolios and probably make some changes.

Let’s take a look at the hottest sectors in crypto for Q2 2024 and what to expect.

Please note: nothing in this article should be considered financial advice, and any readers planning on investing should do their own research. Crypto assets are highly volatile in price and you could lose everything you invest.

Bitcoin Ecosystem

Surprisingly, some of the most cutting-edge developments are taking place on the world’s original crypto chain, which has been known to adapt to new tech with great difficulty. 

The Bitcoin ecosystem is seeing significant development after its Taproot upgrade enabled DeFi and smart-contract development. The last year has been up-only for smart-contract-powered layer-2 networks like Stacks, and saw the creation of Casey Rodarmor’s Bitcoin Ordinals protocol, which has now led to over 60 million inscriptions and thousands of Ordinals projects.

These projects are bringing new functionality to the Bitcoin blockchain, enabling new decentralized applications and unique digital assets. Rodarmor’s latest project, Runes, kicks off next month and promises to solve many of the congestion issues caused by BRC-20 tokens.
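
For the technically curious, the core of ordinal theory is simple enough to sketch in a few lines of Python. The snippet below is a minimal illustration of the numbering idea – every satoshi is assigned a sequential number in the order it is mined, so the first sat of any block follows directly from the halving schedule – and not the ord reference implementation; it also ignores final-epoch edge cases.

```python
# Minimal sketch of ordinal numbering: sats are numbered sequentially in
# the order they are created. Illustrative only, not the ord implementation.
COIN = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

def subsidy(height: int) -> int:
    """Block subsidy in satoshis at the given block height."""
    return (50 * COIN) >> (height // HALVING_INTERVAL)

def first_ordinal(height: int) -> int:
    """Ordinal number of the first sat created in the block at `height`."""
    epoch = height // HALVING_INTERVAL
    start = sum(((50 * COIN) >> e) * HALVING_INTERVAL for e in range(epoch))
    return start + subsidy(height) * (height % HALVING_INTERVAL)

print(first_ordinal(840_000))  # 1968750000000000: first sat of the 4th halving epoch
```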

Memecoins

With Bitcoin in price discovery mode post-$70k, memecoin mania has broken out all over crypto and shows no signs of abating.

Memecoins continue to command cult-like attention from crypto traders, with animal-themed projects like Pepe, Dogwifhat (WIF), Bonk, Shiba Inu, Ballz, and Brett making millionaires (and ‘brokies’) of degen traders. While these projects all lack fundamental value, they can generate significant short-term gains, and they serve as a gateway for new users entering the crypto space – users who don’t understand the difference between zero-knowledge and optimistic rollups and frankly don’t care. All they care about is whether to ‘ape’ (buy) or ‘jeet’ (sell).

Layer-1 Chains

One of the key sectors to watch is the Layer-1 blockchain space. While established players like Ethereum, Cardano, and Solana remain relevant, newer entrants such as TON (the Telegram chain), Avalanche, Arweave, Fantom, Near, Aptos, and Sui are gaining traction with various narratives.

These platforms are attracting attention due to features such as TON’s bullish tokenomics, Arweave’s decentralized storage solutions, Fantom’s high-performance blockchain, Near’s focus on AI (Near’s co-founder featured on a Jensen Huang-led panel at a major AI conference last week), and Aptos and Sui’s groundbreaking Move programming language, inherited from the defunct Facebook currency project Diem.

Layer-2 Chains

Another notable trend is the rise of Layer-2 solutions, particularly those built on Ethereum. Ethereum’s recent Dencun upgrade, which introduced proto-danksharding (EIP-4844), was a huge boon to Layer-2 solutions: it dramatically lowered their transaction fees, opening up new applications.
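
To see why fees fell, here is a minimal Python sketch of the blob pricing rule that EIP-4844 introduced, using the fake_exponential helper and constants from the EIP specification. The blob base fee starts at 1 wei and climbs exponentially only when usage runs above the per-block target, which is why blob space stays close to free while demand is low.

```python
# Sketch of EIP-4844 blob pricing (constants and helper from the EIP spec).
MIN_BLOB_BASE_FEE = 1                      # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # controls how fast the fee reacts

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Price per unit of blob gas, given the running surplus above target."""
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

print(blob_base_fee(0))  # 1 wei: blob space is nearly free at or below target demand
```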

Optimism stands out as a promising project. Coinbase’s Base blockchain is built on its OP Stack, and as Base gains adoption, Optimism is expected to see increased usage and value capture. Base’s revenue-sharing payments fund the Optimism Collective directly, and the prestige of providing the technological foundation for Coinbase’s chain raises Optimism’s profile.

Zero-knowledge rollups, supposedly technologically superior to optimistic rollups, continue to accumulate market share. With the recent launch of StarkNet, and others like Polygon zkEVM and zkSync Era building like mad (the latter expected to launch and airdrop its token later this year), the Layer-2 wars are far from over.

Credit: Tesfu Assefa

DEXs

This week it was announced that Coinbase would go to court with the SEC, while centralized exchange KuCoin was charged by US authorities with a litany of serious financial crimes, including breaking anti-money-laundering regulations. With many centralized exchanges still not running sufficient KYC regimes, expect this housecleaning to continue, clearing the way for TradFi firms to enter the market.

This means the flight to self-custodial solutions, which began with the collapse of FTX and others in 2022, continues. In particular, decentralized exchanges (DEXs) are poised for growth, with ‘perpetual DEXs’ like GMX and Aevo leading the charge. These platforms let users trade leveraged positions without centralized intermediaries.

Traditional DEXs, in particular Solana-based ones such as Jupiter, Orca, and Raydium, and Cosmos-based ones like Astroport, are also expected to cash in on the increased trading activity brought about by the memecoin and AI crypto narratives.

DePIN and AI

The AI and decentralized physical infrastructure (DePIN) sectors are closely intertwined and present significant opportunities. Projects like Bittensor and Akash Network are at the forefront of decentralized AI and cloud computing, respectively. Other notable projects in the DePIN space include AIOZ and Render, which focus on various aspects of decentralized infrastructure.

Crypto AI pioneer SingularityNET is also expected to continue its impressive growth this year as its ecosystem and marketplace expands and likely takes on some of its peers as new partners.

Crypto Gaming

Crypto gaming is another sector that could finally surge back into investor awareness after two years in the doldrums. Projects like ImmutableX, Injective, and Beam are building the infrastructure necessary to support the next generation of blockchain games, offering features such as gas-free NFT minting, custom-tailored gaming blockchains, and strong partnerships with established gaming companies.

Real-world Assets (RWAs)

If you follow any crypto discussions on social media and news outlets, you’ll know that RWAs have been touted as a massive new market for crypto, after BlackRock CEO Larry Fink espoused the benefits and future potential of the technology.

Real-world assets (RWAs) are increasingly being tokenized on the blockchain, and several projects are charging forwards in this sector. Ondo, Centrifuge, and Pendle are some of the key players in the RWA space, offering a range of financial products and services, including borrowing, lending, and yield generation.
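
Stripped to its essentials, tokenization is just a ledger that maps holders to fractional units of one underlying asset. The toy Python sketch below is purely illustrative – it models no specific protocol, and real RWA platforms add custody, compliance, and on-chain settlement on top.

```python
# Toy model of a tokenized real-world asset: a ledger of fractional units.
from dataclasses import dataclass, field

@dataclass
class TokenizedAsset:
    name: str          # e.g. a short-term Treasury bill
    total_units: int   # total fractional units that may ever exist
    ledger: dict = field(default_factory=dict)  # holder -> units

    def issue(self, holder: str, units: int) -> None:
        """Mint units to a holder, never exceeding total supply."""
        if sum(self.ledger.values()) + units > self.total_units:
            raise ValueError("cannot issue beyond total supply")
        self.ledger[holder] = self.ledger.get(holder, 0) + units

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        """Move units between holders, as an on-chain token transfer would."""
        if self.ledger.get(sender, 0) < units:
            raise ValueError("insufficient balance")
        self.ledger[sender] -= units
        self.ledger[receiver] = self.ledger.get(receiver, 0) + units

bill = TokenizedAsset("T-Bill 2024-12", total_units=1_000_000)
bill.issue("0xAlice", 250_000)
bill.transfer("0xAlice", "0xBob", 100_000)
print(bill.ledger)  # {'0xAlice': 150000, '0xBob': 100000}
```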

Cross-chain interoperability is becoming increasingly important as the blockchain ecosystem matures. Projects like THORChain are building the infrastructure necessary to facilitate seamless asset transfers across different blockchains, enabling greater liquidity and user adoption.

Conclusion

Q2 2024 will almost certainly be marked by some explosive volatility, as forces such as the Bitcoin Halving, a potential Fed reduction of interest rates, and the run-up to the 2024 US Presidential Elections continue to shape market behavior. 

It is essential for investors to remain informed and adaptable, and to maintain a strong understanding of their risk exposure and what they’re willing to lose. By understanding the key narratives and projects driving the market, investors can position themselves to capitalize on potential gains. Hold on to your hat, or sell your dogwifhat: it’s going to be a wild ride.

Revolutionizing Digital Archives: ChatGPT Integration in the Latest Version of the Internet Archive’s TV/News Archive | Mindplex Podcast – S2EP14

Coinbase Report: Are Crypto AIs a Mirage Or For Real?

Artificial Intelligence (AI) has been making rapid strides in recent years, with breakthroughs like ChatGPT, Midjourney and Claude capturing the public imagination. At the same time, the world of cryptocurrency and blockchain technology is expanding, vying for the attention of a still-young digital economy. Can these two cutting-edge fields co-exist and aid each other’s evolution? The question presents both exciting opportunities and complex challenges.

A new report by Coinbase, a leading US-based cryptocurrency exchange, which also launched the surging Base layer-2 network, delves into the current state of the crypto-AI landscape. The report highlights that while there is significant potential in the overlap between technology’s two brightest sectors, the path to widespread adoption is not that straightforward. Different sub-sectors within this intersection have vastly different opportunities and development timelines.

One key observation is that decentralization alone is not enough for an AI product to succeed in the crypto space. It must also reach feature-parity with centralized alternatives. Crypto-based AI solutions must offer compelling advantages beyond just being decentralized.

The report also suggests that the value of AI tokens may be overstated due to the current hype around AI. Many AI tokens may lack sustainable demand-side drivers in the short to medium term, despite the excitement surrounding them.

Key Trends in Crypto AI

Open Source Models Carry On

The AI sector has a thriving open-source ecosystem, with platforms like HuggingFace.co hosting a wide range of publicly-available models. This open-source culture coexists with a competitive commercial sector, ensuring that non-performant models are quickly weeded out.

Smaller AI Models Gain Traction

Meanwhile, smaller AI models are increasing in quality and cost-effectiveness. Fine-tuned open-source models can even outperform leading closed-source models on certain benchmarks. This trend, combined with the open-source culture, enables a future where performant AI models can be run locally, offering a high degree of decentralization.
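
As a simple illustration of that local-first future, running a small open model takes only a few lines with Hugging Face’s transformers library. The model name below is just one example of a small open checkpoint; any comparable model from the Hub would work the same way.

```python
# Minimal local inference with an open-source model via transformers.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model
)

result = generator("The Bitcoin halving matters because", max_new_tokens=60)
print(result[0]["generated_text"])
```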

AI Integrations Strongly Benefit Existing Platforms

The report notes that existing platforms with strong user lock-in or concrete business problems are well-positioned to “disproportionately” benefit from AI integrations. 

  • For example, GitHub Copilot’s integration with code editors enhances an already powerful developer environment. Similarly, embedding AI interfaces into tools like mail clients, spreadsheets, and CRM software is a natural use-case for AI.
  • In such scenarios, AI models augment existing platforms rather than creating entirely new ones. 
  • AI models that improve traditional business processes internally often rely on proprietary data and closed systems, making them likely to remain closed-source.

Hardware and Compute Trends

In the AI hardware and compute space, there are two distinct trends:

One is a shift of computation from training to inferencing: with more models now available, the focus moves towards querying already-trained models. This trend favors platforms that can reliably and securely run production-ready models.

A second, related trend is that the competitive landscape around hardware architecture is evolving, with new processors from Nvidia, Google, and Groq potentially shifting cost dynamics in the AI industry. Cloud providers that can quickly adapt, procure hardware at scale, and set up associated infrastructure stand to reap the rewards of these developments.

Credit: Tesfu Assefa

Crypto’s Role in the AI Pipeline: Four Stages

The Coinbase report next examines crypto’s potential impact on four stages of the AI pipeline: 

1) data collection and management

2) model training and inferencing

3) output validation

4) tracking

1) Data Collection and Management

Historical blockchain data is a rich source of training data for AI models. However, commercial models tend to use proprietary datasets, posing challenges for decentralized data marketplaces, which need to compete with both open-source data directories and corporate silos.
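
As a minimal sketch of what harvesting on-chain data looks like in practice, the snippet below pulls blocks through Ethereum’s standard eth_getBlockByNumber JSON-RPC method. The endpoint URL is a placeholder standing in for any archive node or RPC provider, and the block range is arbitrary.

```python
# Collecting simple per-block features from an Ethereum node over JSON-RPC.
import requests

RPC_URL = "https://example-rpc-endpoint.invalid"  # placeholder endpoint

def get_block(number: int) -> dict:
    """Fetch one block (with full transactions) via eth_getBlockByNumber."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(number), True],  # True = include full transaction objects
    }
    resp = requests.post(RPC_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["result"]

# Example feature: transaction count per block over an arbitrary range.
dataset = [(n, len(get_block(n)["transactions"])) for n in range(19_000_000, 19_000_010)]
print(dataset)
```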

Decentralized storage also faces hurdles in the AI industry. While decentralized storage can offer potential cost savings, it currently lacks the tooling, integrations, and predictable costs of mature cloud systems. Regulatory and technical challenges around sensitive data storage on decentralized platforms remain significant barriers.

2) Model Training and Inferencing

In the model training and inferencing stage, decentralized compute solutions like Render and NuNet aim to leverage idle computing resources to provide an alternative to centralized cloud providers. While some projects have seen increased usage, long-term success faces strong competition from established players. Technical limitations like network bandwidth constraints also pose challenges for decentralized compute networks.

3) Output Validation

Validating AI model outputs and ensuring trust is another area where crypto-based solutions are being explored. However, the complexity of model benchmarking and the increasing feasibility of running models locally on consumer hardware raise questions about the demand for trustless inferencing solutions.

4) Tracking

Finally, the importance of tracking AI-generated content and proving online identity is growing. While decentralized identifiers and on-chain data hashes can help address these issues, centralized alternatives like KYC providers and AI watermarking techniques are also being developed.
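
As a minimal sketch of the hashing half of that idea (the anchoring transaction itself is chain-specific and omitted here), committing to a piece of content means publishing a digest that anyone holding the original file can recompute and check against the on-chain record.

```python
# Compute a SHA-256 digest of a media file, suitable for on-chain anchoring.
import hashlib

def content_digest(path: str) -> str:
    """Stream the file in chunks and return its hex-encoded SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file: if even one byte changes, the digest no longer matches.
print(content_digest("press_photo.jpg"))
```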

Trading the AI Narrative

Growth in the past year (Credit: BanterBubbles.com)

Despite the challenges, AI tokens have outperformed major cryptocurrencies and AI-related equities in recent months. The report suggests that AI tokens benefit from strong performance in both the crypto market and the AI industry, leading to upside volatility even during Bitcoin drawdown periods. Hype drives demand, and investors will keep piling in for some time to come.

However, the lack of clear adoption forecasting and metrics has enabled speculative trading that may not be sustainable in the long run. Eventually, as in every crypto cycle, price and utility will need to converge, either through rising use-cases or falling prices.

Looking Ahead

The marriage of AI and crypto is still in its very early stages, and is likely to evolve rapidly as the broader AI sector develops. A decentralized AI future, as envisioned by many in the crypto industry, is not guaranteed. Crypto-based solutions are technically feasible, but to drive adoption they must provide meaningful advantages over centralized alternatives.

The AI industry itself is undergoing swift changes, facing mounting headwinds as public opinion sometimes turns against it, so it is crucial to navigate this space carefully. A deeper examination of how crypto-based solutions can offer substantially better alternatives, or at least a clear understanding of the underlying trading narrative, is essential for investors and entrepreneurs alike.

As the AI and crypto landscapes continue to search for a sustainable symbiosis, ongoing research and experimentation will be vital to unlocking the potential of this area while meeting its challenges. 

The future of decentralized AI is still being written, and it will be shaped by the ingenuity and perseverance of the innovators working at the forefront of these transformative technologies.
