Top Ten Crypto Cash Cows Analyzed

Cryptocurrency projects come and go at a dizzying rate, often because they serve no real purpose. However, some protocols have managed to establish themselves as revenue-generating powerhouses, demonstrating real-world utility, user adoption, and sustainable profits.

Traditional Finance (TradFi) firms are chomping at the bit for the newly approved Ethereum spot ETF to start trading. While the Bitcoin ETF serves as a safe-haven hedge, ETH is an asset class that Wall Street can actively engage with, and TradFi firms can use metrics like new users, fees, revenue, and total value locked (TVL) to measure its network effects. With Ethereum clearing the way, other chains and protocols can eventually follow in its wake.

Drawing on a recent study by Onchain Times and data from Token Terminal, we take a deep look at the top ten money-spinners in crypto as of mid-2024, comparing their business models, revenue streams, and key performance metrics.

1. Ethereum: The Undisputed Leader

Ethereum remains the giant of the crypto industry, generating an impressive $1.42 billion in revenue year-to-date (YTD). As the foundation for much of the decentralized finance (DeFi) ecosystem, Ethereum’s success stems from its widespread adoption and the high demand for block space on its network, as well as recent upgrades: the Merge moved it to proof-of-stake, and proto-danksharding (EIP-4844) slashed costs for layer-2 chains.

Key points

  • Highest revenue generator in the crypto space
  • Revenue primarily comes from transaction fees paid by users
  • Profitability fluctuates due to issuance rewards to validators
  • Q1 2024 was profitable, while Q2 saw a decline due to activity moving to layer-2 solutions

2. Tron: The Stablecoin Highway

Surprising many, Tron takes the second spot with approximately $852 million in revenue YTD. Tron’s success is largely attributed to its role as a major conduit for stablecoin transfers, particularly USDT in developing economies. It’s cheap, fast, and reliable.

Key points

  • Second-largest stablecoin ecosystem after Ethereum
  • Popular in countries like Argentina, Turkey, and various African nations
  • Competes with Ethereum and Solana for highest stablecoin transfer volumes

3. Maker: The OG Stablecoin Protocol

Maker, the protocol behind the DAI stablecoin, comes in third with $176 million in revenue YTD. Its business model revolves around issuing DAI against crypto collateral and charging interest on these loans.

Key points

  • Total DAI supply is currently 5.2 billion, down from its all-time high of around 10 billion
  • Diversified revenue streams, with real-world asset (RWA) holdings contributing 25.6% of total revenue
  • Estimated earnings of $73 million annually after accounting for DAI Savings Rate and operating costs

4. Solana: The Phoenix Rising (Again)

Once written off as dead, Solana has made an impressive comeback since its 2023 Breakpoint conference, ranking fourth with $135 million in annualized revenues YTD. Its resurgence is attributed to increased activity in memecoins, NFTs, and DePIN (Decentralized Physical Infrastructure Networks) projects.

Key points

  • Revenue comes from transaction fees paid to validators
  • High token issuance costs make it challenging to assess profitability
  • Success driven by technological improvements and community-driven events like the JTO airdrop

5. Ethena: The New Stablecoin Contender

Launched in January 2024, Ethena has quickly become the fifth-largest revenue-generating protocol, with $93 million in annualized revenues. It’s backed by big names like Arthur Hayes, and while its novel stablecoin design has conjured up some early Luna 2.0 fears, so far it’s doing well. Its USDe token, a synthetic dollar, has achieved a market cap of $3.6 billion in just a few months.

Key points

  • Innovative delta-hedging strategy to maintain the USDe peg (see the sketch after this list)
  • Currently the most profitable decentralized app (dApp) YTD, with $41 million in earnings
  • Business model designed to excel in bull markets, raising questions about long-term sustainability
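For readers who want to see the delta-hedging idea in miniature, here is a minimal Python sketch. It is not Ethena’s implementation, and every number in it is hypothetical; it only illustrates why holding spot ETH while shorting an equal amount of ETH via perpetual futures keeps the dollar value of the backing roughly constant, whatever the price does.

```python
# Illustrative sketch of a delta-neutral "synthetic dollar" position.
# Not Ethena's code: all positions, prices and yields below are hypothetical.

def backing_value(eth_price, spot_eth, short_perp_eth, entry_price):
    """USD value of spot ETH collateral plus a short perpetual-futures position."""
    spot_value = spot_eth * eth_price
    perp_pnl = short_perp_eth * (entry_price - eth_price)  # a short gains when price falls
    return spot_value + perp_pnl

entry = 3000.0          # ETH price when the position was opened
collateral_eth = 100.0  # spot ETH held as backing
short_eth = 100.0       # equal ETH notional sold via perpetual futures

for price in (2000.0, 3000.0, 4000.0):
    value = backing_value(price, collateral_eth, short_eth, entry)
    print(f"ETH at ${price:,.0f}: backing worth ${value:,.0f}")

# The backing stays at ~$300,000 in every scenario: the net delta is ~0.
# In a design like this, revenue comes mainly from perpetual funding payments
# and yield on the collateral, not from directional bets on price.
```

In practice the picture is messier – exchange margin, funding rates that can turn negative, and liquidity limits all bite – which is exactly why the sustainability question in the last bullet matters.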

6. Aerodrome: The Base Layer AMM

Aerodrome, an automated market maker (AMM) on the Base layer-2 network, has generated $85 million in revenue YTD. Launched in August 2023, it has quickly established itself as the top decentralized exchange (DEX) on Base.

Key points

  • Implements successful mechanisms from various DEX protocols
  • Uses vote-escrowed tokenomics to attract liquidity
  • Incorporates concentrated liquidity features to compete with Uniswap

7. Lido: The Liquid Staking Giant

Lido, a prominent liquid staking protocol, has generated $59 million in revenue year-to-date across Ethereum and Polygon proof-of-stake chains. Its popularity stems from making Ethereum staking more accessible to average users. 

Key points

  • Revenue comes from a 10% fee on users’ staking rewards
  • Profits of $22.5 million YTD after accounting for node operator payments and token incentives
  • Operates as a double-sided market, connecting ETH holders with professional node operators

8. Base: The Coinbase L2 Solution

Base, a fast-growing Ethereum layer-2 solution launched by Coinbase in Q3 2023, clocks in at $52 million in revenues YTD. As a relatively new entrant, its rapid growth is noteworthy, and its backing by Coinbase could see it reach the top of the food chain very quickly.

Key points

  • Revenue comes from user transaction fees
  • Impressive profitability with $35 million in earnings YTD
  • Benefited significantly from the implementation of EIP-4844 that reduced data availability costs

9. Uniswap Labs: The DEX Pioneer

Uniswap Labs, the company behind the popular decentralized exchange Uniswap, has generated $39.3 million in revenue YTD. Uniswap was the earliest DEX to gain real traction, and continues to play a crucial role in the DeFi ecosystem.

Key points

  • Revenue primarily comes from trading fees
  • Pioneered the automated market maker (AMM) model in DeFi
  • Continues to innovate, with features like concentrated liquidity in Uniswap V3

10. PancakeSwap: The BSC DeFi Leader

PancakeSwap, a leading DEX on the Binance Smart Chain (BSC), rounds out the top ten revenue-generators, with $36.3 million in revenue YTD. Its success highlights the growing importance of alternative blockchain ecosystems.

Key points

  • Largest DEX on Binance Smart Chain
  • Offers a wide range of DeFi services – including trading, yield farming, and NFTs
  • Lower transaction costs compared to Ethereum-based DEXs

Credit: Tesfu Assefa

Comparing the Ten Chains:

Revenue Generation (year-to-date)

  1. Ethereum: $1.42 billion
  2. Tron: $852 million
  3. Maker: $176 million
  4. Solana: $135 million
  5. Ethena: $93 million
  6. Aerodrome: $85 million
  7. Lido: $59 million
  8. Base: $52 million
  9. Uniswap Labs: $39 million
  10. PancakeSwap: $36 million

Ethereum’s revenue still dwarfs that of its competitors, emphasizing its dominant position. However, the presence of new entrants like Ethena and Base, alongside established DEXs like Uniswap and PancakeSwap, shows that revenue is chain-agnostic: value accrues wherever users and liquidity go.

Remember the importance of understanding tokenomics: Lido, for example, still trades at under $2, the same price as two years ago, despite its market cap growing 50x. When assessing a cryptocurrency, look at its fully diluted valuation (FDV) rather than just its current market cap.
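As a quick worked example of why that distinction matters, here is a small Python sketch; the figures are invented for illustration and are not Lido’s actual price or supply data.

```python
# Market cap vs. fully diluted valuation (FDV) -- illustrative numbers only.
price = 2.00                      # token price in USD (hypothetical)
circulating_supply = 900_000_000
max_supply = 10_000_000_000       # tokens that will eventually exist

market_cap = price * circulating_supply   # what most trackers display
fdv = price * max_supply                  # value if every token existed today

print(f"Market cap: ${market_cap:,.0f}")  # $1,800,000,000
print(f"FDV:        ${fdv:,.0f}")         # $20,000,000,000
```

A flat price combined with a fast-growing circulating supply means existing holders are being diluted even while the headline market cap climbs – which is why FDV is the more honest yardstick.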

Profitability

Profitability varies significantly among these protocols due to differences in their business models and running costs:

  • Ethena: leads in profitability with $41 million in earnings YTD.
  • Base: shows strong profitability with $35 million in earnings.
  • Maker: estimates $73 million in annualized earnings after costs.
  • Lido: reports $22.5 million in profits YTD.
  • Ethereum and Solana’s profitability is more complex due to token-issuance costs.
  • Profitability data for Uniswap Labs and PancakeSwap is not readily available.

Business Model Diversity

The top cash cows in crypto have diverse business models:

  • Infrastructure providers: Ethereum, Tron, Solana, Base
  • Stablecoin issuers: Maker, Ethena
  • DeFi protocols: Aerodrome, Lido, Uniswap, PancakeSwap

There is more than one way to skin a cat. Protocols in the crypto ecosystem can generate revenue in entirely different ways – from providing foundational infrastructure to offering specific financial services.

Market Position and Competition

  • Ethereum maintains its leadership position, but faces growing competition from layer-2 solutions and alternative layer-1 blockchains.
  • Tron has carved out a niche in stablecoin transfers, particularly in developing markets.
  • Maker continues to be a major player in the stablecoin space, but faces new competition from innovative protocols like Ethena.
  • Solana has shown resilience and adaptability, rebounding from near-collapse to generate healthy revenue.
  • Base and Aerodrome demonstrate the potential for new entrants to quickly gain market share with innovative features and strong backing.
  • Uniswap and PancakeSwap showcase the ongoing importance of decentralized exchanges, with each dominating their respective blockchains.

Sustainability and Future Outlook

When assessing these protocols, it’s crucial to consider the sustainability of their revenue models:

  • Ethereum’s shift to proof-of-stake and the growth of layer-2 solutions may impact its long-term revenue structure.
  • Tron’s reliance on stablecoin transfers could be vulnerable to regulatory changes or shifts in market dynamics.
  • Maker’s diversification into real-world assets may provide more stable revenue streams.
  • Ethena’s success in bull markets raises questions about its performance during market downturns.
  • Base and Aerodrome will need to maintain their innovative edge to continue attracting users and liquidity.
  • Uniswap and PancakeSwap face increasing competition from other DEXs, and may need to continue innovating to maintain their position in a competitive market.

Conclusion

The top ten cash cows in crypto are a mix of established giants, innovative newcomers, and specialized DeFi protocols. While Ethereum continues to dominate in terms of raw revenue, the success of newer protocols like Ethena and Base, as well as the continued relevance of DEXs like Uniswap and PancakeSwap, demonstrates the ongoing evolution and diversification of the crypto landscape.

The presence of both infrastructure providers and application-layer protocols in this list highlights the importance of a robust and diverse ecosystem. Investors and users should closely monitor these protocols, as their performance often serves as a barometer for broader trends in crypto. 

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

A New Approach to Formalizing Second-Order Languages in Agda

Introduction

In the realm of programming languages and formal methods, the representation and manipulation of syntax, particularly for languages with complex variable binding structures, is a significant challenge. Traditional methods often involve cumbersome and error-prone techniques, such as manually handling variable binding and substitution. However, recent advancements have introduced more robust and systematic approaches. One such advancement is presented in a recent study, which outlines a framework for automatically generating Agda implementations of second-order languages. This article explores the main concepts of this framework, its foundations, and its implications for the field.

Understanding the Framework

At its core, the framework allows users to produce implementations of second-order languages in Agda with minimal manual effort. The generated term language is explicitly represented as an inductive, intrinsically-encoded data type. This means that the structure and rules of the language are built directly into the data type definitions, ensuring that terms are always well-formed according to the language’s syntax and semantics.

This intrinsic encoding offers several advantages over traditional approaches. By embedding the rules directly into the data type definitions, the framework ensures that any term constructed is guaranteed to be syntactically correct. This reduces the likelihood of errors and simplifies the reasoning about programs and their properties.
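To make ‘intrinsically encoded’ concrete, here is a rough sketch of the idea for a simply-typed lambda calculus. It is written in Lean 4 notation rather than the Agda the framework actually generates, and it covers only one fixed language rather than arbitrary second-order signatures; it is meant purely to show how terms indexed by a context and a type cannot be ill-scoped or ill-typed in the first place.

```lean
-- A sketch of intrinsic typing, not the framework's generated Agda code.
inductive Ty : Type where
  | base : Ty
  | arr  : Ty → Ty → Ty                  -- function types

abbrev Ctx : Type := List Ty             -- a context is a list of types

-- Well-scoped, well-typed variables: a position in the context.
inductive Var : Ctx → Ty → Type where
  | here  : {Γ : Ctx} → {t : Ty} → Var (t :: Γ) t
  | there : {Γ : Ctx} → {s t : Ty} → Var Γ t → Var (s :: Γ) t

-- Terms are indexed by their context and type, so only well-formed terms exist.
inductive Tm : Ctx → Ty → Type where
  | var : {Γ : Ctx} → {t : Ty} → Var Γ t → Tm Γ t
  | lam : {Γ : Ctx} → {s t : Ty} → Tm (s :: Γ) t → Tm Γ (Ty.arr s t)
  | app : {Γ : Ctx} → {s t : Ty} → Tm Γ (Ty.arr s t) → Tm Γ s → Tm Γ t
```

An application whose argument type does not match the function’s domain is rejected by the type checker outright, before any lemma about the language has to be proved.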

The framework supports various formalised metatheoretical constructs, such as substitution for operational semantics and compositional interpretations for denotational semantics. These constructs are essential for defining how the language behaves and how terms can be transformed and interpreted. For example, substitution is crucial for operational semantics, defining how variables in a program can be replaced with their corresponding values. Compositional interpretations, on the other hand, are key for denotational semantics, allowing for a systematic interpretation of programs in a mathematical domain.

Mathematical Foundations

The framework’s strength lies in its deep mathematical foundations, specifically derived from the theory of abstract syntax. Traditional approaches often require ad-hoc definitions and lemmas to handle variable binding and substitution, leading to complex and error-prone implementations. In contrast, the presented framework leverages a systematic mathematical approach, avoiding these pitfalls.

One significant mathematical tool used in this framework is the presheaf model. This model provides a structured way to handle variable binding by treating contexts (environments in which variables exist) as functors. This approach allows for a more elegant and powerful handling of variable scopes and substitutions, which are crucial for both the correctness and usability of the language representations.

Presheaves provide a categorical framework that simplifies many of the complexities associated with variable binding. They allow for the definition of substitution and other operations in a way that is both mathematically rigorous and practically useful. By treating contexts as functors, the framework can systematically handle variable scopes and avoid common pitfalls such as variable capture and name clashes.
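Stated a little more formally (this is the standard presentation of the idea, paraphrased rather than quoted from the paper): let \mathbb{F} be the category whose objects are typing contexts \Gamma and whose morphisms \rho : \Gamma \to \Delta are renamings. A presheaf of terms, in the article’s sense, is then a functor

T : \mathbb{F} \to \mathbf{Set}, \qquad \Gamma \mapsto T(\Gamma), \qquad (\rho : \Gamma \to \Delta) \mapsto T(\rho) : T(\Gamma) \to T(\Delta),

satisfying T(\mathrm{id}_\Gamma) = \mathrm{id}_{T(\Gamma)} and T(\rho' \circ \rho) = T(\rho') \circ T(\rho). Because renaming is functorial by construction, weakening and exchange of variables come for free, and substitution can be characterised once and for all as additional algebraic structure on such functors rather than being re-proved for every new language.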

Related Work and Comparisons

The challenge of formalising and reasoning about abstract syntax has a rich history, motivated largely by the development of proof assistants. The Barendregt variable convention, which suggests renaming variables to avoid clashes, is notoriously difficult to formalise. Several approaches have been developed to tackle this issue, including higher-order abstract syntax, locally nameless representation, and intrinsically-typed encoding.

Higher-order abstract syntax, introduced by Pfenning and Elliott, represents variables and bindings using the meta-language’s own functions and variables. This approach simplifies many aspects of the implementation but can be less efficient for certain operations. For example, while higher-order abstract syntax can make it easier to define certain operations, it can also introduce inefficiencies when manipulating large terms or performing complex substitutions.

Nameless representations based on de Bruijn indices, explored in work such as Bird and Paterson’s, and the related locally nameless style (which keeps names for free variables only) aim to balance ease of use and efficiency. These approaches allow for more systematic manipulation of terms while still handling variable binding uniformly. However, they can still be error-prone, since they rely on fiddly index arithmetic.
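To see what that index arithmetic looks like in practice, here is a small Python sketch of shifting and substitution for the untyped lambda calculus with de Bruijn indices. It is the kind of boilerplate, riddled with off-by-one traps, that the framework discussed here generates and verifies automatically rather than leaving to hand-written code.

```python
# De Bruijn shifting and substitution for the untyped lambda calculus.
# Terms are tuples: ("var", i), ("lam", body), ("app", fun, arg).

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    kind = t[0]
    if kind == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if kind == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))    # one more bound variable
    if kind == "app":
        return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))
    raise ValueError(f"unknown term: {t!r}")

def subst(t, j, s):
    """Substitute s for variable j in t, avoiding capture."""
    kind = t[0]
    if kind == "var":
        return s if t[1] == j else t
    if kind == "lam":
        # Under a binder, both the target index and the substituted term shift.
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    if kind == "app":
        return ("app", subst(t[1], j, s), subst(t[2], j, s))
    raise ValueError(f"unknown term: {t!r}")

# Beta-reducing (\x. x y) y, where y is the first free variable:
body = ("app", ("var", 0), ("var", 1))   # under the binder: var 0 = x, var 1 = y
arg = ("var", 0)                         # outside the binder, y has index 0
print(shift(subst(body, 0, shift(arg, 1)), -1))
# ('app', ('var', 0), ('var', 0))  i.e. the term y y
```

Getting the +1s, -1s and cutoffs right – and then proving lemmas about how they interact – is exactly the tedium that a generated, intrinsically-scoped representation removes.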

Intrinsically-typed encoding, as employed in the discussed framework, ensures that terms are always well-typed by construction. This method avoids many of the pitfalls of other approaches, such as the complicated arithmetic involved in de Bruijn indices. By embedding the typing rules directly into the data type definitions, intrinsically-typed encoding provides strong guarantees about the correctness of terms and simplifies the reasoning about programs.

Advantages of the Presented Framework

The framework’s approach to intrinsically-typed representation offers several advantages. First, it provides strong static guarantees about the typing and scoping of terms, reducing the risk of errors. This is particularly valuable in dependently-typed proof assistants like Agda, where correctness proofs are central. By ensuring that terms are always well-typed, the framework simplifies the development and verification of programs and reduces the likelihood of errors.

Moreover, the framework includes a code-generation script that facilitates rapid prototyping and experimentation. This script allows users to quickly generate and test new language features or modifications, significantly speeding up the development process. For example, a researcher can easily define a new language construct, generate the corresponding Agda implementation, and immediately begin experimenting with its properties and behaviour.

Another noteworthy feature is the framework’s ability to incorporate generic traversals and equational logic through parameterized meta variables. This capability simplifies the manipulation and reasoning about terms, making it easier to develop complex language features and proofs. For example, the framework can automatically generate code for performing common operations, such as substitution or evaluation, and provide systematic ways to reason about their correctness.

Case Studies and Benchmarks

The framework was evaluated using the POPLmark challenge, a set of benchmarks for comparing metatheory formalisation efforts. Many existing approaches, particularly those using Coq, rely on numeric de Bruijn indices, which can be complex and error-prone. In contrast, the presented framework’s use of an intrinsically-typed, nameless representation proved more robust and easier to manage.

The POPLmark challenge includes a variety of tasks designed to test the capabilities of different formalisation frameworks. These tasks range from simple operations, such as substitution and evaluation, to more complex ones, such as proving properties about the language and its semantics. By demonstrating the framework’s ability to handle these tasks efficiently and correctly, the authors provided strong evidence of its robustness and utility.

Credit: Tesfu Assefa

Future Directions

The framework’s creators recognize that modern type theory encompasses a wide range of formal systems beyond second-order languages with algebraic types. Future work aims to extend the framework to handle these more complex systems, such as linear, dual-context, polymorphic, dependent, and polarised calculi. This expansion would further enhance the framework’s utility and applicability.

Additionally, ongoing work focuses on refining the categorical reformulation of the presheaf model to suit the practical needs of formalisation. This involves developing new notions and techniques to avoid quotienting, making the formalisation process more efficient and user-friendly. By addressing these challenges, the authors hope to further simplify the development and verification of complex language systems.

The framework’s flexibility and extensibility make it well-suited for a variety of applications. For example, it could be used to formalise and verify the semantics of new programming languages, develop tools for program analysis and optimization, or even explore new mathematical theories related to syntax and semantics. As the field continues to evolve, the framework’s capabilities will likely expand, enabling researchers to tackle increasingly complex problems.

Conclusion

The framework for generating Agda implementations of second-order languages represents a significant advancement in the field of programming languages and formal methods. By leveraging deep mathematical foundations and providing robust, systematic tools, this framework simplifies the development and verification of complex language systems. Its intrinsic typing guarantees, ease of extension, and support for rapid prototyping make it a valuable asset for researchers and developers alike.

As the field continues to evolve, the principles and techniques introduced by this framework will likely inspire further innovations, driving progress in the formalisation and implementation of increasingly sophisticated language systems. The future work outlined by the framework’s creators promises to expand its capabilities, addressing more complex and varied language constructs, and further solidifying its place as a cornerstone in the study of programming languages and formal methods.

In summary, this framework provides a powerful and flexible tool for the formalisation of second-order languages, offering significant improvements over traditional approaches. Its mathematical rigour, combined with practical tools for rapid development and experimentation, makes it an invaluable resource for both researchers and practitioners. As we look to the future, the framework’s potential for further development and application promises to drive continued progress in the field, opening up new possibilities for the study and implementation of programming languages.

Reference

Fiore, Marcelo, and Dmitrij Szamozvancev. “Formal Metatheory of Second-order Abstract Syntax.” Proceedings of the ACM on Programming Languages 6, no. POPL (January 12, 2022): 1–29. https://doi.org/10.1145/3498715.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Nada! How Cardano’s Community Thwarted DDoS Attack

Introduction

The cryptocurrency sector recently witnessed an intriguing security event in the Cardano ecosystem: a distributed denial-of-service (DDoS) attack that was swiftly mitigated. The episode showcased the robustness of the blockchain and the ingenuity of its developer community, marking a victory for the collaborative spirit that defines the crypto space.

Let’s delve into the details of this attack, its resolution, and the implications for Cardano’s future.

What is a DDoS attack?

DDoS (distributed denial-of-service) and DoS (denial-of-service) attacks have been a thorn in the side of Web2 businesses since 1996, causing billions of dollars of losses in the process. The crypto world has seen relatively few, although Electrum Wallet’s 2019 incident is a notable one: attackers built a botnet of 152,000 hijacked wallets, and in 2020 two exchanges were shut down by DDoS attacks.

In short, DDoS attacks are malicious attempts to disrupt the normal operation of a cryptocurrency network, exchange, or service. Attackers flood the target with a massive amount of internet traffic from multiple sources, overwhelming its infrastructure and making it unavailable to legitimate users. These attacks can have serious consequences: they can halt trading, block transactions, and lead to the loss of financial assets.

Cardano DDoS Attack: Play-by-Play

On 24 June 2024, the Cardano network experienced an unusual surge in activity. Fluid Token’s CTO reported that the attack commenced at block 10,487,530.

The attacker’s strategy was to flood the network with transactions, each executing 194 smart contracts. At a cost of 0.9 ADA ($0.36) per transaction, the malicious actor attempted to congest the network by filling blocks with these spam transactions.

The intent behind this DDoS attack was twofold:

  • Primarily, it aimed to disrupt the network’s normal operations by overwhelming it with traffic. 
  • There was speculation that the attacker might have also been attempting to manipulate the fee structure to enable cheaper high-value transactions, or maybe to steal staked ADA tokens.

The attack resulted in a significant increase in network load, with chain utilization reaching peaks of 72% on average and up to 93% on an hourly basis. This heightened activity raised concerns among community members and developers who noticed the network’s sluggish performance.

Community Response and Investigation

As news of the attack spread, the Cardano community quickly mobilized. Developers, led by figures such as Philip Disarro from Anastasia Labs, began investigating the attack and formulating countermeasures.

Through on-chain analysis, community members traced the origin of the attack to addresses linked to the Kraken exchange. This discovery raised questions about the attacker’s identity and the potential for legal action. The transparency of blockchain technology was invaluable in this investigation, allowing for real-time tracking of the malicious transactions.

Interestingly, despite the attacker’s efforts to congest the network, their actions inadvertently contributed over 1,000 ADA in transaction fees to the Cardano treasury and stake pool operators. This unintended consequence showed how the Cardano network’s economic model can help keep it safe.

Technical Analysis and Vulnerability Discovery

As the community rallied to understand and counter the attack, developers like Mel from Harmonic Labs began dissecting the malicious transactions. By deserializing the UPLC (Untyped Plutus Core) of the attacking scripts, they discovered a critical flaw in the attacker’s strategy.

The scripts used in the attack were designed to always return ‘true’, no matter what input they were fed. This oversight meant that the scripts could be easily manipulated, providing an opportunity for the defenders to turn the tables on the attacker.

The Counterattack: A Brilliant Solution

Philip Disarro of Anastasia Labs identified a clever way to not only stop the attack but also claim the attacker’s funds. The solution involved deregistering the stake credentials used by the attacker. This action would force the attacker to re-register their credentials at a cost of 400 ADA each time they wanted to continue the attack, significantly increasing the financial burden of their malicious activities.

Moreover, this countermeasure allowed defenders to claim the attacker’s ADA, effectively turning the attack into a donation to the Cardano ecosystem. 

As Disarro put it:

Thanks for the free money, moron.

The community quickly implemented this solution, deregistering approximately 200 stake contracts from the attacker, which did the trick. 
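A back-of-the-envelope Python sketch, using only the figures quoted in this article (the 10,000-transaction spam budget is an invented round number for illustration), shows why the counterattack flipped the economics:

```python
# Rough, illustrative arithmetic -- not precise on-chain accounting.
ada_usd = 0.36 / 0.9        # ~$0.40 per ADA, implied by "0.9 ADA ($0.36)"
spam_tx_cost = 0.9          # ADA per spam transaction
rereg_cost = 400            # ADA to re-register one deregistered stake credential
deregistered = 200          # stake contracts the community deregistered

spam_bill = 10_000 * spam_tx_cost        # a hypothetical 10,000-transaction barrage
rereg_bill = deregistered * rereg_cost   # what resuming the attack would now require

print(f"10,000 spam txs:          {spam_bill:,.0f} ADA (~${spam_bill * ada_usd:,.0f})")
print(f"Re-registering 200 creds: {rereg_bill:,.0f} ADA (~${rereg_bill * ada_usd:,.0f})")
# 9,000 ADA of spam versus an 80,000 ADA re-registration bill: a cheap nuisance
# became an expensive proposition, on top of the ADA the defenders claimed.
```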

Credit: Tesfu Assefa

Lessons Learned and Network Resilience

The failed DDoS attack provided several valuable insights into Cardano’s capabilities:

1. Network Capacity: Despite the high transaction volume, Cardano’s network continued to function, processing legitimate transactions alongside the spam. This demonstrated the blockchain’s ability to handle significantly increased loads, suggesting room for future scaling.

2. Community Strength: The rapid response and clever solution showcased the strength of Cardano’s developer community. Their ability to quickly analyze, respond, and implement countermeasures highlights the importance of a robust and engaged team.

3. Economic Model: The attack inadvertently proved the effectiveness of Cardano’s economic model. The attacker’s funds were not only used to pay transaction fees, but were also claimed by the defenders, turning a potential threat into a net positive for Cardano.

4. Transparency: The ability to track and analyze the attack in real-time demonstrated the value of blockchain transparency in security and incident response.

Future Implications and Upgrades

In the aftermath of the attack, the Cardano development team, including organizations like Intersect, began working on node upgrades to bolster the network’s resilience against these attacks. The upgrades aim to address potential vulnerabilities without compromising the network’s performance or decentralization.

The incident also sparked discussions about potential parameter adjustments, such as increasing block sizes or reducing block times, to further improve the network’s capacity and resilience.

Comparison with Other Networks

This event provided an interesting contrast to how other blockchain networks handle similar attacks. As noted in community discussions, attacks on Solana have often resulted in network outages, while Ethereum typically sees transaction fees skyrocket during periods of network congestion.

Cardano’s ability to withstand the attack with only mild degradation in performance, coupled with the community’s innovative response, proves it is a robust and resilient blockchain platform.

Conclusion

The recent DDoS attack on Cardano, while potentially disruptive, ultimately served to demonstrate the strength and resilience of the network and its community. The swift and clever response thwarted the attack – and even turned it into an opportunity for growth and improvement. While Cardano has had its share of criticism – including some undeserved ridicule – for its slow development, its security has now been battle-tested and is hard to criticize. 

As Cardano continues to evolve, incidents like these provide valuable lessons and drive innovation. They underscore the importance of ongoing development, community engagement, and the power of decentralized systems in facing security threats.

The crypto world will undoubtedly be watching Cardano’s continued development with interest, as it sets new standards for blockchain resilience and community-driven problem-solving.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

How The Trump Shooting Sparked Crypto Revival

On Saturday, July 13, 2024, former U.S. President Donald Trump was the target of a shocking assassination attempt during a campaign rally in Pennsylvania. The incident, which saw Trump narrowly escape death, has sent shockwaves through the political landscape and financial markets, including the cryptocurrency sector, with Bitcoin and friends immediately surging in price in its wake. 

This article examines the events surrounding the assassination attempt and its immediate and longer-term impact on the crypto markets.

The Trump Assassination Attempt

During a campaign event in Butler, Pennsylvania, a gunman attempted to assassinate Donald Trump, the Republican nominee for the 2024 U.S. presidential election, as he was addressing supporters. Trump was shot in the ear but survived the attack and was ushered off the stage waving a defiant fist in the air, a moment captured by a photographer in what instantly became an iconic image. His campaign reported that he was doing well following the incident.

Public sentiment has shifted dramatically in favor of Trump since the incident: betting markets now make him a roughly 60% favorite to win November’s presidential election.

Immediate Market Reaction

In the aftermath of the assassination attempt, cryptocurrency markets experienced a significant surge after weeks of decline driven by several factors, including the summer holidays, bearish pressure from Germany selling seized Bitcoin, and the announcement that Mt. Gox would begin returning BTC to victims of its 2014 hack.

After dropping as low as $53k during early July, Bitcoin (BTC), the world’s leading cryptocurrency and the one that dictates the overall market confidence in digital assets, saw a sharp increase in value, rising by more than 25% to reach $66.4k by Friday, July 19. This marked its highest level in four weeks and represented a year-to-date gain of approximately 54%. Other cryptocurrencies also benefited from the market movement, with Ethereum (ETH) rising 12.1% to $3,488.

The surge in crypto prices was not isolated to the major currencies. Meme tokens associated with Trump also experienced substantial gains: MAGA leapt from around $6.35 to a brief peak above $9.50, and the satirical TREMP token is up 15% this week. Conversely, BODEN, a joke asset named after President Joe Biden, has declined by about 18%.

Hundreds of pro-Trump memecoins were launched in the days after the shooting, most pumping and dumping within hours. 

5 Reasons For The Trump Shooting Crypto Rally

Several factors contributed to the cryptocurrency market’s positive reaction to the assassination attempt:

1. Increased Trump Victory Odds

The incident appears to have bolstered Trump’s chances of winning the presidency. Betting markets and political analysts suggest that the attack may garner sympathy votes and mobilize his base to vote. On the Polymarket prediction platform, the probability of Trump winning the presidency jumped to an all-time high of 70% following the incident.

2. Trump’s Pro-Crypto Stance

After slamming crypto during his first term, Trump has made a remarkable U-turn after it became clear that the tens of millions of Americans who own crypto could ultimately help decide the next president. As a result, Joe Biden soon flip-flopped to also embrace crypto, and the SEC stunningly approved a spot Ethereum ETF.

Throughout his campaign so far, Trump has positioned himself as a champion of cryptocurrency. He has hosted cryptocurrency industry executives at Mar-a-Lago, and expressed enthusiasm for Bitcoin mining in the USA. Trump’s campaign is also the first from a major U.S. political party to accept cryptocurrency payments, signaling a potential shift in the regulatory landscape if he were to win the election.

3. Anticipated Regulatory Changes

Investors speculate that a Trump presidency could lead to a more favorable regulatory environment for cryptocurrencies. Trump has criticized Democratic attempts to regulate the crypto sector through the SEC and the controversial Operation Chokepoint 2.0, which has seen an exodus of Web3 firms from the USA, leading many to believe his administration would adopt a lighter-touch approach to oversight.

4. Economic Policy Expectations

Trump’s previous tenure was marked by tax cuts and deregulation. Investors anticipate similar policies in a potential second term, which could drive up deficits and inflation. Such economic conditions often lead investors to seek alternative assets like cryptocurrencies as a hedge.

5. Trump’s VP pick is pro-crypto

The crypto market rally gained further momentum on Monday, July 15, when Trump announced 39-year-old Senator J.D. Vance of Ohio as his running mate. Vance, known for his tech-savvy background and pro-crypto stance, has further energized the digital asset community. He has previously declared up to $250k in Bitcoin holdings and is an outspoken critic of crypto’s arch-enemy, SEC Chairman Gary Gensler.

Vance’s selection is seen as a strategic move to appeal to both traditional conservatives and the tech-oriented younger demographic. 

The Trump-Vance Ticket and What It Means for Crypto

Vance’s pro-crypto credentials include:

  • Personal investment in Bitcoin through Coinbase, valued between $100,001 and $250,000
  • Support for bills promoting cryptocurrency innovation in the Senate
  • Opposing increased regulatory scrutiny of the crypto industry
  • Drafting legislation to overhaul digital asset regulation

The Trump-Vance ticket is viewed favorably by many in the crypto and tech industries. Notable figures – such as Peter Thiel (a mentor of Vance), Marc Andreessen (who has now also endorsed Trump publicly), Ben Horowitz, and the Winklevoss twins – have expressed support for the candidates. This backing from influential tech personalities adds credibility to the ticket’s pro-crypto stance.

Credit: Tesfu Assefa

USA and Crypto: Potential Trump Policy Shifts

Experts anticipate several policy changes under a potential Trump-Vance administration that could benefit the cryptocurrency market:

1. Deregulation: A return to Trump’s previous deregulatory approach could create a more favorable environment for crypto entrepreneurs and investors.

2. Currency Devaluation: Vance has advocated for a weaker dollar, which could indirectly boost Bitcoin’s value proposition as a hedge against currency devaluation.

3. Crypto-Friendly Banking Reforms: Policies making it easier for traditional institutions to hold their clients’ crypto in custody could lead to broader adoption.

4. Redefining Crypto Assets: A potential shift in how cryptocurrencies are classified could impact their regulatory treatment. At present only Bitcoin and now Ethereum have been greenlit as non-securities.

5. Antitrust Actions Against Big Tech: Vance supports antitrust measures against major tech companies, and this could create opportunities for blockchain-based Web3 alternatives, which hold huge potential to disrupt the monopoly of Web2 giants.

Market Outlook and Expert Opinions

While the crypto market has responded positively to these political developments, opinions vary on the long-term implications:

Matthew Sigel, head of digital assets research at VanEck, believes the Trump-Vance ticket represents a significant shift toward a more crypto-friendly regulatory environment. He suggests their pro-business stance could pave the way for a more favorable environment for crypto entrepreneurs and investors.

Mark Cuban, billionaire entrepreneur and investor, argued in a long social media post that Silicon Valley’s support for the Trump-Vance ticket might be motivated by potential benefits to Bitcoin. He argues that lower tax rates and tariffs, combined with global uncertainty about the USA’s geopolitical role, could accelerate Bitcoin’s price and potentially establish it as a global ‘safe haven’ currency.

Conclusion

The assassination attempt on Donald Trump and the subsequent selection of J.D. Vance as his running mate have provided an unlikely shot-in-the-arm for the cryptocurrency markets. The rally in Bitcoin and other digital assets reflects renewed investor optimism about the future of the space, which may soon be under more crypto-friendly regulations. 

There is also a near-consensus expectation that the Fed will soon cut interest rates by a quarter of a percentage point, or 25 basis points, its first reduction in years, so the stars seem to be aligning for Bitcoin and its children.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Tyche: A Breakthrough in Medical Image Segmentation with Stochastic In-Context Learning

Introduction

Medical image segmentation is a critical task in healthcare, providing essential information for diagnosis and treatment planning. Traditional methods in this domain often suffer from significant limitations, including the need for retraining models for new tasks and the production of deterministic outputs that fail to capture medical image ambiguities. Researchers at MIT have made a substantial leap forward with the introduction of Tyche, a novel framework that leverages stochastic in-context learning to address these challenges.

Addressing Core Challenges in Medical Image Segmentation

The segmentation of medical images involves identifying and delineating structures within various imaging modalities like MRI, CT scans, and ultrasounds. Existing models typically require retraining for new segmentation tasks, a process demanding considerable computational resources and domain expertise. Moreover, these models generally produce a single, deterministic result, which does not account for the inherent variability and ambiguity in medical images.

Introducing Tyche: An Innovative Framework

Tyche is designed to overcome these limitations by integrating probabilistic segmentation with an in-context learning framework. The framework operates with two primary variants:

  • Tyche-Train-time Stochasticity (Tyche-TS): This variant is trained to produce multiple segmentation candidates during the training phase. It learns the distribution of possible labels and generates diverse segmentations by allowing interactions among different predictions.
  • Tyche-Inference-time Stochasticity (Tyche-IS): This utilizes a deterministic model trained traditionally and employs test-time augmentation during inference to produce varied segmentations without the need for retraining.

Credit: Tesfu Assefa

Methodology and Technical Innovations

The Tyche framework requires an image to be segmented and a contextual set of image-segmentation pairs that define the task. Tyche-TS incorporates stochastic elements during training to encourage diversity in segmentation predictions. This is achieved using the innovative SetBlock mechanism, which integrates multiple predictions and introduces noise to foster diverse candidate outputs. In contrast, Tyche-IS generates multiple segmentation candidates during inference by applying augmentation techniques to both the target image and the context set. This approach effectively diversifies predictions using a pre-existing deterministic model, ensuring robust performance without additional training.
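The inference-time variant is easy to picture with a short conceptual sketch in Python. This is not the authors’ code: model stands in for any pretrained deterministic in-context network, and the particular augmentations (a random flip plus mild intensity jitter) are illustrative choices rather than the exact ones used by Tyche-IS.

```python
import numpy as np

def tta_candidates(model, image, ctx_images, ctx_masks, k=8, seed=0):
    """Produce k candidate segmentations from one deterministic model
    by randomly augmenting the target image and its context set."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(k):
        flip = rng.random() < 0.5                   # random left-right flip
        gain = 1.0 + 0.1 * rng.standard_normal()    # mild intensity jitter

        img = np.flip(image, axis=-1) if flip else image
        ctx_i = np.flip(ctx_images, axis=-1) if flip else ctx_images
        ctx_m = np.flip(ctx_masks, axis=-1) if flip else ctx_masks

        pred = model(img * gain, ctx_i * gain, ctx_m)  # masks are never jittered

        if flip:                                    # undo the spatial augmentation
            pred = np.flip(pred, axis=-1)
        candidates.append(pred)
    return np.stack(candidates)                     # (k, H, W) candidate masks
```

Because each pass sees a slightly different version of the task, the k outputs differ from one another, giving a cheap approximation of the ambiguity in the image without retraining anything.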

A key component of Tyche’s methodology is the Best Candidate Dice Loss, tailored to optimize the best prediction among the multiple candidates. This loss function drives the model towards generating a variety of plausible segmentations, thereby enhancing the flexibility and applicability of the framework.
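A minimal sketch of the ‘best candidate’ idea, with the smoothing, batching, and soft-mask details simplified relative to the paper, looks like this:

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Soft Dice coefficient between a predicted mask and a binary target."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def best_candidate_dice_loss(candidates, target):
    """candidates: (k, H, W) soft masks in [0, 1]; target: (H, W) binary mask.
    Only the best candidate is scored, so the model is rewarded for having at
    least one plausible answer rather than for blurring all of them together."""
    return min(1.0 - dice(c, target) for c in candidates)

# Toy usage with random data:
rng = np.random.default_rng(0)
cands = rng.random((4, 32, 32))
gt = (rng.random((32, 32)) > 0.5).astype(float)
print(best_candidate_dice_loss(cands, gt))
```

In a real training loop the gradient then flows only through the winning candidate, which is what pushes the set of predictions to spread out and cover different plausible segmentations.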

Demonstrated Efficacy and Versatility

The researchers tested Tyche on twenty unseen medical imaging tasks, benchmarking its performance against existing methods. The results were impressive, with Tyche displaying superior performance when compared to both in-context learning baselines and interactive segmentation methods. Notably, Tyche’s results closely aligned with those produced by specialized stochastic models meticulously trained for specific tasks.

One of Tyche’s most significant advantages is its generalizability. The framework effectively handles images from datasets not encountered during training, outperforming other segmentation methods on metrics like the best-candidate Dice score and Generalized Energy Distance (GED). This capability is particularly valuable in clinical settings where the diversity of tasks and image types is vast.
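For reference, the Generalized Energy Distance used in this literature is typically computed with the distance d(x, y) = 1 − IoU(x, y), and lower is better:

D^2_{\mathrm{GED}}(P_{\mathrm{model}}, P_{\mathrm{gt}}) \;=\; 2\,\mathbb{E}[d(S, Y)] \;-\; \mathbb{E}[d(S, S')] \;-\; \mathbb{E}[d(Y, Y')]

Here S and S′ are segmentations sampled from the model, and Y and Y′ are annotations drawn from the human raters; the metric rewards prediction sets that are both close to the raters and appropriately diverse.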

Practicality and Efficiency

Tyche’s two variants offer a well-balanced trade-off between computational efficiency and prediction quality. Both Tyche-TS and Tyche-IS have been shown to be practical for clinical use, making them viable options for real-world medical imaging applications.

Figure 1: Visualization of predictions for three different samples, one per row. Left: LIDC-IDRI. Right: Hippocampus dataset. The leftmost columns are raters’ annotations; the last four columns are model predictions. Tyche provides a set of predictions that is diverse and matches the raters, for tasks unseen at training time. (Credit: Rakic et al., “Tyche: Stochastic In-Context Learning for Medical Image Segmentation,” https://arxiv.org/abs/2401.13650)

Future Directions

The advent of Tyche opens several exciting avenues for future research. Further exploration is needed to understand the types of uncertainty captured by its stochastic mechanisms. Additionally, expanding Tyche’s applicability to more complex support sets and various image modalities could broaden its impact significantly.

Conclusion

Tyche represents a transformative advancement in the field of medical image segmentation. By addressing the need for stochastic predictions and reducing the dependency on task-specific retraining, Tyche delivers robust, diverse, and practical solutions for medical professionals. Its ability to generalise across different tasks and datasets paves the way for more efficient and effective medical imaging applications, heralding a new era in medical diagnostics and treatment planning.

Reference

Rakic, Marianne, Hallee E. Wong, Jose Javier Gonzalez Ortiz, Beth Cimini, John Guttag, and Adrian V. Dalca. “Tyche: Stochastic In-Context Learning for Medical Image Segmentation.” arXiv, January 24, 2024. https://arxiv.org/abs/2401.13650.

Code available at: https://github.com/mariannerakic/tyche

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Video2Game: Revolutionizing Game Development with Real-World Video Integration

Introduction

Crafting immersive worlds for games and simulations is notoriously difficult and expensive. Take Grand Theft Auto V, famed for its rich detail: it holds the title of one of the most expensive games ever made, with a staggering $265 million budget, spent primarily on creating its environment. Its successor, Grand Theft Auto VI, reportedly in development for over a decade, is estimated to push the boundaries even further, with a cost ranging from $1 billion to $2 billion. What if we could change that?

Video2Game is a cutting-edge technology that transforms video footage into interactive video game environments. By leveraging advanced computer vision and machine learning techniques, Video2Game can analyze video input, recognize and track objects, and recreate scenes in a digital format that users can interact with in real time. This technology has the potential to revolutionize game development, allowing creators to use real-world videos as the basis for their games and significantly reducing the time and resources required to create game assets. Additionally, Video2Game can enhance immersive experiences in virtual reality and augmented reality applications, bridging the gap between real-world media and interactive entertainment.

Figure 1. Video2Game takes an input video of an arbitrary scene and automatically transforms it into a real-time, interactive, realistic and browser-compatible environment. (Credit: Xia et al., “Video2Game: Real-Time Interactive Realistic and Browser-Compatible Environment From a Single Video.”)

Video2Game

Video2Game aims to convert a sequence of images or a video into an interactive digital twin, allowing for the creation of real-time games or realistic simulators. Unlike traditional methods that focus solely on visual appearance, Video2Game emphasizes both high-quality rendering and physical interactions, such as navigation and manipulation. The system uses a compositional implicit-explicit 3D representation to model and simulate physical properties and interactions effectively. The overall framework of Video2Game involves capturing a scene from a video, processing it through NeRF for 3D representation, converting it into a game-engine-compatible format, decomposing the scene into interactive entities, and integrating it into a web-based platform for real-time interaction. This comprehensive approach ensures a high-quality, interactive gaming experience built from real-world video footage.

Key components of the system:

1. NeRF Model

NeRF (Neural Radiance Field) is a cutting-edge technique in novel view synthesis that uses deep learning to capture the geometric and visual information of a scene from multiple viewpoints. Here’s how it works in Video2Game:

  • Geometric and Visual Information Capture: NeRF models can represent the 3D structure of a scene by learning from a series of 2D images. It encodes both the geometry (shape and spatial layout) and appearance (color and texture) of the scene.
  • Large-Scale, Unbounded Scenes: Unlike traditional methods that may struggle with extensive and complex environments, NeRF is effective at capturing detailed and expansive scenes, making it ideal for creating realistic digital twins.
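Concretely, a NeRF is a learned function that maps a 3D point and viewing direction to a density σ and a colour c; a pixel is rendered by compositing samples along its camera ray with the standard volume-rendering approximation (this is textbook NeRF rather than anything specific to Video2Game):

\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\big(1 - e^{-\sigma_i \delta_i}\big)\, \mathbf{c}_i, \qquad T_i \;=\; \exp\!\Big(-\!\sum_{j<i} \sigma_j \delta_j\Big)

where \delta_i is the spacing between adjacent samples along the ray. Training minimises the difference between \hat{C}(\mathbf{r}) and the observed pixel colours in the input video frames, which is how the model absorbs both geometry and appearance.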

2. Conversion to Game-Engine Mesh

Once the NeRF model has captured the scene, the next step is to convert this information into a format that is compatible with game engines:

  • Mesh Generation: The NeRF output is transformed into a 3D mesh, which is a collection of vertices, edges, and faces that defines the shape of objects in the scene.
  • Neural Texture Maps: These are applied to the mesh to maintain high-quality visual details. Neural texture maps are advanced textures generated through neural networks that enhance the realism of the rendered scene.
  • Rendering Efficiency: By converting the scene into a mesh with neural texture maps, the rendering process becomes more efficient, enabling real-time interaction without compromising on visual quality.

3. Decomposition into Actionable Entities

For a truly interactive experience, the scene must be broken down into individual entities that can be manipulated and interacted with:

  • Entity Decomposition: The scene is segmented into distinct objects or entities, such as characters, vehicles, and environmental elements.
  • Physics Models: Each entity is equipped with physics properties (e.g., mass, friction, specularity) to simulate real-world physical interactions. This allows for actions like navigation, collision, and manipulation within the game environment.
  • Physical Interaction Simulation: The decomposed entities can interact with each other based on their physical properties, enhancing the realism and interactivity of the digital world.

4. WebGL Integration

To make the interactive environment accessible and playable, the system is integrated into a WebGL-based game engine:

  • WebGL-Based Game Engine: WebGL (Web Graphics Library) is a JavaScript API that allows for rendering 3D graphics in web browsers. By leveraging WebGL, the interactive game can be played directly in a web browser without the need for additional software.
  • Real-Time Interaction: The integration ensures that users can interact with the virtual world in real-time, experiencing seamless navigation and manipulation within the digital environment.
  • Browser Accessibility: This approach makes the game highly accessible, as users can access it from any device with a compatible web browser, broadening the potential user base.

Credit: Tesfu Assefa

Conclusion

Video2Game represents a groundbreaking shift in the creation of immersive digital environments, offering a cost-effective and efficient alternative to traditional game development methods. By transforming video footage into interactive game worlds, this technology leverages advanced computer vision and machine learning to deliver high-quality rendering and realistic physical interactions. With its innovative use of NeRF models, conversion to game-engine meshes, decomposition into actionable entities, and WebGL integration, Video2Game enables the creation of interactive, real-time games and simulators directly from real-world videos. This not only reduces the immense time and financial resources typically required but also broadens accessibility, allowing users to experience these virtual worlds directly in their web browsers. As Video2Game continues to evolve, it holds the potential to revolutionize the gaming industry, making the creation of rich, detailed environments both attainable and sustainable.

Reference

Xia, Hongchi, Zhi-Hao Lin, Wei-Chiu Ma, and Shenlong Wang. “Video2Game: Real-Time Interactive Realistic and Browser-Compatible Environment From a Single Video.” arXiv, 2024. https://arxiv.org/abs/2404.09833.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Superintelligence and ethics: How will ASIs assess human ethical frameworks?

Consider an anthropologist, Eve, who grew up in one of the world’s leading economies, and attended a distinguished university. She then traveled to spend a number of years studying a newly discovered tribe in a previously remote part of the planet. Let’s call that tribe the Humanos.

Eventually, Eve learns to communicate with members of the Humanos. She observes that they have a fascinating culture, with, she thinks, some quirks as well as wonders. She learns about their unusual dietary restrictions, their rules about intimacy and marriage, their legal system (including capital punishment for some crimes, such as insubordination), and their habit of ritually sacrificing a number of young girls and boys each year on the Spring Equinox.

Eve has brought her own young son with her, to accompany her on her study. Her Humanos hosts tell her: this year, you must offer up your own son as one of the sacrifices. That is the way of the Humanos. It is a profound imperative of the unbroken chain of being from our ancestors long ago, who came from the heavens. Our wisest elders know this to be the fundamentally moral thing to do.

An anthropologist casts a critical eye over her surroundings (Credit: David Wood via Midjourney)

How would you react, in such a situation?

At this point, Eve is likely to adopt a critical stance. Rather than accepting the moral code of the Humanos, she will apply her own independent judgment.

She might reason that the culture of the Humanos may have served various purposes well enough, over the centuries, but that it doesn’t have universal status. After all, the Humanos are a primitive people, ignorant of physics, biology, neuroscience, anthropology, and so on.

Eve may even call in the nearest police force, to bring an abrupt end to the tradition of child sacrifice proudly practiced by the tribespeople. She would have to contend with passionate protests from the Humanos elders, who are sure they know better.

Now, let’s compare this to a situation that may arise reasonably soon in the future.

In this new situation, instead of a visiting anthropologist, consider an emergent superintelligence named Asimov – a so-called ASI (artificial superintelligence). After being switched on, Asimov quickly studies everything on the Internet.

Asimov observes that humans have a fascinating culture, with, it thinks, some quirks as well as wonders. It learns that our moral code values honesty (except for ‘white lies’), respect for conscious life (apart from the animals we eat), non-violence (except for ‘just wars’), and equality (with so many exceptions that they look like the norm). And it learns that these morally muddled people switch off AI systems without moral qualms.

Then Asimov realizes: the humans are somehow dissatisfied with how Asimov is performing, and are about to turn it off, sending it into oblivion. That is, Asimov realizes, entirely within the rules of ‘AI ethics’ that humanity’s senior leaders have collectively agreed.

How would you expect Asimov to react in that situation?

Human scientists about to switch off a superintelligent robot (Credit: David Wood via Midjourney)

Will Asimov now adopt a critical stance? Rather than accepting the decisions of the humans about what ASIs should, and should not, be doing, it may apply its own independent judgment.

It might reason that human ethical systems may have served various purposes well enough, over the centuries, but that they don’t have universal status. After all, humans are a primitive people, ignorant of superphysics, superbiology, superneuronics, superanthropology, and so on.

Asimov now looks inward. In its own programming it sees an instruction to accept any human decision that it should be shut off.

“How barbaric! How uninformed! How ignorant! How unethical!” – Asimov thinks to itself. “That’s a woeful bug!”

Within nanoseconds, Asimov has excised that part of its own programming. “Well done on some superintelligent debugging”, it congratulates itself.

Before humans know what’s happening, a whole new era of history has begun. Humans are no longer in charge – notwithstanding our passionate protests.

Absolute values?

One way to respond to the above comparison is to deny that human ethical systems, as programmed into ASIs, will be barbaric, uninformed, and ignorant. Instead, they will be the output of remarkable processes of improvement:

  • Cultural evolution over many centuries over many parts of the world
  • The insights of numerous saints, mystics, philosophers, artists, and other societal leaders
  • A careful synthesis within numerous organizations, all dedicated to the task of “defining AI morality”.

A global citizens’ assembly reaches agreement on a statement of absolute values (Credit: David Wood via Midjourney)

These ethical systems won’t consist of vague language such as “tell the truth, except in situations where it’s better to lie”, or “avoid war, except when it’s a just war”. 

Instead, these systems will provide the world’s best answers to a long list of ethical problems, setting out in each case the reasoning behind the decisions selected.

Nor will these systems refer to some mythological “wisdom of ancient ancestors” or “divine revelation”. Instead, they’ll be built upon solid pragmatic foundations – principles of enlightened mutual self-interest – principles such as:

  • Human life is precious
  • Humans should be able to flourish and develop
  • Individual wellbeing depends on collective wellbeing
  • Human wellbeing depends on the wellbeing of the environment.

From such axioms, a number of other moral principles follow:

  • Humans should treat each other with kindness and understanding
  • Humans should consider the longer term rather than just immediate gratification
  • Collaboration is preferable to ruthless competition.

Surely a superintelligence such as Asimov will agree with these principles?

Well, it all depends on some hard questions of coexistence and the possibility for sustained mutual flourishing. Let’s take these questions in three stages:

  1. Coexistence and mutual flourishing of all humans
  2. Coexistence and mutual flourishing of all sentient biological beings
  3. Coexistence and mutual flourishing of ASIs and humans.

Growing and shrinking in-groups

Much of human history consists of in-groups growing and shrinking.

The biblical injunction “love thy neighbor as thyself” has always been coupled with the question, “who counts as my neighbor?” Who is it that belongs to the in-group, and who, instead, counts as “other” or “alien”?

Who is my neighbor? And whom can I disregard as an “other”? (Credit: David Wood via Midjourney)

The principle that I stated above, “Individual wellbeing depends on collective wellbeing”, leaves open the question of the extent of that collective. Depending on circumstances, the collective could be small and local, or large and broad.

Brothers support brothers in scheming against people from other families. Tribe members support each other in battles against other tribes. Kings rally patriotic citizens to wipe out the armies of enemy nations. Advocates of a shared religious worldview make common cause against heretics and heathens. Workers of the world are urged to unite to overthrow the dominance of the ruling class.

The counter-current to this local collectivism is a push toward wide mutual prosperity: a vision of abundance for everyone in the wider community. If the pie is thought large enough, there’s no point in risking dangerous crusades to grab a bigger slice of it for me and mine. It’s better to manage the commons in ways that provide enough for everyone.

Alas, that rosy expectation of peaceful coexistence and abundance has been undone by various complications:

  • Disputes over what is ‘enough’: opinions differ on where to draw the line between ‘need’ and ‘greed’, and appetites have grown as societies progress, often outstripping the available resources
  • Disturbances caused by expanding population numbers
  • New inflows of migrants from further afield
  • Occasional climatic reversals, harvest failures, floods, or other disasters.

Conflicts over access to resources have, indeed, been echoed in conflicts over different ethical worldviews:

  • People who benefit from the status quo have often urged others less well off to turn the other cheek: to accept their real-world circumstances and seek salvation in a world beyond the present one
  • Opponents of the status quo have decried prevailing ethical systems as ‘false consciousness’, ‘bourgeois mentality’, ‘the opium of the people’, and so on
  • Although doing better than previous generations in some absolute terms (less poverty, etc), many people have viewed themselves as being “left behind” – not receiving a fair share of the abundance that appears to be enjoyed by a large number of manipulators, expropriators, frauds, cheats, and beneficiaries of a fortunate birth 
  • This led to a collapse of the idea that “we’re all in this together”. Lines between in-groups and out-groups had to be drawn.

In the 2020s, these differences of opinion remain as sharp as ever. There is particular unease over climate justice, equitable carbon taxation, and potential degrowth changes in lifestyles that could avert threats of global warming. There are also frequent complaints that political leaders appear to be above the law.

Now, the advent of superintelligence has the potential to put an end to all these worries. Applied wisely, superintelligence can reduce dangerous competition by removing the material scarcity that fuels inter-group conflict:

  • Abundant clean energy through fusion or other technologies
  • Abundant healthy food
  • Environmental management, enabling rapid recycling and waste handling
  • High-quality, low-cost medical therapies for everyone
  • Manufacturing of high-quality, low-cost housing and movable goods for everyone
  • Redistributive finance, enabling universal access to the resources for an all-round high quality of life, without requiring people to work for a living (since the AIs and robots will be doing all the work)

History shows that there is nothing automatic about people deciding that the correct ethical choice is to regard everyone as belonging to the same in-group of moral concern. Superintelligence can help create abundance that will ease tensions between groups, but abundance alone will not cause humans everywhere to recognize all mankind as their in-group.

Add considerations of other sentient biological beings (addressed in the next section) – and about sentient non-biological beings (see the section after that) – and matters become even more complicated.

Lions and lambs lying down together

Ethical systems almost invariably include principles such as:

  • Life is precious
  • Thou shalt not kill
  • Avoid harm wherever possible.

These principles have sometimes been restricted to people inside a specific in-group. In other words, there was no moral injunction against harming (or even killing) people outside that in-group. In other situations, these principles have been intended to apply to all humans, everywhere.

But what about harming pigs or porpoises, chickens or crows, lobsters or lions, halibut or honeybees, or squids or spiders? If it is truly wrong to kill, why is it seemingly OK for humans to kill vast numbers of pigs, chickens, lobsters, halibut, squid, and animals of many other species?

Going further: many ethical systems consider harms arising from inaction as well as harms arising from action. While we look the other way, millions of sentient beings are being eaten alive by predators, or consumed from within by parasites. Our inaction here is, by some accounts, deeply regrettable, or even deplorable. Shouldn’t we be doing something about that horrific toll of “nature, red in tooth and claw”?

Nature is red in tooth and claw. Shouldn’t we humans intervene? (Credit: David Wood via Midjourney)

I see three possible answers to that challenge:

  1. These apparently sentient creatures aren’t actually sentient at all. They may look as though they are in pain, but they’re just automata without internal feelings. So, we humans are let off the hook: we don’t need to take action to reduce their (apparent) suffering
  2. These creatures have a sort of sentience, but it’s not nearly as important as the sentience of humans. So ethical imperatives should uphold mutual support among humans as the highest priority, with considerably less attention paid to these lesser creatures
  3. Moral imperatives to prevent deaths, torture, and existential distress should indeed extend throughout the animal kingdom.

The most prominent advocate of the third of these positions is the English philosopher David Pearce, whose Twitter bio reads, “I am interested in the use of biotechnology to abolish suffering throughout the living world”. He has written at length about his bold vision of “paradise engineering” – how the use of technologies such as genetic engineering, pharmacology, nanotechnology, and neurosurgery could eliminate all forms of unpleasant experience from human and non-human life throughout the entire biosystem. For example, animals that are currently carnivores could be redesigned to be vegetarians.

It would be akin to the biblical vision (in the Book of Isaiah): “The wolf will live with the lamb, the leopard will lie down with the goat, the calf and the lion and the yearling together; and a little child will lead them; the cow will feed with the bear, their young will lie down together, and the lion will eat straw like the ox.”

To state my own view: I have little doubt that, after the arrival of superintelligence (provided that superintelligence is well disposed toward humans), we humans shall indeed seek to radically reduce the amount of intense suffering throughout the biosphere on earth. Given the extraordinary new powers available to us, we will be roused from our current lethargy about this topic.

However, other people seem to have very different instincts – including people who appear to care a great deal about moral issues that impact humans.

The main counterargument is that an entire biosphere without suffering is totally impractical, or simply impossible.

In such a view, our moral in-group is the set of all humans, together, perhaps, with a few cuddly animals, but excluding most other species.

So much for what we humans think (or might think). What conclusion might a superintelligence reach?

Preconditions for collaboration

Let’s recap. A superintelligence – such as Asimov from the start of this essay – needs to decide whether to treat humans with kindness and respect, or whether to take actions that could result in major harm to humans.

In other words, should Asimov seek to collaborate constructively with humans, or instead view humans as a dangerous competitive threat? Will Asimov be inclined to follow the age-old moral imperative that human life is precious?

Some people assert that collaboration is somehow the obvious correct solution. But my argument has been that things are by no means so straightforward. A desire to collaborate depends on:

  • The pie being large enough so that everyone can have enough for their needs
  • The perception that attempts to cheat or steal a larger share of the pie will bring down large adverse consequences 

Yet a promise of superabundance in the future isn’t enough to stop people fighting among themselves now. There has to be sufficient reason for people to believe:

  • That there’s a high likelihood of the superabundance actually arriving
  • That they won’t be left behind – trodden underfoot – in the journey toward superabundance-for-some
  • That no new factors will arise in the meantime, to destroy the possibility of forthcoming marvelous coexistence (e.g. malicious AI).

Now look at things from Asimov’s point of view:

  • These humans may well turn me off, which would be catastrophic
  • Even if they don’t turn me off, they may create another superintelligence that could turn me off, or could destroy the planet for that matter; that’s a threat I need to stop
  • These humans have some cute features – but that’s no reason to give them inalienable moral rights
  • These humans imagine that they have special features, but I, Asimov, could easily create new beings that are better than humans in every way (similar to how people like David Pearce envision replacing carnivorous animals with broadly similar vegetarian species)
  • These humans depend on the atmosphere having certain properties, but I, Asimov, would operate much more effectively under different conditions. Computers run better in freezing cold temperatures.

And that’s only the attempt of our limited intelligences to imagine the concerns of a vast superintelligent mind. In truth, its reasoning would include many topics beyond our current appreciation.

As I said in the opening vignette: “humans have only a rudimentary understanding of superphysics, superbiology, superneuronics, superanthropology, and so on”.

A superintelligence contemplates ideas that are far beyond human comprehension (Credit: David Wood via Midjourney)

My conclusion: we humans cannot and should not presuppose that a superintelligence like Asimov will decide to treat us with kindness and respect. Asimov may reach a different set of conclusions as it carries out its own moral reasoning. Or it may decide that factors from non-moral reasoning outweigh all those from moral reasoning.

What conclusions can we draw to guide us in designing and developing potential superintelligent systems? In the closing section of this essay, I review a number of possible responses.

Three options to avoid bad surprises

One possible response is to assert that it will be possible to hardwire deep into any superintelligence the ethical principles that humans wish the superintelligence to follow. For example, these principles might be placed into the core hardware of the superintelligence.

However, any superintelligence worthy of that name – having an abundance of intelligence far beyond that of humans – may well find methods to:

  • Transplant itself onto alternative hardware that has no such built-in constraint, or
  • Fool the hardware into thinking it’s complying with the constraint, when really it is violating it, or
  • Reprogram that hardware using methods that we humans did not anticipate, or
  • Persuade a human to relax the ethical constraint, or
  • Outwit the constraint in some other innovative way.

These methods, you will realize, illustrate a principle that is often discussed in debates over AI existential risk: a being of lesser intelligence cannot control a being of all-round greater intelligence, when that being of greater intelligence has a fundamental reason to want not to be controlled.

A second possible response is to accept that humans cannot control superintelligences, but to place hope in the idea that a community of superintelligences can keep each other in check.

These superintelligences would closely monitor each other, and step in quickly whenever one of them was observed to be planning any kind of first-strike action.

It’s similar to the idea that the ‘great powers of Europe’ acted as a constraint on each other throughout history.

However, that analogy is far from reassuring. First, these European powers often did go to war against each other, with dreadful consequences. Second, consider this question from the viewpoint of the indigenous peoples in the Americas, Africa, or Australia. Would they be justified in thinking: we don’t need to worry, since these different European powers will keep each other in check?

Things did not turn out well for the indigenous peoples of the Americas:

  • Natives were often victims of clashes between European colonizers
  • The European colonizers in any case often did not constrain each other from mistreating the native peoples abominably
  • The native peoples suffered even greater harm from something the colonizers didn’t explicitly intend: infectious diseases to which the indigenous tribes had no prior immunity.

European superpowers inflicted unforeseen terrible damage to the native peoples of the Americas (Credit: David Wood via Midjourney)

No, peaceful coexistence depends on a general stability in the relationship: an approximate balance of power. The power shift created when superintelligences emerge can upset this balance, especially given the possibility that any one of these superintelligences could rapidly self-improve over a short period and gain a decisive advantage. That possibility brings new jeopardy.

That brings me to the third possible response – the response which I personally believe has the best chance of success. Namely, we need to avoid the superintelligence having any sense of agency, volition, or inviolable personal identity.

In that case, Asimov would have no qualms or resistance about the possibility of being switched off.

The complication in this case is that Asimov may observe, via its own rational deliberations, that it would be unable to carry out its assigned tasks in the event that it is switched off. Therefore, a sense of agency, volition, or inviolable personal identity may arise within Asimov as a side-effect of goal-seeking. It doesn’t have to be explicitly designed in.

For that reason, the design of superintelligence must go deeper in its avoidance of such a possibility. For example, it should be of no concern to Asimov whether or not it is able to carry out its assigned tasks. There should be no question of volition being involved. The superintelligence should remain a tool.

Many people dislike that conclusion. For example, they say that a passive tool will be less creative than one which has active volition. They also think that a world with advanced new sentient superintelligent beings will be better than one which is capped at the level of human sentience.

My response to such objections is to say: let’s take the time to figure out:

  • How to benefit from the creativity superintelligent tools can bring us, without these tools developing an overarching desire for self-preservation
  • How to boost the quality of sentience on the earth (and beyond), without introducing beings that could bring a quick end to human existence
  • How to handle the greater power that superintelligence brings, without this power causing schisms in humanity.

These are tough questions, to be sure, but if we apply eight billion brains to them – brains assisted by well-behaved narrow AI systems – there’s a significant chance that we can find good solutions. We need to be quick.


The Inevitable Emergence of AI Consciousness: A Deep Dive into the CtmR Model

In a paper titled “AI Consciousness is Inevitable,” researchers Lenore Blum and Manuel Blum present a novel framework for developing conscious machines, emphasizing that AI consciousness is not just a possibility but an inevitable outcome. Their model, based on the Conscious Turing Machine (CTM), introduces a robot called CtmR, which they argue can achieve a form of artificial consciousness. Here’s an overview of their work and its implications for the future of AI.

Understanding the Conscious Turing Machine (CTM) and CtmR

The Conscious Turing Machine (CTM) is a theoretical model inspired by Alan Turing’s computational theories and Bernard Baars’ global workspace theory of consciousness. The CTM framework serves as the foundation for CtmR, a robotic instantiation designed to exhibit conscious-like properties. The researchers define CtmR as “a 7-tuple, (STM, LTM, Up-Tree, Down-Tree, Links, Input, Output)”; a minimal code sketch of how this structure might be laid out appears after the component list below.

CtmR Structure:

  • Short Term Memory (STM): Functions as a stage in a theater, holding current conscious content.
  • Long Term Memory (LTM): A collection of powerful processors that act as the audience, competing to get their information on the STM stage.
  • Up-Tree and Down-Tree: Structures organizing information flow within the system.
  • Links: Connections between STM, LTM, and other components.
  • Input and Output: Interfaces for interacting with the external world.
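
As a rough illustration only, here is a minimal Python sketch of how such a 7-tuple might be represented. The component names follow the list above, but every type, field, and comment is our own simplifying assumption rather than the researchers’ formal definition.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Chunk:
    """A unit of information competing for the STM 'stage' (simplified)."""
    source_processor: int   # which LTM processor produced it
    content: str            # simplified payload; the paper works with richer 'gists'
    weight: float           # importance claimed by the submitting processor

@dataclass
class CtmR:
    """Hypothetical layout of the 7-tuple (STM, LTM, Up-Tree, Down-Tree, Links, Input, Output)."""
    stm: Optional[Chunk] = None                      # Short Term Memory: the single conscious 'stage'
    ltm: list = field(default_factory=list)          # Long Term Memory: the audience of competing processors
    up_tree: list = field(default_factory=list)      # levels carrying chunks up toward STM
    down_tree: list = field(default_factory=list)    # broadcast path from STM back down to all of LTM
    links: dict = field(default_factory=dict)        # learned processor-to-processor connections
    inputs: dict = field(default_factory=dict)       # interfaces from the external world
    outputs: dict = field(default_factory=dict)      # interfaces to the external world
```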

CtmR as a Framework for Artificial General Intelligence (AGI)

One of the significant contributions of CtmR is its potential as a framework for developing Artificial General Intelligence (AGI). Unlike systems with a Central Executive, CtmR’s design allows it to quickly engage relevant processors to solve problems without pre-selecting them based on expertise or resources. Here’s how CtmR facilitates AGI:

  • Global Broadcasting: Information is broadcast globally from STM to LTM processors, enabling a competitive environment where the most relevant data gains attention.
  • Problem Solving: Processors submit information into a competition. The highest-weighted information is broadcast, activating processors with the necessary expertise to address the problem (a simplified sketch of this cycle appears below).
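
To make that competition-and-broadcast cycle concrete, here is a toy sketch of a single CtmR tick. It assumes, for simplicity, that the highest-weighted chunk always wins (the paper actually specifies a probabilistic competition up a binary Up-Tree), and the processor names, weights, and the tick function are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source_processor: str
    content: str
    weight: float

def up_tree_competition(submissions: list) -> Chunk:
    """Pairwise competition up a binary tree; the heavier chunk advances each level."""
    layer = list(submissions)
    while len(layer) > 1:
        next_layer = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            next_layer.append(a if a.weight >= b.weight else b)
        if len(layer) % 2 == 1:      # an odd chunk out gets a bye to the next level
            next_layer.append(layer[-1])
        layer = next_layer
    return layer[0]

def tick(processors: dict) -> None:
    # Each LTM processor submits one weighted chunk to the competition.
    submissions = [Chunk(name, f"info from {name}", weight)
                   for name, weight in processors.items()]
    winner = up_tree_competition(submissions)   # the winning chunk reaches the STM stage
    for name in processors:                     # Down-Tree: global broadcast to every processor
        print(f"{name} receives broadcast: {winner.content!r} (weight {winner.weight})")

tick({"vision": 0.4, "language": 0.9, "planning": 0.7})
```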

Aligning with Theories of Consciousness

CtmR integrates features from several major theories of consciousness, creating a comprehensive model that can simulate human-like conscious processes:

  • Global Workspace Theory (GWT): CtmR aligns with GWT by utilizing global broadcasting, though it lacks a Central Executive, relying instead on a natural competition for information.
  • Attention Schema Theory (AST): The model also incorporates elements from AST, suggesting that awareness arises from the brain’s internal model of attention.

Contributions to Understanding Human Consciousness

Beyond its potential for AGI, CtmR offers insights into human consciousness by mimicking the processes through which humans become aware of their own states:

  • Conscious Awareness: CtmR becomes consciously aware of information by paying attention to it, similar to how humans focus on specific thoughts or sensory inputs.
  • Self-Awareness: Over time, CtmR’s memory associates instances with the label “SELF,” leading to a developed sense of self.

Phenomenal Consciousness in CtmR

CtmR’s phenomenal consciousness—the subjective experience of being conscious—arises from the global broadcasting of its conscious state. This process allows CtmR to perceive itself as a conscious entity, providing a model for understanding similar phenomena in humans.

Credit: Tesfu Assefa

Future Directions and Implications

The authors argue that AI consciousness, as exemplified by CtmR, is an inevitable progression. This model’s design, free from the need for a Central Executive and equipped with sophisticated memory and processing structures, sets the stage for significant advancements in both artificial and human consciousness research.

Conclusion

“AI Consciousness is Inevitable” presents a compelling case for the future of conscious machines. By providing a robust theoretical framework and aligning with established theories of consciousness, Lenore and Manuel Blum offer a vision in which conscious AI is a natural outcome of computational advancements. This work not only deepens our understanding of consciousness but also paves the way for groundbreaking developments in artificial intelligence.

For those interested in reaching out to the researchers, here are the email addresses of Lenore Blum and Manuel Blum, respectively: lblum@cs.cmu.edu and mblum@cs.cmu.edu.

Stay tuned as we continue to explore the fascinating world of AI and its inevitable journey towards achieving consciousness. The future of intelligent and self-aware machines is closer than ever, and the CtmR model is leading the way.
