Bitcoin Hits All-Time High Notes in Early 2024

Bitcoin is back in your friends’ and family’s chats, as the world’s most valuable digital asset broke its previous all-time high price record in dramatic fashion. Months of sustained growth, which also saw other crypto niches like AI cryptocurrencies (riding the artificial intelligence trend) and memecoins (easy-to-understand mainstream tokens) balloon in value, seemed unstoppable once most of the Grayscale spot ETF redemptions were done, and investors are now frothing at the sky-high predictions circulating across social media and cable news. 

After spending over two years trading far below its November 2021 peak, Bitcoin defied the naysayers by rocketing past $73,000 to set a new all-time high of $73,750 on 14 March.

This latest achievement capped an extraordinary bull run for the crypto market leader, whose price has risen over 50% so far in 2024 amid frenzied trading activity. On the day it reached its new record high, over $100 billion worth of Bitcoin changed hands as investors piled in, hoping to ride the wave higher.

Credit: Coinglass

Bitcoin’s new personal best above $73,000 follows record-setting earlier in the month, when it hit $69,000 for the first time on 5 March. That level proved too hard to flip from resistance into support, and a cascade of liquidations followed: the price plunged by double digits, falling over 15% to $59,000, before staging a V-shaped recovery to the new high and then settling around the $71,000 where it currently trades. 

This whiplash-inducing price action underscores the intense tug-of-war still playing out across crypto markets, where TradFi whales have now entered the ring and brought with them some heavy bags to throw around. 

So what forces converged to propel the world’s largest cryptocurrency to such lofty new heights in the first quarter of 2024? As is often the case in these bullish periods, the rally was fueled by the perfect storm of optimistic narratives and real-world regulatory and economic developments. We already covered many of these in last year’s 2024 Crypto Bull Run article, and I recommend you go revisit it. 

Bitcoin spot ETFs create insatiable BTC demand

Credit: The Block

Reigniting interest in Bitcoin this year were the highly-anticipated launches of spot Bitcoin exchange-traded funds (ETFs) by major financial players like BlackRock, Fidelity, and Grayscale. After years of rejections by US regulators, multiple spot Bitcoin ETFs finally received approval in January 2024.

These ETFs allow institutional and retail investors to gain exposure to Bitcoin’s price movements through a regulated, familiar product that directly holds the underlying crypto assets. Huge sums of investment capital flooded into these ETFs right out of the gate, creating immense buy pressure in the spot Bitcoin markets as the funds raced to accumulate enough BTC to back their fund shares.

Just two months after launching, the Bitcoin spot ETFs had already vacuumed up a staggering combined total of over $45 billion in assets under management, representing around 684,000 BTC: over 3% of the total circulating supply. Inflows showed no signs of stopping, with BlackRock’s fund alone reaching $10 billion in just seven weeks.
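The supply-share arithmetic above can be sanity-checked in a few lines. Note the circulating-supply figure here is an assumption (roughly 19.65 million BTC in early 2024), consistent with the article’s ‘over 19 million’:

```python
# Back-of-envelope check of the ETF holdings figures quoted above.
etf_btc_held = 684_000           # combined BTC held by the spot ETFs
circulating_supply = 19_650_000  # assumed circulating supply, early 2024

share = etf_btc_held / circulating_supply
print(f"ETF share of circulating supply: {share:.1%}")  # roughly 3.5%
```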

Bitcoin Halving

Another major factor driving Bitcoin’s rally has been building anticipation of the cryptocurrency’s next ‘halving’ event, expected in April 2024. This programmed halving of Bitcoin’s mining reward happens automatically roughly every four years, cutting the new supply hitting the market by 50%. The upcoming halving, the fourth in Bitcoin’s history, will see the block reward drop from 6.25 to 3.125 BTC, down from the original 50 BTC when the network launched in 2009. With over 19 million Bitcoin already mined (and possibly a quarter of those lost), there is less and less to go around. Previous halvings have reliably preceded massive price increases, as the shrinking supply helped fuel further buying demand.
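The halving schedule itself is simple enough to sketch in code. This is a simplified model (real Bitcoin nodes compute the subsidy in integer satoshis with a bit shift), but it reproduces the reward steps described above:

```python
# Bitcoin's block subsidy starts at 50 BTC and halves every 210,000 blocks,
# which works out to roughly once every four years.
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50.0

def block_subsidy(height: int) -> float:
    """Return the BTC block reward at a given block height (simplified)."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

# The April 2024 halving occurs at block 840,000 -- the fourth halving:
print(block_subsidy(839_999))  # 6.25  (pre-halving)
print(block_subsidy(840_000))  # 3.125 (post-halving)
```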

Many analysts and industry experts have forecast that Bitcoin will surge well past $100,000 within 12-18 months of April’s halving, based on historical patterns of past cycles in which prices eventually climbed 10× or higher following the supply shocks. Speculators piled in early to front-run the perceived upside.

FASB Accounting

Bitcoin’s rally also got a boost in late 2023 from the adoption of long-awaited new accounting rules for US public companies holding cryptocurrencies. Starting in 2025, new FASB guidance will allow businesses to value certain crypto assets at fair market value on their balance sheets each reporting period, marking a huge upgrade from the current treatment of Bitcoin as an ‘indefinite-lived intangible asset’.

This rule change laid the groundwork for even broader institutional adoption of Bitcoin, removing one of the final barriers for public companies and investment funds looking to add exposure without complex workarounds.

US Elections, SEC and Interest Rate Cuts

Another important consideration is the stance US policymakers are taking on Bitcoin and crypto. While the SEC under chair Gary Gensler views most cryptocurrencies as securities, it treats Nakamoto’s coin as a commodity thanks to its proof-of-work origins. The Fed, whose FOMC meetings became horror movie nights for crypto investors in 2022 and 2023, will likely soon have to relent and start cutting its inflation-crunching interest rates, or fight a recession instead. And of course, America elects its next President this year, a contest that should drag the regulatory crackdown critics call ‘Operation Chokepoint 2.0’ into the spotlight. Even Donald Trump is warming to Bitcoin now. 

Credit: Tesfu Assefa

What’s different this time round? 

While the Bitcoin frenzy and associated wildly optimistic price forecasts provoked understandable flashbacks to the 2017 and 2021 crypto bubbles for market veterans, the 2024 rally did have some key fundamental differences.

This time, sustained upward price pressure came not from shadowy derivatives platforms like FTX and opaque stablecoin ecosystem hazards like you-know-who, but from transparent, battle-hardened regulated funds and publicly-traded companies allocating directly to Bitcoin’s core layer-1. Record volumes and open interest levels on trusted exchanges – centralized and decentralized, spot and derivative – reflected genuine liquidity.

And whereas the 2021 peak was fueled by hype around experimental blockchain technologies and corporate marketing gimmicks, Bitcoin’s 2024 renaissance had the more grounded narrative of finally fulfilling its long-awaited promises as a now matured value reservoir and decentralized financial network, emboldened by the embrace of legacy institutional capital via ETFs, which may soon extend to Ethereum.

Of course, only time will tell if Bitcoin can maintain these lofty price levels or if the market mania will once again dissolve into despair. It’s easy to get caught up in the FOMO, and even easier to hit that sell button at a loss when market euphoria wears off and those 20% drops or weeks of sideways action break your resolve. 

But in its latest epic price run, crypto’s top dog (sorry Doge) demonstrated its incredible ability to capture imaginations and animate markets around the world like no other asset.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

Ghostbuster: Unprecedented Accuracy in AI-Generated Text Detection

Text generated by language models, like ChatGPT, is getting better and better at mimicking human language. But doubts have been raised about the authenticity and trustworthiness of writing produced by AI. In response, scientists at the University of California, Berkeley have created Ghostbuster, a sophisticated technique for identifying text written by artificial intelligence.

Methodology

Ghostbuster uses an innovative technique: it runs a document through a series of weaker language models, then performs a systematic search over combinations of their output features. By training a linear classifier on the selected features, it can determine whether a document is AI-generated. Notably, because Ghostbuster doesn’t need token probabilities from the target model, it can identify text produced by unknown or black-box models. The researchers also released three new datasets for benchmarking detection across different domains.

Figure 1: An outline of the Ghostbuster model training procedure. The researchers fed each document into a series of weaker language models to obtain token probabilities. Then, they ran a structured search over combinations of the model outputs and trained a linear classifier on the selected features. (Credit: Berkeley Artificial Intelligence Research (BAIR))
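The training procedure in Figure 1 can be illustrated with a toy sketch. This is not the authors’ code: the weak-model token probabilities are simulated with random numbers, the structured feature search is replaced by a few hand-picked summary statistics, and the labels are random, so it only demonstrates the shape of the pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_model_probs(n_tokens: int) -> np.ndarray:
    # Stand-in for per-token probabilities from a weaker language model.
    return rng.uniform(0.01, 1.0, size=n_tokens)

def featurize(doc_tokens: int) -> np.ndarray:
    # Ghostbuster searches over combinations of weak-model outputs;
    # here we just take a few fixed summary statistics instead.
    p1, p2 = weak_model_probs(doc_tokens), weak_model_probs(doc_tokens)
    logp1, logp2 = np.log(p1), np.log(p2)
    return np.array([
        logp1.mean(),           # avg log-prob under model 1
        logp2.mean(),           # avg log-prob under model 2
        (logp1 - logp2).var(),  # disagreement between the two models
        logp1.min(),            # most "surprising" token
    ])

# Toy dataset: label 1 = AI-generated, 0 = human (random here).
X = np.stack([featurize(200) for _ in range(100)])
y = rng.integers(0, 2, size=100)

# Linear (logistic) classifier trained by gradient descent on the features.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

With real weak-model probabilities and the paper’s structured feature search, this same linear-classifier stage is what reaches the reported in-domain scores.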

Performance and Comparison

Ghostbuster performed exceptionally well in assessments, outperforming competing detectors like DetectGPT and GPTZero by a wide margin with an in-domain classification score of 99.0 F1. It showed better generality over language models, prompting techniques, and writing domains. These astounding results demonstrate Ghostbuster’s dependability and its capacity to identify AI-generated material.

Ethics

The use of AI-generated text detection methods raises many ethical questions. Such models’ false positives, which mistakenly flag genuine human work as AI-generated, can have serious consequences. Prior research has revealed biases, such as the disproportionate flagging of writing by non-native English speakers as AI-generated. Ghostbuster’s improved performance and generalization help address these ethical issues: by delivering more accurate identification with fewer false positives, it represents both a technical and an ethical advance.

Challenges and Future Directions

The paper notes that there are still difficulties in identifying language produced by artificial intelligence, especially when dealing with hostile prompting and paraphrasing attacks. But Ghostbuster’s emphasis on full paragraphs or papers produced by language models offers a viable direction for further investigation. It is imperative to prioritize transparency and fairness in the creation and implementation of AI-generated text detection systems to guarantee impartial treatment and prevent unwarranted harm.

Constraints

Despite Ghostbuster’s outstanding performance, it’s important to recognize its limitations. The quality and diversity of the weaker language models utilized in the detection process can affect the efficacy of the system. Furthermore, adversarial strategies might develop and provide problems for the accuracy of the system. To overcome these restrictions and further expand the system’s capabilities, more research is required.

Credit: Tesfu Assefa

Conclusion

In summary, Ghostbuster is a noteworthy development in AI-generated text detection. Its strong performance, improved generalization, and reduced false-positive rate make it an effective tool for recognizing machine-written text across a variety of sectors. Continued research and development will be essential to overcoming adversarial attacks, further improving performance, and guaranteeing the ethical use of such detection tools. As AI-generated text proliferates, Ghostbuster offers a useful way to safeguard the reliability and trustworthiness of written material while keeping ethical concerns front and center.


Crypto AI Revenues Could Exceed $10 Billion by 2030

The integration of crypto incentives and blockchain technology with artificial intelligence (AI) could drive widespread adoption and advancement of decentralized AI and AI cryptos over the next decade, according to a new report by digital asset manager VanEck. Its timing couldn’t be better as we’ve been seeing an absolute explosion of AI-related cryptocurrency prices so far in 2024, the most recent rally coming on the back of OpenAI’s Sora launch.

In a recent in-depth report ‘Crypto AI Revenue Predictions by 2030’, VanEck analysts made several bullish predictions about the growth and the revenue potential of businesses operating at the intersection of crypto and AI. While any predictions should be taken with a grain of salt, considering where we’re at in the current crypto 2024 bull cycle, the report asks a lot of important questions and offers some surprising answers in the process. 

VanEck analyst Gabor Gurbacs has taken a controversial stand against AI, predicting its misuse will result in the biggest spam attacks the world has ever seen. Gurbacs sees Bitcoin as the only salvation, so read on to find out why. 

Public Blockchains Uniquely Positioned to Aid AI Progress

The VanEck report is extremely bullish on the benefits that blockchain can bring to AI. Its authors argue that AI and AI agents “provide the raison d’être” for blockchain technology. It’s a big statement, but it’s backed up by sound reasoning: blockchain networks like Bitcoin and Ethereum possess several key attributes that make them well-suited to address existing challenges facing the AI industry, most notably:

  • Transparency – Being public ledgers recording all transactions, blockchains enable oversight related to data usage, model ownership, etc. This supports trust and accountability.
  • Immutability – Records made on blockchains cannot be altered afterwards. This immutability lends integrity to data used for critical model training and testing.
  • Ownership – Tokens, NFTs, and smart contracts allow us to clearly define ownership rights related to data, models, and even model outputs.
  • Adversarial Testing Environment – The adversarial, incentivized environments of crypto networks that financially reward hacking and optimization force rigorous, real-world testing of systems.

The report projects that these properties of blockchain will speed up the adoption of, and trust in, AI over the next seven years.

Crypto Market Share of AI Could Reach Billions Annually

Applying economic and productivity growth assumptions to previous McKinsey research, VanEck predicts that the total addressable market (TAM) for global productivity gains enabled by AI automation will reach a staggering $5.85 trillion in 2030. Assuming 33% adoption by businesses worldwide, annual AI revenues could top $250 billion within the decade.

Moreover, the report forecasts crypto’s potential market share across major AI business categories:

  • Software: $6.27 billion
  • Infrastructure/Compute: $1.9 billion
  • Identity: $878 million
  • Safety/Compliance: $1.12 billion

Tallying these up, annual crypto AI revenues could approach $10.2 billion by 2030 in VanEck’s base case. Under more aggressive assumptions about adoption and market capture, AI cryptocurrencies’ stake could exceed $50 billion.
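VanEck’s base-case total is just the sum of the category estimates above, which is easy to verify:

```python
# Reconstructing VanEck's base-case arithmetic from the category
# estimates quoted above (figures in billions of USD).
base_case = {
    "Software": 6.27,
    "Infrastructure/Compute": 1.90,
    "Identity": 0.878,
    "Safety/Compliance": 1.12,
}

total = sum(base_case.values())
print(f"Base-case 2030 crypto AI revenue: ${total:.1f}B")  # ≈ $10.2B
```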

How blockchain helps AI solve identity issues 

Artificial intelligence is increasingly critical to the global economy, akin to an essential utility. Web2 giants like Amazon and Google dominate current AI infrastructure, while blockchain technology supports specific, high-demand needs with its decentralized approach, offering flexibility and customization for AI development. This scenario positions blockchain as a dynamic adjunct in the AI infrastructure market, similar to how ride-sharing platforms like Uber complement, rather than replace, traditional transport services.

The importance of secure AI identities is rising, with blockchain playing a key role in preventing Sybil attacks by establishing verifiable digital identities. This defense is crucial as AI applications extend into areas like autonomous vehicles and healthcare, where safety is paramount. 

Blockchain’s immutable records serve as reliable “proofs of safety,” essential for high-stakes accountability and compliance. Despite blockchain’s potential to revolutionize AI safety and identity verification, a significant portion of this market is expected to stay centralized, preferring established reliability and trust in sensitive sectors.

Bitcoin miners find surprising synergy with the AI sector

Bitcoin miners, traditionally focused on mining proof-of-work cryptocurrencies like Bitcoin, are diversifying into the AI sector due to their shared high energy consumption needs. These miners have historically invested heavily in energy infrastructure, often utilizing cost-effective but carbon-intensive power sources. This positions them uniquely to offer lower-cost energy solutions for AI’s backend infrastructure, contrasting with Big Tech’s move towards renewable energy and vertically integrated operations. 

As AI’s energy demands potentially outpace current projections, bitcoin miners’ cost advantage in electricity could become increasingly significant, prompting a shift towards providing high-margin AI services, particularly in GPU provisioning.

Companies like Hive, Hut 8, and Applied Digital are leading this transition, with some reporting substantial revenue growth from AI-related operations compared to traditional bitcoin mining. 

For instance, Hive’s AI operations are notably more profitable on a per-megawatt basis. However, despite the promising shift and potential for revenue diversification, bitcoin miners face challenges in scaling up for AI, including skills gaps in data center construction, the need for a specialized salesforce, and limitations imposed by network latency and bandwidth in remote locations. These hurdles could impede their pivot to AI, despite the opportunities in the sector.

Credit: Tesfu Assefa

Notable Blockchain Use Cases Emerging Across the AI Landscape

Public blockchains and crypto token incentives have already sparked solutions addressing several pressing needs for progress across the AI landscape:

Decentralized Compute/Infrastructure

Projects like Akash, Render and Bittensor provide decentralized cloud computing platforms and infrastructure to help supply scarce GPUs for AI model training and deployment. This helps to pull down current limitations around access, cost, security, and customization options.

Model Optimization

Protocols like Numerai use tokenized incentives to organize data science competitions aimed at building optimized models for tasks like quantitative finance or natural language processing. Crypto tokens reward the most accurate and effective models.

Data Integrity

Emerging zero-knowledge proof solutions from startups like MODA allow AI model owners to mathematically prove certain claims about data usage or model performance without revealing proprietary intellectual property or sensitive information. This supports copyright protections and model accountability.

Digital Identity

Initiatives like WorldID (with its Worldcoin cryptocurrency), controversially spearheaded by OpenAI CEO and AI thought leader Sam Altman, leverage blockchain and biometrics to establish verified digital identities firmly linked to real humans. As automation increases, reliably determining humanity could help ensure security for computer networks and systems.

Conclusion

In summary, crypto-based networks already demonstrate clear potential to overcome some of the hardest barriers holding back innovation and mainstream adoption in artificial intelligence. However, integrating these two exponentially advancing technologies comes with persistent technical and adoption challenges.

If solutions continue maturing at their current brisk pace, crypto and AI seem well positioned to drive tremendous value for one another over the next 7 to 10 years. But uncertainties and speculation still cloud the most aggressive growth projections put forth. According to VanEck, the base case for 2030 remains strong, but the roadmap to billions in annual revenue still requires some visionary bets on both technologies to pay off.


Universal Human Values and Artificial General Intelligence

The field of value alignment is becoming increasingly important as AGI developments accelerate. By alignment we mean giving a generally intelligent software system the capability to act in a way that is beneficial to humans. One approach to this is to instill AI programs with human values. 

However, the research in this field tends to focus on algorithms for maximizing metrics like pleasure or happiness. Much less attention has been devoted to the actual set of values they should follow.

I propose that evidence from world religions, traditional philosophy, evolutionary psychology, and survey research shows surprising agreement on basic human values. Abstracting from this work, I derive a five-tier system of values that can be applied to an AGI. 

Level 1 is the values of life, survival, and persistence. It evolves from a concern for living and reproducing. An agent acts to avoid destruction, seek out energy and pass on characteristics to a future generation. 

Level 2 contains truth, wisdom, and knowledge. It is about agents that value truth and the ability to understand and interact well with their immediate surroundings.

Level 3 is a bundle of three sets of values concerning oneself: freedom, temperance, and growth. These values affect the agent’s internal state and behavior. The ‘self’ in question might be a human being or an AI designed as an autonomous moral agent. 

Level 4 is a bundle of five values: empathy, authority, security, justice, and conformity. These are all concerned with group interaction. They apply to biological fellow beings or to multi-agent AI systems. 

Level 5 contains values for dealing with nature and the surrounding ecosystem as well as the universe beyond. A sufficiently powerful general intelligence could have a terrestrial and extra-terrestrial impact and so needs to be concerned with the larger environment.

Credit: Tesfu Assefa

The values are concentric – they start with fundamental metaphysical and epistemological assumptions and radiate outwards to be more inclusive in space, in time, and in variety.
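As a thought experiment, this concentric structure can be encoded as an ordered data structure, with an agent at a given tier accountable for its own tier and every tier beneath it. The representation below is purely hypothetical, not drawn from any existing implementation:

```python
# A hypothetical encoding of the five-tier value hierarchy described above,
# ordered from the innermost tier outward.
VALUE_TIERS = {
    1: ("life", ["life", "survival", "persistence"]),
    2: ("truth", ["truth", "wisdom", "knowledge"]),
    3: ("self", ["freedom", "temperance", "growth"]),
    4: ("group", ["empathy", "authority", "security", "justice", "conformity"]),
    5: ("environment", ["nature", "ecosystem", "universe"]),
}

def values_up_to(tier: int) -> list[str]:
    """Collect all values an agent at a given tier must account for."""
    return [v for t in range(1, tier + 1) for v in VALUE_TIERS[t][1]]

# An autonomous moral agent (tier 3) inherits the life and truth tiers too:
print(values_up_to(3))
```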

However there are a number of unresolved issues:

  1. Universality – The debate of moral relativism vs. moral universalism in which different values are emphasized or traded off.
  2. Hierarchy – The ranked order of the values in terms of their importance or goal relevance.
  3. Change – Ethics for a transhuman or posthuman society might necessitate alterations due to changes in the nature of humanity or other AIs.
  4. Human centricity – These values are human and created by our unique evolution and societal organizations.
  5. Consciousness – Agents may need to feel – to have qualia – in order to be morally responsible. This does not mean that an AGI cannot act morally.
  6. Implementation – This listing of said values does not describe the mechanisms by which they can be instantiated.

In light of these challenges and opportunities within the realm of value alignment in AGI, the upcoming Beneficial AGI Summit emerges as a pivotal platform. The BGI Summit, with its gathering of leading minds in artificial intelligence, ethics, and related fields, presents an unparalleled opportunity to delve deeper into these unresolved issues.

The universality of values, the hierarchy of ethical considerations, the evolving nature of ethics in a transhuman society, the human-centric viewpoint, consciousness, and the practical implementation of these values in AGI systems – all these topics are ripe for discussion at the BGI Summit. 

This event could mark a significant step forward in our understanding and ability to integrate these complex value systems into AGI, fostering a future where artificial intelligence complements and enhances human values and ethics. Join me at the BGI Summit to explore these topics. For more information and to register, visit bgi24.ai


NEOLAF: Introducing a Never-Ending Learning Framework for Intelligent Agents

The article “NEOLAF: A Neural-Symbolic Cognitive Architecture for Generalized Intelligence”  introduces an integrated neural-symbolic cognitive architecture aimed at modeling and constructing intelligent agents. Unlike traditional approaches such as pure connectionist or symbolic models, NEOLAF stands out for its unique features, including superior explainability, incremental learning, efficiency, collaborative and distributed learning, human-in-the-loop enablement, and self-improvement. This study highlights the advanced learning capabilities of the framework with a captivating experiment featuring a NEOLAF agent tasked with tackling challenging math questions from the MATH dataset.

NEOLAF serves a broad purpose in constructing intelligent agents, particularly self-improving intelligent tutor agents within adaptive instructional systems. Inspired by human cognitive development, NEOLAF combines the best features of both connectionist (ChatGPT, for example) and symbolic (SOAR, ACT-R) techniques to overcome the shortcomings of each paradigm. The framework’s unique benefits, including explainability, incremental learning, efficiency, collaborative learning, and self-improvement, make it a flexible tool for creating intelligent agents.

The methodology behind NEOLAF involves instantiating learning agents from a DNA-like starter kit, leveraging pre-trained large language models (LLMs) for foundational reasoning. Mirroring human cognition, NEOLAF agents operate on two cognitive levels: fast and slow. The KSTAR framework (Knowledge, Situation, Task, Action, Result) is presented as a depiction of the knowledge-experience duality, enabling agents to learn via ongoing, iterative, multitasking procedures.

NEOLAF agents exhibit two types of memory: implicit memory, in which offline knowledge injection fine-tunes the underlying model, and explicit memory, which stores the knowledge from each encounter via the KSTAR process. Just as humans consolidate memories during sleep, this dual-memory architecture lets NEOLAF agents consolidate knowledge.
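A rough sketch of what a KSTAR episode and the explicit-memory store might look like follows. The class and method names here are hypothetical illustrations, not taken from the paper’s code:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the KSTAR (Knowledge, Situation, Task, Action,
# Result) representation and an agent's explicit episodic memory.

@dataclass
class KStarEpisode:
    knowledge: str  # what the agent knew going in
    situation: str  # context the agent found itself in
    task: str       # what it was asked to do
    action: str     # what it actually did
    result: str     # outcome, used for later consolidation

@dataclass
class ExplicitMemory:
    episodes: list[KStarEpisode] = field(default_factory=list)

    def record(self, episode: KStarEpisode) -> None:
        self.episodes.append(episode)

    def consolidate(self) -> list[KStarEpisode]:
        # Offline "sleep" phase: successful episodes become candidates
        # for implicit-memory injection (e.g. LLM fine-tuning data).
        return [e for e in self.episodes if "solved" in e.result]

memory = ExplicitMemory()
memory.record(KStarEpisode("algebra basics", "AIME practice problem",
                           "solve equation", "applied substitution", "solved"))
print(len(memory.consolidate()))  # 1
```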

Credit: Mindplex

Building on related work in the chain of thought reasoning, NEOLAF incorporates recent advances in LLM, reinforcement learning (RL), multitask learning, and planning to conduct tasks in the KSTAR process. The study describes a preliminary implementation of NEOLAF for a math problem-solving agent and evaluates its effectiveness against other models, like ChatGPT. The experiment uses difficult questions from the AIME and USAMO Math Competitions to assess different performance measures.

Beyond its application to math problem-solving, NEOLAF is envisioned as a cognitive architecture for an agent-based learning environment (Open Learning Adaptive Framework – OLAF). OLAF creates a dynamic and interactive learning environment by integrating three types of agents: learners, human teachers, and AI agents.

In summary, the NEOLAF architecture combines system-1 LLM capabilities with system-2 explicit reasoning and external services, which is a breakthrough approach to AI. NEOLAF solves significant issues with conventional methods by utilizing a dual-memory architecture and the KSTAR representation for problem-solving. Beyond addressing mathematical problems, the framework may be used to develop co-habitat ecosystems called BotLand and multimodal reasoning, which will promote interaction and evolution between intelligent agents and humans.  NEOLAF emerges as a lightweight and continually improving AI model, offering a promising alternative to current leading LLMs that are expensive to train and maintain.
