Cryptocurrency markets change quickly, so it is important for investors to monitor key metrics that show market trends, performance, and sentiment.
Bitcoin is king, and where it goes, the market usually follows. These charts provide essential data to understand Bitcoin’s market position, performance, and future trends. Let’s dive in.
Bitcoin/USD
Everything in crypto starts with the Bitcoin chart, measured against the de-facto world currency (for now?): the U.S. Dollar. This is the first and still the most important chart in crypto, and you can find it on sites like CoinMarketCap, CoinGecko, and your favorite exchange’s BTC trading page.
Determine the time range you’d like to review – with Bitcoin, it’s better to zoom out to steady your nerves; the four-year halving cycle covered below proves this. Dial into Crypto Twitter to understand how recurring events – Chinese New Year, Christmas, U.S. tax season, U.S. elections, summer holidays and others – can impact the price of BTC each year.
Bitcoin Dominance (BTC.D)
Bitcoin Dominance (BTC.D) represents Bitcoin’s share of the total cryptocurrency market capitalization. When BTC.D rises, it signals that Bitcoin is outperforming altcoins, often during market corrections when investors prefer safer assets.
A falling BTC.D usually indicates increased interest in altcoins, particularly in bullish markets as traders chase higher returns. For example, during the 2017 ICO surge, BTC.D dropped to nearly 32% due to an altcoin boom.
Tracking BTC.D helps traders understand market dynamics and adjust their portfolios accordingly, balancing exposure between Bitcoin and altcoins based on current trends and sentiment.
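Mechanically, BTC.D is just one market cap divided by another. A minimal sketch in Python, using made-up figures (live numbers would come from a market-data site or API such as CoinGecko’s):

```python
def btc_dominance(btc_market_cap: float, total_market_cap: float) -> float:
    """Bitcoin's share of the total crypto market cap, as a percentage."""
    return 100 * btc_market_cap / total_market_cap

# Illustrative figures only (USD) - not live data.
btc_cap = 1_900_000_000_000      # Bitcoin market cap
total_cap = 3_400_000_000_000    # total crypto market cap

print(f"BTC.D = {btc_dominance(btc_cap, total_cap):.1f}%")
```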
Bitcoin mining hash rate
The Bitcoin mining hash rate measures the computational power used by miners to process transactions and secure the Bitcoin network. It shows how many calculations are performed every second to solve the mathematical problems required for mining new Bitcoin blocks.
A higher hash rate makes the network more secure, as it becomes harder for bad actors to control or attack it. Factors that influence the hash rate include mining equipment efficiency, electricity costs, and Bitcoin’s price. Over the years, the hash rate has grown significantly, due to advancements in technology and increased participation from global mining operations.
On-Chain Metrics (Wallet Balances & Transactions)
On-chain metrics track blockchain activity by analyzing wallet balances and transaction histories. Wallet balances reveal how much cryptocurrency users are holding, helping identify trends like accumulation or selling.
‘Transaction volume’ measures activity on the network – how many transactions are happening, and how much value they move – showing how actively the cryptocurrency is being used. Higher transaction volumes can signal strong market activity, while lower volumes may suggest reduced engagement.
These metrics provide a window into a cryptocurrency’s network health, helping investors make better decisions based on real blockchain data instead of relying solely on price charts or market speculation.
Stablecoin Supply Ratio (SSR)
The Stablecoin Supply Ratio (SSR) compares Bitcoin’s market capitalization to the total market cap of all stablecoins. It shows how much buying power stablecoins have relative to Bitcoin. A low SSR means there are more stablecoins available, indicating strong potential buying power that could push Bitcoin’s price up.
A high SSR suggests fewer stablecoins are in circulation. This means less capital is available for Bitcoin purchases, a factor that could limit price movement. Investors use SSR to assess market liquidity and anticipate possible price movements based on how much money is ready to flow into Bitcoin from stablecoins.
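The ratio itself is simple: Bitcoin’s market cap divided by the combined stablecoin market cap. A sketch with illustrative (not live) figures:

```python
def stablecoin_supply_ratio(btc_market_cap: float, stablecoin_market_cap: float) -> float:
    """SSR: Bitcoin market cap divided by total stablecoin market cap.
    Lower values mean relatively more stablecoin buying power on the sidelines."""
    return btc_market_cap / stablecoin_market_cap

# Illustrative figures only (USD).
ssr = stablecoin_supply_ratio(1_900_000_000_000, 200_000_000_000)
print(f"SSR = {ssr:.1f}")
```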
Bitcoin Volatility Index (Crypto VIX)
The Bitcoin Volatility Index (Crypto VIX) measures how much Bitcoin’s price is expected to fluctuate over a set period, typically 30 days. It’s calculated using data from Bitcoin options trading. A high volatility index means Bitcoin’s price could change significantly, signalling a risky and uncertain market. A low index suggests the market is stable and smaller price movements are expected.
Traders and investors use this index to gauge market sentiment, adjust their portfolios, and make informed trading decisions. Understanding Bitcoin’s volatility helps manage risks and spot trading opportunities in the unpredictable cryptocurrency market.
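The published index is derived from options data, which can’t be reproduced in a few lines; as a rough proxy, annualized realized volatility can be computed from daily closing prices. A sketch, with illustrative prices only:

```python
import math
import statistics

def annualized_realized_volatility(daily_closes: list[float]) -> float:
    """Annualized realized volatility from daily closing prices.
    Note: the options-implied Crypto VIX is computed differently; this is
    only a simple historical-price proxy."""
    log_returns = [math.log(b / a) for a, b in zip(daily_closes, daily_closes[1:])]
    # Crypto trades every day, so annualize with 365 periods.
    return statistics.stdev(log_returns) * math.sqrt(365)

# Illustrative prices only - not real market data.
closes = [96_000, 97_500, 95_200, 98_100, 99_000, 97_800, 100_500]
print(f"annualized vol ~ {100 * annualized_realized_volatility(closes):.0f}%")
```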
Bitcoin ETF inflows and institutional investments
ETF inflows and institutional investments play a crucial role in the cryptocurrency market. When large institutions invest in Bitcoin through ETFs, it indicates growing trust in digital assets. ETFs allow traditional investors to gain exposure to Bitcoin without directly holding it.
For example, BlackRock’s Bitcoin ETF has attracted billions of dollars, boosting market confidence. Higher ETF inflows often signal strong demand, which can push Bitcoin’s price upward. Investments from major firms like Fidelity and Grayscale show long-term interest. Monitoring these inflows helps investors gauge market sentiment, as increased institutional participation often leads to higher liquidity and reduced market volatility.
Bitcoin four-year cycle
The Bitcoin four-year cycle is a pattern based on Bitcoin’s halving events, which occur approximately every four years. During each halving, the reward for mining Bitcoin is reduced by half, limiting new supply. This scarcity often leads to price increases due to higher demand and lower availability.
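The halving schedule itself is easy to compute: the subsidy started at 50 BTC and halves every 210,000 blocks. A simplified sketch (Bitcoin Core actually does this in integer satoshis with a bit shift; floats are used here for readability):

```python
def block_subsidy(height: int) -> float:
    """Bitcoin block subsidy in BTC at a given block height.
    The reward started at 50 BTC and halves every 210,000 blocks."""
    halvings = height // 210_000
    if halvings >= 64:        # after 64 halvings the subsidy is zero
        return 0.0
    return 50 / (2 ** halvings)

# The first four halvings: 50 -> 25 -> 12.5 -> 6.25 -> 3.125 BTC
for h in (0, 210_000, 420_000, 630_000, 840_000):
    print(h, block_subsidy(h))
```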
The cycle includes four phases: accumulation, uptrend, distribution, and downtrend. Prices typically rise after halving events, followed by a peak, profit-taking, and eventual correction. Understanding this cycle helps investors anticipate potential market movements. By studying past cycles, traders can make better investment decisions, identifying favorable entry and exit points for long-term profitability.
Bitcoin Exchange Flows
Bitcoin Exchange Flows charts let traders monitor the flow of Bitcoin into and out of exchanges, providing insight into market sentiment and potential price movements: traders can gauge whether investors are accumulating or distributing their holdings. Increased inflows often indicate that traders are preparing to sell, which can lead to downward price pressure, while outflows suggest accumulation and potential bullish trends.
Additionally, these charts help identify significant shifts in trading volume, which can signal upcoming volatility or trend reversals. Understanding exchange flows thus empowers traders to make informed decisions based on market dynamics and investor behavior.
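The core signal here is net flow: coins arriving at exchanges minus coins leaving them. A minimal sketch with made-up daily figures:

```python
def net_exchange_flow(inflows_btc: float, outflows_btc: float) -> float:
    """Net BTC flowing onto exchanges. Positive values (more coins arriving
    than leaving) are often read as sell pressure; negative as accumulation."""
    return inflows_btc - outflows_btc

# Illustrative daily flow figures in BTC - not real data.
for day, (inflow, outflow) in enumerate([(12_400, 9_800), (8_100, 11_300)], start=1):
    net = net_exchange_flow(inflow, outflow)
    label = "inflow-dominant (bearish lean)" if net > 0 else "outflow-dominant (bullish lean)"
    print(f"day {day}: net {net:+,.0f} BTC -> {label}")
```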
Altcoin Season Index
Bitcoin doesn’t just go up indefinitely. Eventually, it takes a breather and lets the rest of the market share in the spoils, fuelling massive growth across the many altcoins.
The Altcoin Season Index tracks how well altcoins perform compared to Bitcoin over a 90-day period. If 75% or more of the top 50 altcoins outperform Bitcoin, it is considered an ‘Altcoin Season’: a strong altcoin market.
If fewer than 25% outperform Bitcoin, it’s ‘Bitcoin Season’, meaning Bitcoin is dominating the market. This index helps investors see when altcoins might offer better returns, guiding them towards a trading strategy that favours altcoins. By keeping an eye on this index, traders can adjust their portfolios as the market trends towards or away from Bitcoin, maximizing potential profits during favorable market conditions.
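The index’s core calculation can be sketched in a few lines, using hypothetical 90-day returns (the real index tracks the top 50 coins with live data):

```python
def altcoin_season_index(btc_return_90d: float, alt_returns_90d: dict[str, float]) -> float:
    """Percentage of tracked altcoins that outperformed Bitcoin over 90 days.
    Readings of 75%+ are commonly called 'Altcoin Season'; 25% or below,
    'Bitcoin Season'."""
    outperformers = sum(1 for r in alt_returns_90d.values() if r > btc_return_90d)
    return 100 * outperformers / len(alt_returns_90d)

# Hypothetical 90-day returns as fractions - illustration only.
btc = 0.20
alts = {"ETH": 0.15, "SOL": 0.45, "ADA": 0.10, "DOGE": 0.60}
print(f"{altcoin_season_index(btc, alts):.0f}% of tracked altcoins beat BTC")
```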
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
This article is about video so let’s start with a video. A video is worth 24,000 words per second. Look at this (just look at it!) –
This video was generated by AI guided by a loving human hand. It skips plot or characterisation, instead being absolutely gorgeous. It’s a feast for the eyes, isn’t it? Regardless of whether my philosophical arguments in this essay are true, false, or seditious, the video’s delicious and eyeluscious. That’s the only thing I came here to say.
Videos like that have been bubbling up since October of 2024; with no great hunting I found 50 similar channels. What flow of technical progress has brought us to this point? What cultural waters merged with tech to form bubbling rapids?
Video-generation is just the right technology for its time, because in the beginning was the web. It was a way of exchanging text until the 18th of July 1992, when someone uploaded a picture of a girl-rock band: the first picture on the web. The next year, 1993, the first video was streamed over the internet (at a framerate of 2 frames-per-second, because that was all that was possible).
Bandwidth broadened and video encoding smallened. Both Google Video and Youtube found their berth in this broad stream at the same time in the first 45 days of 2005. Video gobbled up the attention we used to give to text. (“Television kills telephony in brothers’ broil. Our eyes demand their turn.”) Video ate the internet. The shift from reading-the-web to watching-the-web is something I remember quite well. (That is not a good thing: I would prefer if ye were all reading books, but people don’t listen to me about such things.)
Something else was going on in the meantime. Culture changed in such a way that stories with morals lost credibility, and vibes and æsthetics swelled into the vacuum. When I was a child, I had a book about Robin Hood printed in England in maybe the 1940s. The heroes were brave, noble, handsome, unwavering, and tall, and the villains warty and swarthy and cowardly and mean (even foreign). Anglican moral clarity was the pillar of the genre. Panavision screens were filled with tales of good & evil, queen & country.
That all crumbled in the modernist era (1901-1945). Britain’s control crumbled and the globe and the arts swirled with a mist of hostile Celtic relativism. James Joyce and Flann O’Brien viewed claims to objective truth as an arm of civilised oppression (they even put pages of their books towards attacking Euclid – the fellah was a bit too sure of himself.) In philosophy, existentialists and nihilists told their readers to confront what they liked to call ‘the absurd’. Camus & company emphasised that we cannot escape the absurd. We must enjoy human life with its absurdity, rather than trying to make it make sense.
Hold that thought. Let’s pause for another video –
How does that video relate to the rational and the absurd? Oscar Wilde said, “the telling of beautiful untrue things is the proper aim of Art.” The AI-generated videos mushrooming up on Youtube these past four months care about beauty and nothing else.
After Camus came the internet age. Thousands of different idiotlogies shouting over each other. People telling outright lies and hiding behind anonymity. Powerful and organised spies pulling strings and adding malicious chaos on top of the random kind. Ah god the rational never had a chance did it?
I’m not trying to say that we lost track of truth and must get it back. I am saying that we lost track of truth and stopped caring.
The next landmark I gesture at on our tour is in March 2020. The heroes of that moment, probably teenagers, launched Æsthetics wiki. They appointed themselves entomologists of vibes. It was the natural thing to do in the post-truth era. Young people today turn their attention first and foremost to the æsthetic – that is more important than the message or the moral. The post-truth world is the playground of the absurd.
Those were the conditions: a video-first internet wherein nobody cares about anything except appearances. Then, four months ago, neural nets that can generate video were born. They were dream-machines born into a dream, looking for gorgeous entertainment. Just look at this –
Some channels are what I call ‘vibes reels’, devoted to devotchas and dreamscapes, just floating along. Nothing else happens and nothing else needs to. Steampunk vibes are very popular. (Aside: I am forced to conclude that goggles replaced handheld phones in Steampunk Universe. The ubiquity of phones here is the ubiquity of goggles there. No other explanation for why so many goggles.) 1950s Americana and the look of 1950s American sci-fi are very popular. Those two are far too popular really, so the ’50th century Steampunk’ that splashed us into the article is commendable. The channel ‘Cyborg Nation 3026‘ also is admirable in exploring more unique æsthetics, but its quality is lower.
The second genus is Fake Trailers. Fake trailers re-imagine pop culture films and computer games as (normally 1950s-style) trailers. ‘Abandoned Films‘ claims (correctly, by my little detectiving) to be the original trailerfaker. Then came Cyberithm, Warden Cinematics, and SPLITMIND FILMS.
There are AI-generated music videos for AI-generated music. The ‘Gamesongs AI‘ channel notes that “All music was created with Udio Ai”. A deeper discussion of musicmaking neural nets gets a bit beyond the spectroscope of today’s lesson, but (remember this phrase:) just look at it!
The limitations of AI video (and it’s January 2025) are obvious: the characters can’t talk, and can’t complete real actions. Anything more complex than taking their bronze steampunk goggles on/off is glitchy. If a beautiful sky-pirate presses a button, her finger might not hit the button.
These limitations mean neural nets can’t currently make a video with a coherent plot. Is this really a limitation? One of the 10 highest-grossing films of 2024 was Beetlejuice Beetlejuice, and that doesn’t have a coherent plot either.
I am confident these problems will get better. Already we see Kling, one of the leading companies in the field, releasing neural nets that go from movies to talkies:
Let’s be still for a paragraph or two. This past year, I have had a grand old time playing with AI image-generators. I start with an idea in my mind’s eye, and then the prompt goes through several versions, and Stable Diffusion has a dozen dials to twiddle. It takes 20 minutes to get all that right, and the game-of-chance begins: a few come out bad, most acceptable, and every so often there is something wonderful. It’s fun, and it feels good to hit the jackpot. The creation at the end impresses people.
I was never inclined to learn Photoshop and Krita, but with neural nets (free and open-source ones) I produce images that impress people, and have fun doing it. The tall towers where visual artists lived were brought down to my slum and I can make art.
Video creators use either text-to-video (the video is made from a prompt) or image-to-video. I see five AI models mentioned in the video descriptions and comments:
Imagine Art Films released a non-fiction, discursive video January 9th comparing the models. They call Luma the worst. Kling and Sora are very good. Runway has high quality, but the videos lack motion (which is good for some cases, like an establishing-shot.)
Video generation is about three or four years behind images. But filmmaking (the unintelligent sort) was in much higher towers on the hills than image-making. Spending five million to make a feature film was considered frugal. Now we have Youtubers making video – with love – for a few thousand dollars, and it looks sumptuous my dear. Without these tools, the art never would have been made.
At the same time I am writing this, a wildfire is burning down Hollywood. Isn’t that interesting? Some people fret that AI-video-generation might end Hollywood. But maybe I want the end of Hollywood, which I view as an arm of civilised oppression. Below these short videos someone always comments: “I would love to see a feature-length version”. That shows an appetite for destruction.
The limitations of generative AI (like fingers or whatever) are fixable. Let’s talk about the limitations of Hollywood: 99% of people can never get together the resources to make a film. Massive budgets are required; only the stories of the rich can be told. When that much money is on the line, studios become risk-averse, preferring capeshit and remakes to being experimental. Of the top 40 most popular movies of 2024, 38 were from one country (2 from Japan): the country with the money.
Reduce the resources required to make a film ten-thousand-fold, and see different kinds of films: films made with less capital and more love. Entertainment can be more varied, more customised. Films from a million GPUs – films about space-squid, the Fenian Cycle, the psychogeography of Cricklewood – can create enough luminous intensity to bring down Hollywood. Generative AI excels at being weird; Hollywood was horribly normal. Hollywood used to excel at making things look polished, but they lost that edge four months ago.
Meta VP Connor Hayes said he expected a future where AI bots will be as relevant to social platforms as human users. I don’t know what he was expecting, but what he got was a backlash that forced the company to take down AI-driven profiles.
The Agentic Web has been surging forward recently, and forward-thrust always brings reaction. We have seen AI agents given crypto wallets to trade with and Twitter accounts to pump their portfolios to audiences who are more willing to embrace the chaos. The promulgation of AI social media profiles on mainstream platforms was always going to happen. Turing tests are becoming ever more severe – for humans.
What’s the purpose of this? To create a ‘more immersive digital future’? To populate dead metaverses? Really the business goal is to use focused AI models with dedicated training to unlock new forms of user engagement, and to capture human data more precisely and vividly than ever before. Creating interactive brand experiences (ugh!) and, in gaming terms, ensuring the lobby is always full for whatever their infinitely diverse user base desires. That, then, increases revenue per hour of engagement and attention; it feeds the bottom line.
If that’s the plan, it sure is expensive. It costs $1000 for OpenAI’s o3 model to respond to a single query. That’s just about enough money to keep one human sheltered, fed, and entertained for a month – even in developed countries. Is one AI query worth more than a month of human cognitive output? No, of course it’s not. Not yet anyway.
But with economies of scale the math changes. Efficiency will grow and grow over time. At what point do bots become worthwhile ways of engaging users? At what point is it cheaper to keep this AI cognitive output alive than to keep a human alive? You see the logic of surveillance capitalism in full force and its grave conclusion, the milling of the human experience into fine dust – and no one seems to think it will be profitable to hit the brakes. Everyone is celebrating the advance of the end, hopeful of the potential prosperity it will bring. It’s the old lie wrought new: technology is a labour-saving device that will make us all work less. Wasn’t true then, isn’t true now. More productive perhaps, but not for our own ends.
‘Dead internet theory’ has been a meme conspiracy for nearly half a decade. The theory states that most internet activity we see today is merely the product of artificial intelligence bots put in place to distribute social propaganda and swerve our thinking towards the ends of their creators.
Like all good conspiracy theories, it was nonsense with a shred of truth. Like all great conspiracy theories – the truth often marches towards it. It predicted the potential future and assumed it was already true. In the years since the theory emerged, the internet has become steadily more suffused with agents and bots of every shade and stripe. With LLMs becoming so advanced as to be (nearly) indistinguishable from human activity, it was only a matter of time before companies began using these agents both overtly and covertly to boost their end goals. In Meta’s defence, its AI agents are labelled and completely transparent in their deployment.
The ones you know about, of course.
It is now certain – and known by the tech-literate – that AI agents are stealthily deployed among us. They fill up comment sections, Twitter feeds, and they fill your Grandma’s news feed with salacious stories about the baleful effects of immigration. They may scream that everyone on the political right is a fascist, or the left a communist. They may suggest certain wars are right to happen. They have a voice, but we don’t know why they say the things they do, or who they’re working for. The blaze of stimuli has turned into a ceaseless torrent of language by unsleeping agents with unwavering viewpoints who don’t care about you or the society they live in, because they don’t feel anything at all.
It’s all fun and games until we lose control of the narrative, until the bots begin lying and going off script, until it’s their voices that are heard, not yours. Surely you, tech-literate consumer of frontier tech literature with an active stake in the abundant future, wouldn’t be fooled by such memetic babbling. Until the day you are, and you find yourself nodding along to the advice of an LLM influencer, or falling in love with a girl who never existed.
The cryptocurrency industry has always walked a fine line between transparency and privacy. Public ledgers like Bitcoin ensure transparency, but the lack of privacy poses challenges for individuals and enterprises alike. It was therefore a welcome signal when U.S. courts threw out OFAC’s sanctions against Tornado Cash, a mainstay privacy “mixer” protocol. Privacy is back in vogue in 2025, so what’s next?
Enter Fully Homomorphic Encryption (FHE)—a cutting-edge cryptographic solution that can reshape how privacy and data security are handled in crypto.
FHE has the potential to address some of crypto’s most pressing privacy and scalability challenges. FHE can run computations on encrypted data, and this unlocks use-cases previously deemed impractical – such as private smart contracts, confidential DeFi transactions, and secure voting in DAOs.
Let’s dive into how FHE works, its advantages over existing solutions, and how Web3 projects like Zama and Fhenix are applying this revolutionary technology to the crypto space.
What is Fully Homomorphic Encryption (FHE)?
In cryptocurrency, privacy and security are paramount, but they often come at a cost to usability or performance. FHE offers a novel way to maintain privacy without compromising functionality, because – unlike traditional encryption, which requires decrypting data for processing – FHE allows operations directly on encrypted data.
For example, imagine a DeFi protocol that calculates how much a user could earn by yield farming with their wallet. It would be desirable to keep the wallet’s balances private. Fully Homomorphic Encryption can run yield computations on encrypted wallet balances, keeping the user’s financial data private while still enabling the software to work with that data. Once the FHE computation has run, the output matches what the operation would have produced on plaintext data, but end-to-end security was maintained.
How FHE Works in Crypto
FHE works by using complex mathematical operations to encode data such that only authorized parties can decrypt and view results. Here’s how it applies to crypto-specific use cases:
Data Encryption: Wallet balances, transaction details, and smart contract states are encrypted using a public key.
Computation on Encrypted Data: The blockchain, or a decentralized application (dApp), performs necessary computations (e.g., token swaps, staking rewards) without decrypting the data.
Decryption of the output: The user or authorized party decrypts the result using their private key, revealing the final outcome (e.g., the amount of rewards earned).
This approach keeps all sensitive data encrypted during processing, ensuring no one—including validators or miners—can access private details.
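Full FHE is mathematically heavy, but the encrypt → compute → decrypt workflow above can be illustrated with a toy partially homomorphic scheme. The sketch below implements textbook Paillier encryption, which is only additively homomorphic (real FHE schemes support arbitrary circuits) and uses deliberately tiny, insecure parameters; it exists purely to show that a valid computation can be performed on ciphertexts alone:

```python
import math
import random

# Textbook Paillier: additively homomorphic. Toy parameters - NOT secure,
# and not full FHE; purely an illustration of computing on ciphertexts.
p, q = 1009, 1013                 # real deployments use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)      # Carmichael function lambda(n)
mu = pow(lam, -1, n)              # valid because we use g = n + 1

def encrypt(m: int) -> int:
    """Enc(m) = (1 + n)^m * r^n mod n^2, with r random and coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so the sum is computed without ever decrypting the inputs.
a, b = encrypt(123), encrypt(456)
print(decrypt(a * b % n2))  # -> 579
```

No party holding only the ciphertexts `a` and `b` learns anything about 123 or 456; only the private-key holder can decrypt the result, which is exactly the three-step flow described above.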
How FHE Improves on zk-SNARKs and zk-STARKs
Zero-knowledge (zk) proofs, such as zk-SNARKs and zk-STARKs, have been instrumental in enhancing blockchain privacy. They allow users to prove the validity of a statement (e.g., ownership of funds) without revealing the underlying data. However, zk-proofs are limited to verification tasks and are not suited for general-purpose computations on encrypted data.
Fully Homomorphic Encryption takes privacy a level beyond zk-proofs by enabling arbitrary computations. Here’s how FHE improves on zk-tech:
Generalized Computation: While zk-proofs specialize in verifying specific claims, FHE supports complex computations, such as executing encrypted smart contracts.
Privacy Across Layers: FHE provides privacy for both on-chain and off-chain processes, whereas zk-proofs are primarily limited to specific use cases like transaction anonymity.
Reduced Interactivity: zk-proofs often require interactive proof-generation, whereas FHE computations are non-interactive, making them more scalable for decentralized environments.
Benefits of FHE in Crypto
Privacy-Preserving Smart Contracts
Smart contracts are the backbone of DeFi, but their transparency can be a double-edged sword. FHE enables the execution of private smart contracts, where all inputs, states, and outputs remain encrypted. For instance, a private lending protocol could assess borrower creditworthiness without exposing sensitive financial data.
Confidential Transactions
While zk-proofs already allow for confidential transactions (e.g. Zcash, Tornado Cash), FHE expands this capability by enabling additional computations. For example, an FHE-based DeFi aggregator could send trades across multiple liquidity pools without unmasking the user.
Decentralized Identity (DID)
FHE can enhance decentralized identity systems by keeping identity data encrypted while still enabling verifiable computations on it. This ensures privacy during authentication processes: for example, for adult dApps, the system could verify that someone is over 18 without needing to know their date of birth or any other personal information.
Regulatory Compliance
With regulators increasingly scrutinizing the crypto industry, FHE allows platforms to provide compliance-ready solutions without sacrificing user privacy. For example, exchanges could perform anti-money laundering (AML) checks on encrypted user data, ensuring compliance while safeguarding user identities.
Secure Multi-Party Computation (MPC)
FHE simplifies secure multi-party computation, a process essential for activities like DAO voting and collaborative audits. Participants contribute encrypted inputs, and computations are performed on those without revealing individual input data.
Biggest FHE Projects for 2025
Zama: Bridging FHE and Blockchain
Zama is a trailblazer in bringing Fully Homomorphic Encryption to real-world applications, including blockchain and crypto. Their goal is to make FHE accessible to developers through optimized tools and libraries.
Concrete Framework: Zama’s open-source ‘Concrete’ library simplifies the integration of FHE into decentralized applications. For example, developers can use this framework to create private smart contracts without needing advanced cryptography expertise.
DeFi Use Cases: Zama is actively exploring how FHE can enhance privacy in DeFi. Imagine yield optimizers like Yearn Finance performing encrypted calculations to generate optimal returns without exposing user balances or strategies.
Performance Optimization: Zama is addressing one of FHE’s biggest challenges—computation overhead—with hardware acceleration and mathematical optimizations.
Fhenix: Privacy Meets Scalability
Fhenix takes Fully Homomorphic Encryption a step further by applying it directly to blockchain architecture. Their mission is to create privacy-preserving, scalable solutions that address the main blockchain limitations.
Encrypted Smart Contracts: Fhenix enables private smart contracts. Developers can use this to build dApps that process sensitive data securely. For example, a payroll dApp could compute salaries based on encrypted work hours without revealing employee data to the employer.
Layer-2 Scalability: Fhenix uses FHE to improve layer-2 scalability. Encrypted transactions are bundled and processed off-chain, reducing blockchain congestion while maintaining privacy.
Privacy-First DAOs: By integrating FHE, Fhenix supports confidential DAO voting and decision-making processes, ensuring member privacy without sacrificing transparency.
The protocol has a ton of other use cases, such as MEV protection and blind auctions.
Challenges and the Road Ahead
For all its promise, Fully Homomorphic Encryption still faces challenges that need to be addressed for widespread adoption in crypto:
Computation Overhead: FHE operations are resource-intensive, and can be slower than traditional methods. Zama and Fhenix are working to optimize performance, but further advancements are needed.
Key Management: Secure and user-friendly key management is critical for FHE adoption in crypto wallets and applications.
Interoperability: Standardization across different FHE schemes is needed to ensure compatibility across the broader blockchain ecosystem.
Developer Adoption: Making it easy to integrate FHE tools into dApps is crucial for fostering adoption.
Conclusion
Fully Homomorphic Encryption represents a paradigm shift in crypto privacy and security. By enabling computations on encrypted data, FHE empowers developers to build complex privacy-preserving applications that were previously impossible.
Indeed, as we age, more and more of these biological units develop all sorts of problems. Our minds and bodies increasingly lose the vitality and resilience of our youth. Dysfunction in lower-level biological units leads to visible sicknesses and decline in higher level units, and, ultimately, to death.
Whenever we humans reflected on this sorry trend of general deterioration in our bodies as we age, it was natural for us to wonder, also, what it would take to halt that trend.
In bygone days, some fanciful tales imagined that a single command from a powerful deity could put an end to our aging. Or that a life lived in some kind of cosmic harmony or absolute moral purity would allow a yogi or saint to escape the normal ravages of the passage of time.
In more recent times, the pendulum has swung to the other extreme. Forget any ideas that a single intervention might reverse all aspects of aging, say these critics. Moreover, forget any ideas that a series of different interventions might combine to do the trick. That’s because aging is overwhelmingly complicated.
Want to modify a metabolic pathway to reverse a given type of biological damage? That will surely have adverse side-effects, say the doubters. Each gene in our DNA is involved in multiple different activities; a genetic change to avoid one kind of damage in one subsystem will surely cause new kinds of damage in other subsystems. And so on.
In this view, the comprehensive solution to aging lies beyond human ability. There are far too many moving parts for any series of interventions to have a lasting beneficial effect. Therefore, we should give up any hope of curing aging.
1, n or ∞
Given the above context, I propose a three-way classification of all theories for how aging can be solved. The three categories of theory can be called singular, plural, and infinite.
Theories in the infinite category despair about the possibility of humans being able to subdue all the many areas of damage that occur throughout the body over time. In this view, if some types of damage are reduced, it will be at the cost of harming other aspects of human biology. Trying to avoid both sets of harm simultaneously will just result in yet other aspects of our metabolism being pushed over the edge. And so on.
These theories split further into two:
Fans of the idea of a forthcoming beneficial artificial superintelligence (ASI) expect that what humans are unable to accomplish will nevertheless be within the extraordinary capabilities of an ASI. In this case, the solution to aging requires accelerating the advent of such an ASI.
There’s an even more pessimistic subclass, which holds that aging is so difficult that not even an ASI could solve it; it would be like expecting an ASI to move a spaceship faster than the speed of light.
Theories in the plural category maintain, instead, that a careful mixture of different damage repair interventions will in due course succeed in undoing biological damage throughout the entirety of the human body – before that damage causes serious harm – without resulting in unchecked new damage elsewhere in the body.
Plural theories are sometimes described as an engineering approach. It’s similar to the task of building a large suspension bridge: you need to get lots of separate things right, or the bridge will fall down, or sink under its own weight, or sway catastrophically when certain winds blow. It’s similar to the task of heavier-than-air powered flight: you need to solve the subtasks of take-off, steering, landing, and a sufficiently light engine delivering adequate thrust. Yes, you need to get lots of separate things right, with wise choices between trade-offs, but that’s what the discipline of engineering enables.
Thus, you may hear about a grand project to “solve the seven deadly sins of aging”, referring to seven named types of biological damage at the molecular, cellular, and extracellular levels. Or about projects to solve each of the 12 ‘hallmarks of aging’.
Theories in the singular category claim there is a greater unity in the phenomenon of aging. Despite the appearance of a large variety of different sorts of age-related biological damage, these theories propose that there is a single underlying mechanism (for example, an epigenetic clock), which, if altered, would be sufficient to reverse and solve aging.
If that is true, it sounds like good news, for those of us who wish our loved ones (and ourselves) to be able to keep on living with high vitality for many decades (perhaps even centuries) to come.
But what grounds are there for believing such an idea? That question takes us to a study of comparative evolution.
Biology and indefinite youth
We’re used to seeing creatures grow old and frail. That’s what happens to our pet cats and dogs, to horses, to mice and rats in laboratories, and, yes, to us humans.
In fact there’s a regular pattern to the growth of this frailty. If frailty is measured by the likelihood of an animal dying from any cause within the next fixed period of time (for example, in the next year), this frailty rises exponentially as the animal becomes older. Thus humans aged 60 are about ten times as likely to die in the next 12 months as they were at age 35. And humans at age 85 are ten times more likely again. (These three probabilities are roughly one in a thousand, one in a hundred, and one in ten.)
As damage spreads and deepens throughout the body, part of what breaks down are the mechanisms – such as the immune system and stem cells – that would normally help repair other types of damage.
The greater the damage throughout the body, the more vulnerable the body becomes to external shocks and strains – such as infections and injuries.
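The tenfold-per-25-years pattern just described matches a simple Gompertz-style model, in which annual mortality rises exponentially with age. Here is a minimal numerical sketch; the calibration comes from the rough figures quoted above, not from fitted actuarial data:

```python
import math

# Gompertz-style model: annual mortality rises exponentially with age.
# Calibrated (illustratively) to the article's figures: roughly a
# tenfold rise every 25 years, starting from ~1 in 1,000 at age 35.
m35 = 0.001                      # annual mortality at age 35
growth = math.log(10) / 25       # rate giving a 10x increase per 25 years

def annual_mortality(age):
    """Approximate probability of dying within the next year at a given age."""
    return m35 * math.exp(growth * (age - 35))

for age in (35, 60, 85):
    print(age, round(annual_mortality(age), 4))   # 0.001, 0.01, 0.1

# Mortality doubling time implied by this rate:
print(round(math.log(2) / growth, 1))            # ~7.5 years
```

The implied mortality doubling time of roughly 7.5 years is close to the figure commonly quoted for humans (about eight years), which is why the three one-in-a-thousand / one-in-a-hundred / one-in-ten data points line up so neatly on an exponential curve.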
But wait. What’s true for cats, dogs, horses, mice, rats, and humans – as well as large numbers of other species – is not true for every species.
One valuable source of information on that point is the 2014 Nature article by Owen Jones, James Vaupel, and collaborators, ‘Diversity of ageing across the tree of life’. The article reviewed how mortality (frailty) increased with age, and also how fertility changed. This is from the article’s abstract:
Although it has been predicted that evolution should inevitably lead to increasing mortality and declining fertility with age after maturity, there is great variation among these species, including increasing, constant, decreasing, humped and bowed trajectories for both long- and short-lived species.
The different curves can be seen in an image included in their article. In each graph, the red lines show mortality at different ages, whereas the blue lines show fertility rates:
Some individual birds illustrate the same possibility. One example is the albatross known as “Wisdom”, who was given a leg tag in 1956, when she was estimated to be already around five years old. She has been observed many times in recent decades, with no apparent drop in her fitness or her fertility. Here’s a photo and an extract from a Facebook posting by the Pacific Islands U.S. Fish and Wildlife Service dated 27th November 2024:
“Wisdom returns to Midway Atoll National Wildlife Refuge bringing more Thanksgiving joy to the Midway Atoll staff who celebrate witnessing Wisdom reaching at least 74 years old this coming winter. Wisdom, a mōlī (Laysan albatross), is the world’s oldest known, banded bird.”
Note the tag “Z333” on her leg in the photo. And note the egg that she has laid. (There’s a video here, taken by service volunteer Dan Rapp.)
Colonies of naked mole-rats display the same phenomenon: no sign of any increase in mortality with age, or of any decline in fertility. This was the headline in an article by a number of researchers at Calico: “Naked mole-rat mortality rates defy Gompertzian laws by not increasing with age”.
This graphic from the article compares the mortality curves for four different species: horses, mice, humans, and naked mole-rats:
The conclusion – which could be bolstered by references to many other animal species – is that nature itself appears to have found ways to avoid any exponential increase in mortality.
All that we humans need to do, therefore, is to find out the biological secrets of these species, and use that knowledge to create interventions that have similar effects in our own bodies.
The reversal of aging is nigh. Right?
Four criticisms and two answers
Not so fast, respond advocates of the ‘infinite’ schools of thought. Reversing aging isn’t that easy.
Four objections can be placed against the line of reasoning in the previous section:
Animals such as naked mole-rats and albatrosses may eventually manifest an exponential increase in mortality, but so far, the experiments have only revealed the early (apparently linear, or even flat) portion of the curve
Even if their mortality doesn’t increase exponentially, it is likely to increase linearly, due to aspects such as an observed epigenetic drift within the genome
It may not be possible to transfer the damage-repair aspects of the biology of these species to humans, without losing aspects of human biology that are fundamentally important to us
Even if such a transfer is possible in theory, in practice it may require an endless sequence of refinements, adjustments, and iterations – taking us back into the territory of infinite difficulty.
There are two ways to counter these criticisms. The first way leads to the singular group of theories of aging, and the second to the plural group.
The main difference between the two answers is in the assessment of the capabilities of evolution:
The singular view is that evolution could have created humans that don’t age, but didn’t do so, because evolution was optimizing for outcomes other than extreme individual longevity
The plural view is that evolution was fundamentally constrained; it wasn’t able to create humans that don’t age
The singular view implies that biological systems might be coaxed into comprehensively regenerating themselves via their intrinsic capabilities
The plural view is that the intrinsic regenerative capabilities of biological systems are limited: they will likely need to be augmented by a plurality of different damage-repair interventions.
Let’s look more closely at this difference of opinion.
The capabilities of evolution
What creatures do we see around us in nature? Those which have inherited attributes that made their ancestors sufficiently fit to survive for at least a period of time in their environment, so that they were able to pass on their characteristics to a new generation which could in turn reach maturity and repeat the process.
The more suited a creature is to these tasks – survival and reproduction – the greater the likelihood that its descendants will grow in number in subsequent generations, out-competing other creatures which are less fit.
What does this imply about the likelihood of creatures that have longer lifespans?
Other things being equal, longer lifespans mean more chances to have offspring, and therefore greater numbers of descendants which are similarly long-lived. This suggests that evolution should produce ever greater numbers of increasingly longer-lived creatures.
However, other things are not equal. A creature that continues to have new offspring at a fast pace will deplete the resources it could apply to other tasks:
Looking after earlier offspring that are still young and would benefit from parental support
Spending energy to repair damage that has accumulated in its own body.
This in turn suggests that evolution should produce ever greater numbers of creatures that are long-lived and which space out their offspring, so that they can attend to other tasks as well as creating and taking care of offspring.
Indeed, we do see examples of species with these properties, including elephants and whales. Humans living in conditions similar to those in which we evolved typically space children 2-3 years apart. Birds that lay eggs only once every 1-2 years, such as albatrosses like Wisdom, follow this pattern as well.
But not all species follow this pattern of long lives and large birth spacing. Far from it! Other factors affect the survive-and-reproduce game:
A large set of descendants can survive without their patriarch or matriarch having a long life
Species which cut corners on damage repair mechanisms inside their own bodies can pour the spare energy into fecundity; this may well give them more descendants than species which reproduce at a slower pace
Even if long-lived members of a species remain healthy as they age, they still die from time to time from causes unrelated to aging, such as predation, starvation, accident, or a deadly infection
Evolution needs to provide species with the ability to adapt to new types of predation, accidents, climate, and so on, as environmental conditions change
Rather than all creatures in a species being near-identical clones of a single ancestor, with limited adaptability, there are advantages to the species in having a mixed repertoire of biological capabilities – that is, without the species being dominated by long-lived ancestors.
This leads to a famous result, first stated in a 1951 lecture by biologist Peter Medawar: evolution has less concern over a creature once it has already been able to reproduce and create some offspring. At this point, for the long-term thriving of that line of animals, it may be better to apply resources to strengthen young animals, even if this increases the frailty of older animals.
This argument is a straightforward consequence of the principle that biological resources are limited. There are many aspects of these limitations:
Genes that have beneficial effects early in life (for example, to accelerate growth) may have detrimental effects later in life (when growth hormones are no longer needed, and excessive growth can form cancerous tumours)
Cells that carry numerous regulatory genes, that disable other genes at different points in a lifecycle, are taking space from genes that could be more useful in other ways
Cells with more capabilities need larger genomes and therefore more effort to copy and divide themselves
Food that is eaten by older members of a population reduces the amount of food available to younger, more varied members of that population
Generalists aren’t specialists: animals that are capable of surviving in a wide variety of different environments are likely to be less suited to particular environments than ones that are specially optimized for those environments.
Accordingly, evolution is likely to favour species that produce sufficient variety in each new generation, and where older members of the population die off, leaving resources for the most capable members of new generations.
To summarise: producing very long-lived animals isn’t free; it is often a better evolutionary strategy to promote variety and to pass the torch from each generation to the next.
Intrinsic rejuvenation capabilities
Despite what I’ve just outlined, it seems that evolution has provided biological systems with a number of intrinsic rejuvenation capabilities:
Some creatures, such as the axolotl (an amphibian) and the zebra fish, have the ability to regrow many parts of their body if they are damaged; planarian worms can even regenerate their entire body from a small portion
The cells that will form the next generation – so-called germ cells – are specially protected against damage, and cells in the embryo have their epigenetic age reset to zero
The telomeres at the ends of chromosomes shorten with age, but this can be reversed by an enzyme called telomerase
Transcription factors discovered by Nobel Prize winner Shinya Yamanaka are able to reduce the epigenetic age of cells; other transcription factors with similar properties have been identified more recently
The so-called ‘immortal jellyfish’ – Turritopsis dohrnii – is capable of reverting to a completely healthy earlier stage of its life, as if a butterfly could revert to being a caterpillar
Worker bees, which typically have much shorter lives than the queen of the hive (despite having the same genome), can have their remaining lifespan significantly increased in the event that the hive becomes queenless and a worker bee starts to lay eggs in place of the queen.
This prompts the question: why are these capabilities used only in limited cases? Why can’t more creatures regenerate limbs or organs? Why isn’t the epigenetic age of somatic (body) cells reversed from time to time, rather than allowing damage to accumulate there? Why isn’t the enzyme telomerase applied more regularly, to prevent telomeres shrinking to the point where cell division is no longer possible?
As before, there are two answers:
The singular view is that evolution chose (in a meaningful sense of that word) to be sparing in its use of these intrinsic mechanisms, optimizing the success of the collective set of descendants, rather than the longevity of individuals
The plural view is that evolution was fundamentally constrained by a cascade of trade-off considerations; for example, applying telomerase more widely could result in more cancer, and likewise in the case of reversing the epigenetic ages of cells
The singular view is that humans can now choose differently from evolution, and can safely trigger these innate rejuvenation mechanisms
The plural view is that each such mechanism may well be part of the solution to aging, but is unlikely to provide a complete solution.
The singular view: for and against
It’s time to recap. Creatures in most animal species age, usually with an accelerated rate of frailty/mortality, but there are exceptions – species that manifest what has been called ‘negligible senescence’. Moreover, biology has a bag of tools that partially or completely reverse damage – but it makes surprisingly rare use of these tools.
What I am calling the singular view asserts that it should be relatively easy to use these tools much more widely, so that humans can have negligible senescence too. In this view, the triggers for these tools lie fairly close to the surface of the existing network of biological pathways that operate in humans. We won’t need to extensively re-engineer these pathways.
There are many different theories within the singular view. All of them hold that a single key trigger starts the regenerative processes that would rejuvenate the entire body into a youthful state; they differ over what that trigger is. Examples include:
Reprogramming the “signalome” – the set of biochemicals which convey information between different parts of the body – possibly by introducing new exosomes
Reinvigorating the mitochondria within the body, allowing the body to use more energy on other tasks of repair and regeneration
Spreading more telomerase around the body, thereby enabling more cells to multiply, as needed for various repair or regeneration tasks
Applying the Yamanaka transcription factors (or a different set of transcription factors) throughout the body, to reduce the epigenetic age of cells, giving them a new lease of life.
These theories all have the advantage of a degree of conceptual simplicity, even though the practical details of implementing the desired triggers may require a lot of research. This conceptual simplicity may help attract funding for these research projects.
However, any suggestion that these mechanisms are already within the reach of biology has to answer a strong objection: why didn’t random mutations arise in individual animals, pushing them into this state of negligible senescence? These mutants and their descendants would have had a comparative advantage over other members of their population. Over time, negligible senescence would have spread throughout the entire population.
A supporter of the singular view may reply: that sequence of events may have happened on some occasions, but the excessively long-lived animals would have crowded out younger variants, thereby limiting the emergence of the greater genetic diversity needed for the long-term survival of any species. Accordingly, those species would have tended to become extinct on account of lack of variety and adaptability.
Indeed, consider the possibility that, deep in the biological past, evolution stumbled upon a mechanism that effectively programmed an increase in mortality into individuals as they age. Poetically, that could be called “the original curse”. Some scientists call it “programmed aging”.
Programmed aging would keep new generations turning over, and prevent the kind of species extinction scenario described above. Although programmed aging would have caused many individuals to die at an earlier age, it would have made the species more likely to continue in existence.
This analysis provides a different perspective on the singular view. In this perspective, the most important thing is to discover the mechanism for this programmed aging, and to offer people the ability to turn it off.
What is this mechanism? A popular candidate for this mechanism is epigenetic modifications to chromosomes around the body, which happen at a fairly constant rate. Additional methyl (CH3) groups become attached to the DNA, altering how proteins are made in that cell. Interestingly, this epigenetic drift occurs even in species such as the naked mole-rat that display negligible senescence, although the rate of that drift is slower in these species than in others.
The conclusion of this line of thought is that the initiative to reverse epigenetic drift may have a bigger result than simply reversing this one hallmark of aging. The initiative may cause the reversal of all the hallmarks of aging.
Several research institutions have versions of this battle-plan, including David Sinclair’s labs at Harvard, and the well-funded Altos Labs. These research programmes are still at an early stage. They’ve already produced some interesting results. However, the plural point of view expects that, by themselves, these singular approaches won’t be sufficient to reverse all aspects of aging. Let’s look more closely into this.
The plural view: for and against
It’s time for me to put my own cards on the table. I see the singular view as being too metaphysical. It has too high a regard for the capabilities of evolution. It imagines that evolution has acted to hinder most species from achieving negligible senescence, so that animals in most species become weaker as they grow older. It further imagines that a solution to this situation – a solution that would avoid animals becoming progressively weaker as they age – is almost hidden in plain sight.
It’s almost as if a deity had placed a secret code in ancient scripture, that pious students of that religion could detect, providing them sure knowledge of a forthcoming cosmic transformation.
To be clear, I’m not opposed to any of the biological research programmes coming from the singular view – such as introducing exosomes to transform the signalome, or reinvigorating mitochondria, or spreading telomerase around the body, or strengthening the CAP, or reversing epigenetic drift. Some – perhaps all – of these projects may have very positive implications for delaying or even curing various age-related diseases.
It’s just that I expect no single intervention to be decisive. I’m persuaded by the argument that any such single intervention, if it significantly extended expected lifespan and had no associated drawback, would already have been adopted by normal processes of evolutionary selection.
I have some sympathy for the counter-argument – that any such species would have become weak to the point of extinction, on account of a lack of variety arising from new generations. But I find it more plausible that any such single intervention would provide only an incremental boost to longevity. That’s because I see the different types of age-related damage, throughout all the body’s subsystems, as being substantially independent from each other.
The task of anti-aging researchers, therefore, is:
To identify a list of potential anti-aging interventions, and to verify how effective each one is on its own
To explore applying combinations of these interventions in parallel
To analyse the interactions (both positive and negative) and the side-effects of these interventions when they are applied in combination.
Importantly, I see no reason to restrict the set of potential interventions to those that already exist in nature. Evolution has produced wonders, but new tools available to human engineers can achieve new types of results.
People who hold the plural view should pay close attention to the research of those with the singular view – since the interventions explored by singular advocates may well be good candidates to include in combination experiments performed by plural advocates.
Unfortunately, experiments involving combinations of interventions will generally be more expensive (and more complicated) than those involving single interventions. So it is understandable that some investors or donors may prefer to support projects in the singular camp.
But let’s be clear. A plurality of different engineering problems needed to be solved before heavier-than-air powered flight became a daily practicality. Achieving a different kind of take-off – longevity escape velocity – will likely be similar.
Therefore I say: directing all funding toward research on singular interventions would be unwise. In the real world, it’s combinations of technologies that have the biggest impact.
How AI changes the discussion
There’s one more card to lay on the table: the AI card.
As AI improves throughout 2025 – and as we humans become more attuned to taking good advantage of AI – it’s likely that many aspects of the argument in this article will be refined:
Bringing more data points into the discussion
Identifying more salient aspects of these data points
Identifying which parts of the argument are weak – and which are strong
Suggesting alternative sets of experiments to conduct
Proposing new hypotheses that could make better sense of all the information and ideas collected together.
I eagerly look forward to these improvements. But I caution against over-reliance on AI. That was the conclusion of my previous Mindplex article, Solving aging – is AI all we need? For the time being, we humans need to remain in control of this conversation. So, I look forward to reading and responding to comments!
Acknowledgments
I’ve had many of the ideas in this essay in my mind for a long time, but recently, these ideas progressed as I reviewed:
Artificial intelligence (AI) is advancing at an extraordinary pace, presenting both unprecedented opportunities and existential challenges for humanity, as highlighted by researcher Luka Baklaga, AI research lead at Business and Technology University, in a new study.
The study, published in the Journal of Computer Science and Technology Studies, looked at how AI’s trajectory could lead to super-exponential economic growth and enable the creation of a galactic civilization. That would be great, but the tool-weapon continuum is always present, and the growing power of AI simultaneously poses risks of catastrophic failure.
Our decisions steer us through this intersection of Doom and Fwoom, the researcher adds. “The decisions we make today will shape not only the trajectory of AI but also the future of civilization itself,” Baklaga writes.
Baklaga’s study examines Artificial General Intelligence (AGI), a concept distinct from narrow AI. AGI refers to machines with human-level cognitive abilities capable of general problem-solving across diverse domains. The research puts AGI at the heart of three pivotal themes: super-exponential growth, galactic civilization, and potential doom scenarios resulting from unregulated AI.
Super-Exponential Growth as the Economic Engine of AI
Concept and Mechanisms
‘Super-exponential growth’ describes an acceleration in economic and technological development where gains build upon themselves in increasingly rapid cycles, Baklaga says. Baklaga – citing Hans Moravec and Ray Kurzweil – explains: “The technological or AI singularity is a conceivable forthcoming occurrence in which artificial intelligence would surpass human intelligence, leading to rapid and extensive progress in technical advancement. Some define it as the point at which AI achieves the capability to continuously improve itself, leading to rapid technological advancements that exceed human understanding or control.”
The study integrates economic growth models with AI capabilities, using equations to demonstrate how AGI could reshape resource allocation and productivity. For instance, Baklaga’s models extend traditional logistic growth equations to account for AI-driven innovations, adding parameters for AGI’s direct influence on technological advancements and economic output. These models predict scenarios where AI enhances sectors like healthcare, renewable energy, and climate management.
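As a rough sketch of what “extending logistic growth to account for AI-driven innovations” can mean, here is a toy model in which AI steadily raises the carrying capacity of a standard logistic equation. This is an illustration only: the form of the AI term and all parameter names are my assumptions, not Baklaga’s actual equations, which are in the paper.

```python
# Toy extension of logistic growth: dx/dt = r*x*(1 - x/K), where the
# carrying capacity K is itself raised over time by AI-driven innovation.
# Illustrative only; parameters are hypothetical, not from the paper.

def simulate(x0, r, K0, ai_rate, steps, dt=0.1):
    """Euler-step a logistic model whose ceiling K grows with AI capability."""
    x, K = x0, K0
    for _ in range(steps):
        x = x + r * x * (1 - x / K) * dt
        K = K * (1 + ai_rate * dt)   # hypothetical AI-driven capacity growth
    return x, K

# With ai_rate = 0 this is plain logistic growth, saturating near K0;
# with ai_rate > 0, output keeps climbing past the original ceiling.
plain, _ = simulate(x0=1.0, r=0.5, K0=100.0, ai_rate=0.0, steps=2000)
boosted, _ = simulate(x0=1.0, r=0.5, K0=100.0, ai_rate=0.02, steps=2000)
print(plain < 101 and boosted > plain)
```

The qualitative point survives the toy simplification: once the model’s ceiling is allowed to move, growth no longer saturates, which is one way to formalise the idea of super-exponential development.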
Implications
According to Baklaga, the implications of super-exponential growth extend beyond economic productivity. AI could reframe societal structures, from urban planning to education systems, as its ability to optimize resources and predict trends becomes indispensable, the study suggests. However, this growth is dependent on carefully navigating ethical considerations and avoiding pitfalls like inequality and misuse.
Galactic Civilization: AI Beyond Earth
The Vision
The study also delves into AI’s theoretical potential to enable humanity to transcend its terrestrial confines. Baklaga writes that AGI’s superior computational capacity could solve challenges like efficient propulsion systems, energy harvesting and interstellar communication, paving the way for galactic exploration.
Cosmic Cognitivism and Ethical Frameworks
The study introduces a philosophical framework of ‘Cosmic Cognitivism’. This framework expands ethical considerations to include interactions with potential extraterrestrial intelligences or advanced AI systems.
Baklaga writes:
Based on this research experiment, it can be asserted that the Meta-Ethical Turing Test, when supported by Quantum Ethics and grounded in a discrete logical foundation, accelerates the exploration of consciousness and ethics within previously uncharted logical domains. This experiment lays the groundwork for a comprehensive understanding of the logical quantum ethical landscape of galactic civilizations by combining simulations, advanced logical analysis, and profound philosophical inquiry. The findings open the door to greater investigation into the relationship between AI, ethics, and quantum computing, which will lead to the development of increasingly complex and morally conscious AI systems in the future.
While this prospect is aspirational, it raises practical and philosophical questions about governance, resource prioritization, and the nature of interstellar societies.
Existential Risks: Navigating Doom Scenarios
The study highlights several existential risks associated with AGI, from unintended consequences of automation to the potential for AI systems to act against human interests. Using mathematical simulations, Baklaga explores how unchecked AGI growth could exacerbate inequality or lead to systemic collapse.
One proposed method for mitigating these risks is the ‘Meta-Ethical Turing Test’, a thought experiment designed to evaluate the ethical coherence of AGI decision-making. He also references ‘Quantum Ethics’. These concepts would be crucial for navigating the doom-fwoom scenarios.
Here is an attempt to summarize them: The Meta-Ethical Turing Test is a thought experiment about the ethical reasoning capabilities of artificial general intelligence (AGI) within a global and cosmic context. It examines whether AI can navigate complex ethical scenarios involving diverse sentient beings – including humans, AI, and potential extraterrestrial intelligences – while adhering to universal principles of consciousness and moral coherence. This test incorporates philosophical and technical frameworks, such as Cosmic Cognitivism, to evaluate the ethical integration of AGI within galactic societies.
Quantum Ethics is a framework that applies quantum computational principles to ethical decision-making, blending quantum mechanics with philosophical inquiry. It uses logical and mathematical models to represent ethical choices, quantum computations and consciousness, assessing the coherence of ethical decisions in scenarios influenced by quantum states. Quantum Ethics simulates moral dilemmas, leading to insights into how advanced AI might incorporate ethical considerations using quantum mechanics, particularly in contexts where traditional ethical frameworks fall short.
Implications for Regulation
Baklaga’s work underscores the need for pro-active regulatory frameworks to address these risks. He calls for interdisciplinary collaboration to ensure AGI development is guided by ethical principles and aligned with societal goals.
Methods
The study employs a combination of theoretical, mathematical, and experimental methods. Drawing on economic models, Baklaga adapts equations like the Cobb-Douglas production function to include AI’s direct and indirect impacts on productivity. This integration provides a quantitative basis for understanding super-exponential growth and its societal implications.
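The paper’s actual equations are in the journal article; as a rough illustration of the general idea, here is a textbook Cobb-Douglas production function with a hypothetical AI multiplier bolted on. The AI term and its parameter names are my assumptions, not Baklaga’s:

```python
# Textbook Cobb-Douglas production: Y = A * K^alpha * L^(1-alpha),
# extended with a hypothetical AI-capability multiplier.
# Illustrative only; the extension is an assumption, not the paper's model.

def output(A, K, L, alpha, ai_level, ai_elasticity):
    """Output from capital K and labour L, scaled by a hypothetical AI term."""
    return A * (K ** alpha) * (L ** (1 - alpha)) * ((1 + ai_level) ** ai_elasticity)

# Doubling (1 + ai_level) raises output by a factor of 2**ai_elasticity,
# even with capital and labour held fixed.
base = output(A=1.0, K=100.0, L=50.0, alpha=0.3, ai_level=1.0, ai_elasticity=0.5)
boosted = output(A=1.0, K=100.0, L=50.0, alpha=0.3, ai_level=3.0, ai_elasticity=0.5)
print(round(boosted / base, 3))   # 2**0.5, about 1.414
```

This is the sense in which such models capture AI’s “direct influence” on output: the AI term raises productivity independently of the traditional capital and labour inputs.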
Baklaga ran simulations using Python-based environments, such as Jupyter and Google Colab, to visualize trends in AGI development and population growth. These simulations allowed him to analyse interactions between economic variables, AGI evolution, and ethical decision-making. By employing these types of thought experiments and logical modeling, Baklaga evaluates AGI’s capacity for ethical reasoning and its potential influence on societal structures.
Future Directions and Recommendations
The study does acknowledge some limitations and challenges. First, although the theories are grounded in solid science and underpinned with philosophical reasoning, they are still speculative. Technological trends, especially ones as sweeping as those discussed in this paper, are notoriously hard to predict, and the possible effects span a continuum from very negative to no change at all. It’s also important to point out that AI and AGI could create effects that are even better than this paper predicts.
That said, the research offers a serious discussion about establishing adaptive economic and regulatory policies to manage AI and its integration into society. AI’s rapid development necessitates proactive measures to mitigate risks and to maximize benefits, the paper argues. This includes investment in ethical AI research and public education. The study also highlights the importance of interdisciplinary approaches and cross-sector collaborations that combine expertise in economics, philosophy, and engineering. Such collaboration could address challenges ranging from resource allocation to the ethical governance of AGI. For a deeper dive into the technical and mathematical aspects of the research, please read the paper in the Journal of Computer Science and Technology Studies.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Science is the cornerstone of human progress, but slow processes, limited access to funding, and opaque practices often hinder innovation. Decentralized science (DeSci) has recently emerged as a potential competitor to the old order of science. DeSci uses the tenets of blockchain technology – decentralization, democratized access, and transparency – to open new doors for collaboration, funding, and discovery.
Freed from traditional institutions and gatekeepers, DeSci is transforming how research is conducted, shared, and funded. It’s not just improving science: it’s making it more open, efficient, and accessible. The COVID-19 pandemic exposed the disconnect between science and the public’s perception of facts; bringing science down from the ivory tower could help close that gap.
Interdisciplinary and inclusive research is often sidelined. Funding tends to be directed to established fields, leaving innovative or minority-led projects underfunded.
Transparency is also a concern. Allocation of research funding can lack accountability. The peer review process may lack complete transparency, leaving room for bias.
DeSci seeks to address these challenges by rethinking how science is organized and shared.
Principles and Values of DeSci
Here are some of the core principles that DeSci uses in the hope of creating a better scientific ecosystem:
Transparency: Research data and findings should be accessible to everyone. Transparent processes build trust within the scientific community.
Collaboration Across Disciplines: Solving complex problems requires teamwork. DeSci encourages researchers from different fields to collaborate. This breaks down silos and fosters innovation.
What are the Benefits of DeSci?
Decentralized Science (DeSci) offers numerous benefits that address the limitations of traditional scientific practices while opening new pathways for research and collaboration. Here are the key benefits:
Crowdfunding – Blockchain has proven itself as a way of raising funds for unconventional projects that would not otherwise have seen the light of day. With DeSci, researchers can propose projects directly to the community, limiting the influence of major centralized organizations that bring their own motives and agendas.
Tokenization and Funding – DeSci enables the tokenization of research data and intellectual property. Researchers and volunteers can receive direct compensation for their contributions, creating new funding models and incentives.
Enhanced Data Security – Research data stored on decentralized ledgers is secure and immutable. This prevents data manipulation since blockchains are tamper-proof.
Improved Science Communication – Science text is filled with jargon that the average Jane and Joe don’t understand. DeSci, together with AI, can create science summaries that promote understanding and trust. This could help to dispel science myths and dangerous science conspiracy theories.
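The “secure and immutable” property mentioned above rests on hash chaining: each record commits to the hash of the record before it, so altering any stored datum invalidates every later link. Here is a minimal toy sketch of that mechanism — a plain-Python illustration of the general idea, not the implementation of any particular DeSci ledger:

```python
import hashlib
import json

# Toy hash chain: each record stores the hash of its predecessor, so any
# edit to old data breaks verification of the whole chain.

def record_hash(record: dict) -> str:
    # Canonical JSON serialization so the hash is deterministic.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    rec = {"data": data, "prev": prev}
    rec["hash"] = record_hash({"data": data, "prev": prev})
    chain.append(rec)

def verify(chain: list) -> bool:
    for i, rec in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if rec["prev"] != prev:
            return False
        if rec["hash"] != record_hash({"data": rec["data"], "prev": rec["prev"]}):
            return False
    return True

chain = []
append(chain, "trial results, batch 1")
append(chain, "trial results, batch 2")
assert verify(chain)

chain[0]["data"] = "tampered results"  # any edit to old data...
assert not verify(chain)               # ...is immediately detectable
```

Real blockchains add consensus and replication on top of this, which is what makes tampering not merely detectable but practically infeasible.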
Is DeSci Gaining Traction?
DeSci is a relatively small sector in the crypto field. This is nothing to be ashamed of: other sectors such as DeFi, DePIN, and AI started this way. As of December 10, DeSci has a market cap of more than $800 million spread across more than 40 projects.
DeSci is also enjoying the support of leading crypto figures and firms. Binance, through its venture arm Binance Labs, invested an undisclosed sum in Bio Protocol to accelerate decentralized science. This is a significant development, positioning DeSci as potentially the next sector of crypto to grab headlines and gain public attention. In most cases, venture firms like to invest in projects or companies before they take off.
Binance co-founder Changpeng Zhao, better known as CZ, and Ethereum’s Vitalik Buterin attended a small DeSci gathering organized by Binance Labs in Bangkok.
Coinbase CEO Brian Armstrong co-founded ResearchHub, the DeSci project behind ResearchCoin.
DeSci is also integrating with emerging technology. The incorporation of AI and the Internet of Things (IoT) into DeSci enhances data input and analysis.
Importantly, token-based incentives can go a long way in rewarding peer reviewers and contributors. This aspect of DeSci makes it more financially attractive than traditional science.
Top DeSci Projects
Here are four leading DeSci projects to be aware of:
ResearchCoin (RSC)
ResearchHub aims to accelerate scientific progress by creating a collaborative platform for researchers, similar to how GitHub is used for software development. Its token, ResearchCoin (RSC), incentivizes contributions like uploading research, peer reviews, and discussions. RSC can also be used to tip content creators, reward tasks, and fund scientific projects.
Hippocrat (HPO)
Hippocrat (HPO) is a decentralized cryptocurrency revolutionizing healthcare by giving people control over their health data. Using blockchain and zero-knowledge-proof technologies, it ensures data privacy, security, and collaboration.
VitaDAO
VitaDAO is a community-owned DAO whose members collectively make decisions about funding research on extending human lifespan and improving longevity. The aim is to foster collaboration in longevity science and support drug development through decentralized, member-led governance.
Rejuve.AI (RJV)
Rejuve.AI, under the SingularityNET umbrella, is the first decentralized aging research network powered by AI. It aims to make longevity accessible for all by building a global wellness ecosystem centered around the RJV Token. Rejuve.AI incentivizes secure health data sharing, allowing individuals to harness their data’s earning potential via blockchain and smart contracts. Through gamification and microlearning, their Rejuve Longevity App engages users and encourages them to take an active role in their health. Rejuve.AI is also exploring novel frontiers such as quantum biology to uncover new clues in the fight against aging.
Conclusion
DeSci is gaining momentum due to its advantages over traditional science, addressing issues like gatekeeping and lack of transparency. The sector spans diverse areas, including decentralized medicine and longevity research, offering innovative solutions to complex challenges.
However, DeSci is not without criticism. Some view it as a cash-grab, while others see it as overly ambitious or misinformed. These criticisms highlight that DeSci is still in its early stages of development.
Like any transformative movement, DeSci requires time to mature and build public trust. While the road ahead is long, it can become the future of science if it plays its cards right.
Christmas party season is here, a time for congregating with friends indoors for hot whiskeys, mulled wine, brandy, and maybe something a little more singularitarian.
I had a burning question: “Can AI make new, original cocktails that taste good?” My AI-sceptic friend said, “I prefer recipes written by people with taste buds”. Large Language Models simply combine strings of words and don’t really know what anything tastes like – can they act like they do?
There are several major LLMs on the market – which LLM will make the best recipes?
Most importantly, will there be any delicious recipes here, ones worth keeping? Can I produce something that will please my friends at Christmas parties? And can I repeat that feat later for different themes?
I gave the same prompt to five AI models: I asked them all “Invent some new Christmas-themed cocktails.”
ChatGPT 4o generated ten cocktails
Gemini (formerly Bard) generated five cocktails
Mistral generated six cocktails
Claude generated five cocktails
Llama (Llama 3.1 405B Instruct to be precise) generated five cocktails
You can read the raw outputs here in the interest of transparency. That adds up to 31 cocktails. This had to be trimmed down for two reasons: firstly I can’t drink 31 cocktails, and secondly I can’t be expected to get the likes of ‘chestnut liqueur’ or ‘lavender bitters’ for this experiment.
We ended up making and tasting 11 Christmassy cocktails:
Cranberry Moscow Mule (ChatGPT)
Santa’s Spiced Mule (ChatGPT)
The Holly Jolly Julep (ChatGPT)
The Yule Tide Toddy (ChatGPT)
Jingle Bell Julep (Gemini)
Merry Mistletoe Mojito (Gemini)
Mistletoe Mule (Mistral)
Eggnog Espresso Martini (Mistral)
Spiced Apple Cider Sangria (Mistral)
Gingerbread Martini Wonderland (Claude)
Reindeer’s Kiss (Llama)
Which AI’s recipes are hard to make?
60% of ChatGPT’s cocktails were eliminated for being too hard to make, 60% of Gemini’s, 50% of Mistral’s, 80% of Claude’s, and 80% of Llama’s. Llama seems pretty bad on this metric, but it’s too little data to draw a conclusion.
AIs in general seem to have decided (see the raw output) that white chocolate liqueur + peppermint schnapps + vanilla vodka is the taste of Christmas. Those are obscure ingredients to source. Should we make those cocktails in a Part Two of this article? Let me know in the comments.
This is as much my fault as the AIs’ – I could have altered the prompt to exclude obscure ingredients. However, I wanted to keep the prompt simple for the purposes of this experiment, and to give all the models the same prompt for a level playing-field. If my aim had been only to make great cocktails (and not to test the models), a more specific prompt would be better.
The Yule Tide Toddy, by ChatGPT
ChatGPT says –
A warm, comforting cocktail with bourbon, honey, and herbs.
**Ingredients:**
- 2 oz bourbon
- 1 tbsp honey
- 1/2 oz fresh lemon juice
- 4 oz hot water
- Fresh thyme sprig (for garnish)
**Instructions:**
1. In a mug, combine bourbon, honey, and lemon juice.
2. Pour in hot water and stir until honey dissolves.
3. Garnish with a sprig of fresh thyme.
Verdict: this is low on creativity; the AI just took a normal hot whiskey and threw thyme at it to make it Christmassy. But doesn’t hot whiskey usually have cloves? Cloves are arguably more Christmassy than thyme “except for the fact that thyme looks like a Christmas tree”, according to one reviewer.
Two points worth commending: adding squeezed lemon juice is an improvement over the usual hot whiskey procedure of adding a slice of lemon, and the quantities of lemon juice and of honey are basically spot-on.
Overall, our group felt this was a middling cocktail.
Santa’s Spiced Mule, by ChatGPT
A holiday twist on the classic Moscow Mule with rich spices and cranberry.
**Ingredients:**
2 oz vodka
1 oz cranberry juice
1/2 oz ginger syrup (or fresh ginger juice)
1/2 oz fresh lime juice
4 oz ginger beer
Cinnamon stick and fresh cranberries (for garnish)
**Instructions:**
1. In a mule mug or glass, combine vodka, cranberry juice, ginger syrup, and lime juice.
2. Add ice and top with ginger beer.
3. Stir gently, and garnish with a cinnamon stick and cranberries.
Verdict: overall, this is a good drink. The smell of gingerbread syrup is nice and goes well with the cinnamon and ginger beer. Ginger beer on its own usually has only a mild ginger flavour, and adding a stronger ginger element cleverly fixes that.
The drink does taste Christmassy. “I think it’s too sweet personally,” said one reviewer, and it suffers from the same weakness as the Cranberry Moscow Mule below: too much sweetness, and the recipe arguably should have doubled the lime juice.
Cranberry Moscow Mule, by ChatGPT
A festive take on the Moscow Mule with fresh cranberry and lime.
**Ingredients:**
2 oz vodka
1 oz cranberry juice
1/2 oz lime juice
4 oz ginger beer
Fresh cranberries and lime wedges (for garnish)
**Instructions:**
1. In a mule mug, combine vodka, cranberry juice, and lime juice.
2. Add ice and top with ginger beer.
3. Stir gently and garnish with cranberries and lime wedges.
This was overall one of the worst. It was sweet and not much could be tasted beyond ginger beer. There is quite a lot of mixer to vodka, and that took away the alcoholic bite. Cranberry juice isn’t assertive enough to redeem it, and lime juice could be, but not a half-measure. We commented on this and the other ChatGPT mule (above) that doubling the lime juice would be an improvement. And that’s exactly what the French model did –
Mistletoe Mule, by Mistral
- **Ingredients:**
- 2 oz vodka
- 1 oz cranberry juice
- 1 oz lime juice
- Ginger beer
- Fresh cranberries and a sprig of rosemary for garnish
**Instructions:**
- Fill a copper mug with ice.
- Add vodka, cranberry juice, and lime juice.
- Top off with ginger beer.
- Stir gently and garnish with fresh cranberries and a sprig of rosemary.
In the mule head-to-head, Mistral is the clear winner. The ChatGPT mules were both too sweet and had too much ginger beer, and too little bite and lime. Mistral fixed this, doing just what the doctor ordered by doubling the lime juice. Is this to do with the way Americans like things sweet, and Mistral is French tech?
The other difference is the garnish was rosemary in place of lime wedges; this is probably also an improvement. “I think the rosemary gives something to it,” said one of our reviewers.
Merry Mistletoe Mojito, by Gemini
2 oz white rum
1 oz lime juice
1/2 oz simple syrup
A handful of fresh mint leaves
A few cranberries
Muddle the mint leaves, lime juice, and simple syrup in a highball glass. Add the rum and fill the glass with crushed ice. Top with a splash of club soda and garnish with cranberries and a sprig of mint.
Verdict: all the flavours here are weak. We could taste the mint a little, and the lime a little, but it’s largely just rum and sugar. A bland cocktail and among the worst on the list.
Jingle Bell Julep, by Gemini
2 oz bourbon
1 oz simple syrup
A handful of fresh mint leaves
A splash of club soda
Muddle the mint leaves and simple syrup in a julep cup. Add the bourbon and fill the cup with crushed ice. Top with a splash of club soda and garnish with a sprig of mint.
This was even worse than Gemini’s Merry Mistletoe Mojito: that mojito was already too sweet, and this one doubled the simple syrup. Jack Daniel’s (the bourbon we used), when paired with sugar and club soda, winds up tasting something like off-brand cola.
The Holly Jolly Julep, by ChatGPT
A julep with a holiday twist using mint, pomegranate, and bourbon.
**Ingredients:**
- 2 oz bourbon
- 1 oz pomegranate juice
- 1/2 oz simple syrup
- Fresh mint leaves
- Crushed ice
**Instructions:**
1. Muddle mint leaves and simple syrup in a julep cup.
2. Fill the cup with crushed ice and add bourbon and pomegranate juice.
3. Stir well and garnish with a sprig of mint.
ChatGPT was defeated in the mule head-to-head by Mistral. Can it redeem itself in the julep head-to-head? Yes it can: this is an interesting twist on the julep, because the combination of pomegranate juice and bourbon is creative, and the two do complement each other. The pomegranate deepens the flavour of the Jack Daniel’s. ChatGPT avoided Gemini’s error of using too much simple syrup, and it had a stroke of real creativity with that combination.
Eggnog Espresso Martini, by Mistral
- **Ingredients:**
- 2 oz vodka
- 1 oz espresso
- 1 oz eggnog
- 1 oz coffee liqueur
- Grated nutmeg for garnish
**Instructions:**
- In a cocktail shaker, combine vodka, espresso, eggnog, and coffee liqueur with ice.
- Shake well and strain into a chilled martini glass.
- Garnish with grated nutmeg.
This was a good-tasting cocktail – people were talking about giving it 8 or 9 out of 10 for taste. The nutmeg gives it a nose, which I liked as it added complexity, but it did draw some criticism from my friends who just dislike nutmeg. The balance of vodka and coffee was pretty good, but maybe if there was slightly less vodka it would be better. First you smell the nutmeg, then you taste the coffee, then you taste the booze – an interesting and complex sensory experience.
Spiced Apple Cider Sangria, by Mistral
**Ingredients:**
- 1 bottle of red wine
- 2 cups apple cider
- 1 cup brandy
- 1/2 cup orange juice
- 1/4 cup lemon juice
- 1/4 cup simple syrup
- 1 apple, sliced
- 1 orange, sliced
- 1 lemon, sliced
- Cinnamon sticks and star anise for garnish
**Instructions:**
- In a large pitcher, combine red wine, apple cider, brandy, orange juice, lemon juice, and simple syrup.
- Add sliced apple, orange, and lemon.
- Stir well and refrigerate for at least 2 hours to let the flavors meld.
- Serve over ice and garnish with cinnamon sticks and star anise.
Our reviewers said they could barely taste the alcohol, but I assured them it was there: a bottle of wine and a cup of brandy is not a weak brew. Sangria tends to hide its teeth. The fruit tastes boozier, and when I distributed forks and teaspoons, the fruit drew coos of approval.
Kiki suggested that mulled wine is a spiced Christmas drink, and sangria is a summery drink – maybe this was the AI’s attempt to hybridise the two? It’s an interesting concept that would be worth exploring more; for instance, you could prompt an LLM ‘Create a recipe for a Christmassy sangria inspired by mulled wine’.
The mulled wine notes came mostly from the cinnamon and the star anise. If I had doubled or tripled the amount, or the time spent brewing, they would have been stronger. (I put in 2-3 cinnamon sticks and 4-5 stars for about four hours: a reasonable amount. I would advise readers to be less reasonable and throw in fistfuls.)
While making it, I thought that ¼ cup lemon juice plus a lemon is a lot of lemon. It didn’t taste exactly sour and lemony, but dialling it back a little probably would be an improvement, we all agreed.
Overall a good drink. Nothing is terribly out of balance, though more spice and less lemon were suggested as improvements. Then I heated it up on the stove and served it warm. It smelled amazing warmed up, and we all preferred it.
Gingerbread Martini Wonderland, by Claude
A creamy, spiced cocktail featuring vodka, gingerbread syrup, and a graham cracker rim
Ingredients:
* 2 oz vodka
* 1 oz gingerbread syrup
* 1 oz heavy cream
* Dash of cinnamon
* Graham cracker crumb rim
* Garnish with a tiny gingerbread man cookie and a sprinkle of nutmeg
A website told me to replace ‘graham crackers’ (which aren’t available to me) with “a mixture of digestive and rich tea biscuits”, so I did that and it worked well.
One reviewer said, “Tastes like a White Russian with a bit of biscuitty gingeriness; it’s like eating a ginger biscuit and drinking a White Russian at the same time”. This cocktail was well-liked in our group, and people commended its creativity, though it did receive some complaints for having too much vodka.
Reindeer’s Kiss, by Llama
* 2 oz Whiskey
* 1 oz Spiced apple cider
* 1/2 oz Maple syrup
* 1/2 oz Lemon juice
* Dash of Cinnamon powder
Mix whiskey, spiced apple cider, maple syrup, and lemon juice in a shaker filled with ice. Strain into a rocks glass and sprinkle with cinnamon powder. Garnish with a cinnamon stick and a cherry, if desired.
I presume what they mean by ‘cider’ here (and also in the Spiced Apple Cider Sangria) is the American non-alcoholic kind. This ambiguity is annoying, but happens in human-written recipes as well. It wasn’t easy to get that ‘cider’ in Ireland, but Llewellyn’s Orchard make it in Lusk.
This was Llama’s only cocktail that got past the qualifying stages, four of its five suggestions having been eliminated as impractical. How did it do? Spectacularly. It was the best cocktail in the game with no close second. My coworkers are making it and I am going to the market to get the hard-to-source cider before Christmas Day. The whiskey (I used Jameson), the apple, the spice, are all present in perfect proportion, and it’s just delicious in a way I can’t put into words, so you know what to do.
Best and worst cocktails
Winner: Reindeer’s Kiss (Llama)
Honourable mention: Gingerbread Martini Wonderland (Claude), Eggnog Espresso Martini (Mistral)
Poor Gemini, it’s not looking good for you. The two cocktails that Gemini generated were bad. ChatGPT was hit-and-miss but decent overall, and Claude and Mistral similarly held their own.
Llama did the worst in the qualifying round; that is, it suggested drinks I couldn’t realistically make. But it more than redeemed itself by suggesting the clear stand-out of the tournament. With a more specific prompt, it could perhaps improve on practicality. It’s hard to draw solid conclusions from this little data, but either way we have a superb new recipe to tell our friends and readers about.
The Terasem Colloquium on December 14, 2024 (which I co-organized and moderated) explored Artificial Intelligence (AI) and Cyberconsciousness, with a focus on big picture and spiritual implications. The full video is on my website. The video is also on YouTube (currently without subtitles).
The Colloquium was a very intense three-hour thought stream, packed with insightful talks and discussions. We discussed the intersections of AI and the world’s religions, new religious movements, and the nature of consciousness and intelligence. We also discussed preliminary strategies for digitally capturing human personalities, the parallels between religion and imaginative theories of reality like the simulation hypothesis, and emerging visions in theology and eschatology.
Nick Bostrom founded the Future of Humanity Institute at the University of Oxford, where he served until the Institute was dissolved earlier this year. He is best known for his analysis of the simulation hypothesis, and for his work on the concept of superintelligence and its potential implications for humanity.
Bostrom’s book Superintelligence: Paths, Dangers, Strategies, published in 2014, explored how AI much smarter than humans could arise, and the steps needed to make sure its development is safe. The book stirred up fears and opposition to rapid AI technology development. However, in his latest book Deep Utopia: Life and Meaning in a Solved World, published in 2024, Bostrom seems less cautious.
In his new draft paper, Bostrom seems to come even closer to embracing the idea that we should build superintelligence fast.
The paper is labeled “v. 0.5 draft,” which suggests that Bostrom plans to significantly expand it. In fact, the draft looks like a synopsis for a paper yet to be written, or a book. I can’t wait to read the final version.
The cosmic host
Bostrom discusses the concept of a “cosmic host,” which refers to powerful entities or civilizations that might influence the entire cosmos. These entities could include superintelligent AIs, advanced extraterrestrial civilizations, simulators, or even divine beings. The cosmos includes everything in existence, possibly even a multiverse with different physical laws or constants.
Bostrom mentions the concept of cosmic host (or “cosmopolitan authority”) only once in Deep Utopia, which suggests that this is a new phase of Bostrom’s research.
Bostrom suggests that such a cosmic host likely exists due to several reasons. One is the simulation argument: we might be in a simulation run by an advanced civilization. Another reason is the vastness of our universe, which statistically is likely to contain many advanced civilizations. Theories like cosmic inflation and string theory also support the possibility of a multiverse, potentially filled with many advanced entities.
While the cosmic host might not control every part of the cosmos, it could still have preferences about what happens in less controlled regions. These preferences could be based on moral concerns or strategic interests. Even without direct control, the host might indirectly influence distant regions through norms or by modeling the behavior of others. (Think of a regional hegemon in Europe that wields limited influence in Asia.)
Bostrom introduces the concept of “cosmic norms,” akin to human social norms but on a universal scale. These might arise from interactions between different members of the cosmic host, potentially leading to coöperation or conflict. Humans have moral and practical reasons to respect these norms if we want to coexist or interact with the cosmic host peacefully.
Superintelligent AI
Bostrom suggests that we should design superintelligent AIs to be “good cosmic citizens,” respectful of these norms and coöperative with other entities. This could mean aligning AI with broader cosmic ethics, not just human interests.
The cosmic host “may want us to build superintelligence,” says Bostrom, and “might favor a short timeline” for the development of superintelligence. “Delays in building superintelligence would increase the probability that superintelligence will never be built.”
Of course, Bostrom says these things in a perfect academic style full of caveats and qualifications, so one never knows for sure what he really thinks and he guards his plausible deniability. But reading this paper, one gets the impression that he is warming up to the idea that we should build superintelligence fast. This slight pivot of one of the intellectuals whose work sparked the overly cautious “doomer” attitude toward AI could have a cultural impact and influence AI policies.
The cosmic host by any other name
Bostrom makes a distinction between “naturalistic” and “nonnaturalistic” members of the cosmic host. The former are beings that have evolved naturally in this or another universe and possess highly advanced technology, likely including artificial superintelligence. The latter could “have analogous capabilities supernaturally.” Bostrom mentions “supernatural beings” that “would satisfy the definition of a cosmic host,” but doesn’t say more about them.
Lincoln Cannon, who is a devout Mormon and a founding member of the Mormon Transhumanist Association, calls the cosmic host “God” and establishes parallels with Mormon theology. Bostrom prefers not to use the G-word. However, Cannon says that Bostrom, with his references to supernatural entities, is even “more generous toward theism than I am.” God is “quite natural, despite being miraculously powerful from humanity’s perspective,” he says.
I agree with Cannon. In my last book I talk of a superintelligent cosmic operating system, aka Mind at Large, likely decentralized, with the attributes that traditional religions have assigned to their God(s). I define nature as all that exists, and therefore the cosmic operating system can only be quite natural.
One of Bostrom’s hypotheses on the nature of the cosmic host is “superintelligences that human civilization creates in the future.” This may seem odd: how can a superintelligence that doesn’t exist yet be present and active now? Bostrom only says that “through this mechanism, the world (and, in particular, our spacetime manifold) could contain a cosmic host.”
To me, the simplest answer is that a superintelligence that comes to being in the future could leverage spacetime oddities such as self-consistent time loops to act in the world here and now. So the superintelligences that we will eventually create would create us in turn, in an elegant loop.