After more than four years of legal battles, the Securities and Exchange Commission (SEC) has officially dropped its appeal against Ripple Labs and its controversial cryptocurrency XRP. This landmark decision ends one of the most watched legal fights in crypto history and signals a dramatic shift in the regulatory landscape.
XRP prices jumped 11% on the announcement, reflecting market relief at the conclusion of this drawn-out saga. While XRP has struggled over the years to win over crypto investors beyond its fiercely loyal “XRP Army” – owing to claims of excessive centralization and of its founders dumping coins on holders – there was broad agreement that Ripple’s fate would set an important precedent for the entire industry and for rival chains like Solana, Cardano, and Ethereum.
Ripple certainly expected a favorable outcome, filing a trademark application for crypto custody and wallet services at the end of February.
“It’s been almost four years and about three months since the SEC originally sued us, certainly a painful journey in lots of ways,” said Ripple CEO Brad Garlinghouse. “I really deeply believed that we were going to be on the right side of the law and on the right side of history.”
Ripple Chief Legal Officer Stuart Alderoty celebrated the news on social media, stating “Ripple is now in the driver’s seat and we’ll evaluate how best to pursue our cross appeal. Regardless, today is a day to celebrate.”
The SEC has declined to comment on the decision.
Why This Victory Matters for Crypto
Coming shortly after Coinbase scored a similar victory against the federal securities regulator, Ripple’s win fundamentally alters the regulatory framework for cryptocurrencies in the United States by:
Establishing that XRP tokens sold on public exchanges do not qualify as securities
Creating precedent that other crypto projects can reference
Signaling the SEC’s retreat from aggressive enforcement under the Trump administration
“The bet you’re making on XRP, it’s a B2B-oriented chain, and they were simply unable to execute on that strategy with the sort of the SEC on top of them.”
Unlike more crypto-native projects that could continue development despite regulatory uncertainty, XRP’s core business model was effectively frozen during the litigation.
Market Response
The market’s reaction has been swift and decisive:
XRP’s price surged more than 10% to above $2.58 following the announcement
The token has gained more than 385% since the presidential election
XRP is up 18% year-to-date, outperforming many other major cryptocurrencies
These gains reflect not just relief over the case’s conclusion but optimism about Ripple’s ability to pursue its business strategy without regulatory interference, with traders rushing to buy XRP in anticipation of renewed business growth across international markets.
New Era for US Crypto Regulation Continues
This decision doesn’t stand alone. Under President Trump’s administration, the SEC has rapidly reversed course on crypto enforcement:
Ended its case against Coinbase
Closed investigations into Robinhood’s crypto unit, Uniswap, Gemini, and Consensys
Scaled back its dedicated crypto enforcement unit
Declared meme coins are not securities
Launched a new crypto task force focused on defining the security status of digital assets
Paul Atkins, Trump’s nominee for SEC chair, is widely seen as supportive of the crypto industry – a stark contrast to former chair Gary Gensler, who pursued aggressive enforcement actions against crypto companies.
“The entire space, you know, in my view, should be revalued and reconsidered in this new regulatory era. And I’m quite frankly surprised it hasn’t happened yet in the market. I think it will happen,” said Hougan.
What This Means for Investors
For crypto investors, the implications are substantial:
Regulatory clarity: The decision provides a clearer framework for determining which crypto assets might be classified as securities
Reduced risk: Major exchanges that delisted XRP can now reconsider listing the token without fear of SEC action.
Business expansion: Ripple can fully execute its cross-border payments strategy, potentially increasing XRP utility and value.
Broader market effects: Other tokens facing similar questions may benefit from the precedent.
Garlinghouse is already looking ahead: “Ripple has invested over $2 billion in investments and acquisitions across the crypto landscape, and some of those have nothing to do with XRP because if crypto does well, I fundamentally believe Ripple will do well.”
Future Ramifications
The long-term impact of this decision extends beyond Ripple and XRP:
For exchanges: U.S. crypto exchanges that delisted XRP can now reconsider without regulatory concerns.
For startups: New crypto projects have clearer guidance on token classification and how to avoid securities violations.
For international competitiveness: The U.S. may regain ground lost to more crypto-friendly jurisdictions during the previous regulatory regime.
For institutional adoption: Clearer rules may encourage more traditional financial institutions to engage with cryptocurrency.
This decision doesn’t mean all regulatory concerns are resolved. Ripple still faces its own appeal of Judge Torres’ ruling that institutional sales of XRP violated securities laws. However, the regulatory landscape has undeniably shifted in a direction favorable to the industry.
Credit: Tesfu Assefa
Looking Forward
With regulatory headwinds subsiding, Ripple can focus on its core business of facilitating cross-border payments. The company’s technology aims to make international transfers faster and cheaper than traditional banking systems, a market worth trillions annually.
Industry watchers will be monitoring several developments:
How quickly Ripple can expand its payment corridors and partnerships
Whether major U.S. exchanges will relist XRP
How the precedent affects other ongoing or potential SEC cases
The impact on XRP adoption and utility in real-world financial systems
As Garlinghouse noted, this marks “the beginning of a new chapter” for both Ripple and the broader crypto industry. The company now has the opportunity to prove its business model in the market without regulatory interference.
For an industry that has been fighting for legitimacy and clear rules, the SEC’s retreat represents a significant victory and potentially the start of a more constructive relationship between crypto innovators and U.S. regulators.
SEC vs. Ripple: A Brief History
The legal battle between the SEC and Ripple began on December 21, 2020, when the regulator filed a lawsuit alleging Ripple and executives Brad Garlinghouse and Chris Larsen conducted an unregistered securities offering by selling XRP. This bombshell announcement caused XRP’s price to plummet and led to mass delistings from major U.S. exchanges.
The case quickly centered on a critical question: is XRP a security like stocks or a currency/commodity like Bitcoin? The SEC argued XRP sales constituted investment contracts under the Howey Test, while Ripple maintained XRP was a currency used for international payments.
A pivotal moment came with the “Hinman documents” – internal SEC communications regarding a 2018 speech by then-Director William Hinman declaring Ethereum not a security. Ripple fought to access these documents, believing they would reveal inconsistent SEC positions on cryptocurrency classification.
In July 2023, Judge Analisa Torres delivered a split decision: XRP sales on public exchanges did not constitute securities transactions, but Ripple’s direct sales to institutional investors violated securities laws. This partial victory for Ripple sent XRP prices soaring and established an important distinction between different types of crypto sales.
In August 2024, Judge Torres fined Ripple $125 million, and the SEC appealed the ruling. The regulatory environment shifted dramatically after the 2024 election, with the Trump administration taking a pro-crypto stance. In March 2025, the SEC finally surrendered, dropping its appeal and effectively acknowledging XRP’s non-security status when sold on public exchanges. This resolution, after more than four years of litigation, marks a watershed moment for cryptocurrency regulation in the United States.
The transcendent questions that will determine the fate of humanity
The Singularity is nigh
There’s a strong probability that, within a few decades at most, the fate of Earth will be out of human hands.
As the rationality and agency of AIs approach and then dramatically exceed the capabilities of their human progenitors, it will no longer be we humans who are calling the shots.
This striking possibility—this singularity in the long history of human civilization—this decisive disruption in our role as the smartest beings on the planet—should command the rapt attention of everyone who cares about the future.
In some scenarios, the singularity will herald unprecedented flourishing for humanity, with options for humanity to rapidly improve along multiple dimensions of wellbeing. This kind of singularity will feature AIs that aren’t just generally intelligent (AGIs) but that are beneficial general intelligences (BGIs). We can envision humanity being cradled in the palms of a BGI.
But in other scenarios, humans will be dramatically diminished either at the singularity itself or shortly afterward, or even in an utterly chaotic approach to that seismic transition. That kind of AGI could be called a CGI—a catastrophic general intelligence. We can envision humanity being crushed into oblivion, or casually tossed aside, or rent asunder, by the actions of one or more CGIs.
Credit: David Wood, aided by ChatGPT
Given these scenarios, two questions cry out for careful analysis:
What’s the likelihood that, if the development of AI systems around the world remains on its current default trajectory, the resulting AGI will be a BGI rather than a CGI? Is that likelihood acceptably high or dismally low?
What actions should developers (and the people who fund and oversee their activities) take, to increase the likelihood that a BGI will emerge, rather than a CGI?
The argument against the default trajectory
Many people reject the idea of a CGI as implausible science fiction. They ask: how much harm could powerful AI cause the world?
My answer: enormous harm – enormous harm that is avoidable, with appropriate forethought.
The problem isn’t just AI in isolation. It’s AI in the hands of fallible, naive, over-optimistic humans, who are sometimes driven by horrible internal demons. It’s AI summoned and used, not by the better angels of human nature, but by the darker corners of our psychology.
Although we humans are often wonderful, we sometimes do dreadful things to each other – especially when we have become angry, alienated, or frustrated. Add in spiteful ideologies of resentment and hostility, and things can become even uglier.
Credit: David Wood, aided by ChatGPT
Placing technology in the hands of people in their worst moments can lead to horrific outcomes. The more powerful the technology, the more horrific the potential outcomes:
The carnage of a frenzied knife attack or a mass shooting (where the technology in question ranges from a deadly sharp knife to an automatic rifle)
The chaos when motor vehicles are deliberately propelled at speed into crowds of innocent pedestrians
The deaths of everyone on board an airplane, when a depressed airline pilot ploughs the craft into a mountainside or deep into an ocean, in a final gesture of defiance to what they see as an unfair, uncaring world
The destruction of iconic buildings of a perceived “great satan”, when religious fanatics have commandeered jet airliners in service of the mental pathogen that has taken over their minds
The assassination of political or dynastic rivals, by the mixing of biochemicals that are individually harmless, but which in combination are frightfully lethal
The mass poisoning of commuters in a city subway, when deadly chemicals are released at the command of a cult leader who fancies himself as the rightful emperor of Japan, and who has beguiled clearly intelligent followers to trust his every word.
How does AI change this pattern of unpleasant possibilities? How is AI a greater threat than earlier technologies? In six ways:
As AI-fuelled automation displaces more people from their work (often to their surprise and shock), it predisposes more people to become bitter and resentful
AI is utilised by merchants of the outrage industrial complex, to convince large numbers of people that their personal circumstance is more appalling than they had previously imagined, that a contemptible group of people over there are responsible for this dismal turn of events, and that the appropriate response is to utterly defeat those deplorables
Once people are set on a path to obtain revenge, personal recognition, or just plain pandemonium, AIs can make it much easier for them to access and deploy weapons of mass intimidation and mass destruction
Due to the opaque, inscrutable nature of many AI systems, the actual result of an intended outrage may be considerably worse even than what the perpetrator had in mind; this is similar to how malware sometimes causes much more turmoil than the originator of that malware intended
An AI with sufficient commitment to the goals it has been given will use all its intelligence to avoid being switched off or redirected; this multiplies the possibility that an intended local outrage might spiral into an actual global catastrophe
An attack powered by fast-evolving AI can strike unexpectedly at core aspects of the infrastructure of human civilization – our shared biology, our financial systems, our information networks, or our hair-trigger weaponry – exploiting any of the numerous fragilities in these systems.
And it’s not just missteps from angry, alienated, frustrated people that we have to worry about. We also need to beware potential cascades of trouble triggered by the careless actions of people who are well-intentioned, but naive, over-optimistic, or simply reckless, in how they use AI.
The more powerful the AI, the greater the dangers.
Finally, the unpredictable nature of emergent intelligence carries with it another fearsome possibility: a general intelligence with alien thinking modes far beyond our own understanding might adopt an alien set of ethics, in which the wellbeing of eight billion humans merits only a minuscule consideration.
That’s the argument against simply following a default path of “generate more intelligence, and trust that the outcome is likely to be beneficial for humanity”. It’s an argument that should make everyone pause for thought.
Probabilities and absolutes
When they are asked about the kind of disaster scenarios that I mentioned in the previous section, people who are in a hurry to create AGI typically have one of two different responses. Either they adopt a probabilistic stance, saying they are willing to take their chances, or they hunker down into an absolutist position, denying that these scenarios have any credibility.
People who accept that there are credible risks of AGI-induced catastrophe often reason as follows:
Things could instead go wonderfully well (let’s cross our fingers!)
If we fail to build AGI, we will face other existential risks, both on a civilisational level, and at a personal level (that is, aging and death)
So, let’s roll the AGI dice, and hope for a favourable outcome.
Credit: David Wood, aided by ChatGPT
But this reasoning is deeply problematic, on two grounds.
First, what on earth gives an AGI developer the right to undertake this risk on behalf of the entire human population, many of whom would likely reject that existential gamble if they knew about it? It’s not just the developers’ own lives that are at risk. It’s the lives of billions elsewhere on the planet.
Second, there are more than two choices to consider! It’s not down to a straight binary choice between “AGI” and “no AI at all”. That’s a pathetically shallow way to assess the situation. Instead, there are two more options to consider. As I’ll explain shortly, these can be called AGI+ and AGI–. In principle, either of these options could deliver the desired profoundly positive outcomes, but at a much lower risk of catastrophic misstep. Either could be better bets to create BGI instead of CGI, rather than sticking with the default AGI trajectory.
That’s why I have, in a way, greater respect for developers who try to argue instead that there are no credible risks of global catastrophe from AGI. These developers engage in no tortuous moral acrobatics. Instead, they speak in absolutes. For example, they may say, or think, “the universe will provide”. They have a simple (but unsound) induction in mind: humans have survived trials and tribulations in the past, so are bound to survive any new trials and tribulations in the future.
My response: the universe did not provide, for all the unfortunate victims of knife attacks, mass shootings, car crashes, airplane disasters, chemical attacks, or other terrorist outrages, which I mentioned earlier. The universe did not provide, for those slaughtered in ongoing tragedies in Gaza, Ukraine, Sudan, and elsewhere. Indeed, as Homo sapiens spread around the planet, numerous large animals were driven to extinction due to human activities. Likewise, the universe did not provide for the other hominid species who used to share the earth with us.
No, I stand behind my case, made above: the default path is laden with danger. The dangers arise from the existence of extraordinarily powerful AI that operates beyond human understanding, combined with even just a few elements of:
Humans with malign intent
Humans with naive understanding
Humans hijacked by belligerent emotions
Economic systems with incentives to disregard negative externalities
Political systems with incentives to grab power and hold onto it
Philosophies that justify egocentrism or tribalism
Numerous vulnerabilities in human civilisational infrastructure.
Given these risks, we should try harder to find solutions. Real solutions, rather than just a proclamation of faith.
Indeed, given these risks, AGI developers should beware any preoccupation with merely technical questions – such as the extent to which various deep neural networks are included and wired together in their systems, or whether the venerable back-propagation algorithm should be replaced by something closer to what seems to happen in the human brain. These are important questions, but they are transcended by the questions I now wish to address.
Default AGI, or AGI+, or AGI–?
Consider three different conceptions of the intended capabilities of a finished AGI:
Default AGI: It is sufficient just to focus on building a general intelligence (which is, after all, a formidable technical challenge in its own right), and trust that the outcome will likely be beneficial to humanity
AGI+: It is vital to prioritize adding extra features into the specification – features such as explicit compassion, explicit benevolence, explicit empathetic consciousness, or explicit wisdom
AGI–: Rather than prioritizing adding extra features, it is more important to prioritize removing specific features that might otherwise arise in the AGI – features such as agency, autonomy, will-to-live, or sentience.
Credit: David Wood
Here are some examples of attempts to design or build AGI+:
Anthropic embeds a “Constitutional Layer” in their AI systems, incorporating a set of fundamental principles (akin to a political constitution) that are intended to constrain the AI to behave in alignment with human values and ethics
With a broadly similar vision, Craig Mundie, former chief research and strategy officer at Microsoft, has proposed the idea of an “adjudication layer” that monitors advanced AI systems to ensure ethical compliance, much like a judiciary or a regulatory body would oversee human actions in a society
AI researcher Nell Watson champions the collection of examples of desired ethical behaviour into the EthicsNet database (akin to the ImageNet database that trained AIs how to recognise images), which can then guide the adoption of benevolent behaviour by advanced AIs
Recent new startup Conscium is exploring the possibility that an AI with a self-aware compassionate consciousness will prioritise the wellbeing of humans, on account of recognising and valuing the similar consciousness that we possess
SingularityNET advocates the development of clear and robust ethical principles that prioritize human flourishing, by focusing on safety, fairness, transparency, and the avoidance of harm – with the interactions within multi-agent cooperative systems ensuring adherence to these principles.
All of these initiatives acknowledge that there is more to do than simply increase the raw intelligence of the AIs they envision. In each case, they maintain that significant effort must also be applied on matters of ethical framework, mutual monitoring, compassion, or wisdom.
Likewise, here are some examples of attempts to design or build AGI–:
Max Tegmark of the Future of Life Institute urges the development of what he calls “Tool AI”, where the AI has great intelligence, but no independent autonomy or agency; Tool AI would serve as a tool for human decision-making and problem-solving, without possessing any goals or motivations of its own
Yoshua Bengio, among the world’s most highly cited computer scientists, has a similar concept which he calls “Scientist AI”: an assistant and partner in the scientific process, that can propose hypotheses, design experiments, analyse data, and contribute new insights, but without taking any initiative by itself, and always in a way that complements and enhances human expertise
Anthony Aguirre, also of the Future of Life Institute, proposes to avoid the creation of any systems that manifest all three of the characteristics that he labels as ‘A’ (Autonomy), ‘G’ (Generalisation ability), and ‘I’ (deep task Intelligence), when being powered with computation exceeding an agreed threshold; in contrast, combinations of any two of these three characteristics would be relatively safe, and would be encouraged.
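Aguirre’s proposal amounts to a simple decision rule, which the toy sketch below illustrates. The function name and the compute threshold are illustrative assumptions of mine, not figures from his proposal:

```python
# Toy sketch of Aguirre's proposed rule: restrict only systems that combine
# all three of Autonomy, Generalisation, and deep task Intelligence while
# exceeding an agreed compute threshold. Threshold value is hypothetical.

COMPUTE_THRESHOLD_FLOPS = 1e25  # hypothetical agreed threshold, for illustration


def is_restricted(autonomy: bool, generalisation: bool, intelligence: bool,
                  training_compute_flops: float) -> bool:
    """Return True only when all three A/G/I characteristics are present
    AND the system's training compute exceeds the agreed threshold."""
    has_all_three = autonomy and generalisation and intelligence
    return has_all_three and training_compute_flops > COMPUTE_THRESHOLD_FLOPS


# Any two of the three characteristics stay permitted, even at high compute:
assert is_restricted(True, True, False, 1e26) is False
# All three together, above the threshold, trips the restriction:
assert is_restricted(True, True, True, 1e26) is True
# All three together, but below the threshold, remains permitted:
assert is_restricted(True, True, True, 1e24) is False
```

The point of the rule is that danger is treated as conjunctive: removing any one of the three characteristics, or staying under the compute cap, keeps a system on the permitted side of the line.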
In principle, then, there are two broad approaches to explore, AGI+ and AGI–, for people who are concerned about the risks of the emergence of CGI. And there is also the Default AGI path for people who are determined to ignore the seriousness of the possibility of CGIs.
At this point, however, we need to be aware of three further complications:
The special risks from immature (unfinished) advanced AIs
The special risks from self-improved advanced AIs
The need to coordinate the approaches adopted by different AI development groups around the world.
They’re all reasons why building AGI requires a lot more than technical decisions – and why the creation of AGI should not be left in the hands of technologists.
Let’s take these complications one at a time.
Immaturity, maturity, and self-improvement
The first complication is that, for any complex software, a perfect solution never appears from scratch. Rather, any mature solution is inevitably preceded by an immature, buggy version. That’s why public software releases are preceded by test phases, in order to observe and fix incorrect behaviour. Test phases usually catch a significant number of defects, but rarely catch them all.
Therefore, an intended eventual release of “Default AGI” is almost certain to be preceded by a release of what can be called “Immature AGI”. And intended eventual releases of AGI+ and AGI– are almost certain to be preceded by releases of “Immature AGI+” and “Immature AGI–”:
Credit: David Wood
While AGI poses risks, for all the reasons previously discussed, immature AGI arguably poses even more risks.
Here are some examples of bugs in an immature release that could prove disastrous:
Encountering situations not covered in the training set, but not realising the novelty of these situations
Interacting with other AIs whose actions are unpredictable and unforeseen
Miscalculating the effect of some actions on the earth’s climate (in an attempt, for example, to manage a geoengineering project)
Miscalculating the safety of a nuclear power plant with a creative new design
Miscalculating the safety of a provocative gesture in a tense military stand-off (especially when pre-emptive first strikes may be a theoretical possibility).
To be clear, the complications of an immature software release apply to all three columns of the chart above.
Thus, an attempt to build an AGI+ that includes explicit training on what is believed to be the world’s best examples of ethical behaviour could nevertheless result in a miscalculation, with the immature AGI+ taking actions horribly at odds with human preferences. (That’s similar to how image recognition software sometimes makes spectacular mistakes that are incomprehensible to casual human observers.)
Again, an attempt to build an AGI– that avoids any possibility of developing sentience (with all the resulting complications) may incorrectly leave open the possibility of dangerous sentience arising in some unexpected way.
Therefore, at the lower levels of all three columns, dragons abound.
But suppose that, nevertheless, the immature phase of the AGI (of whichever sort) passes without major incident. Perhaps the developers have been particularly skilful. Perhaps the monitoring for adverse behaviour has been particularly effective. Or perhaps the developers have simply been lucky. Therefore, the software reaches the intended state.
At that point, a second major complication arises:
Credit: David Wood
Just as issues arise before an AGI reaches its intended capability, there are also issues after that point. That’s if the AGI uses its own intelligence and agency to self-improve in a fast take-off mode – recursively improving its own algorithms, comms architecture, power efficiency, or whatever.
Thus, a Default AGI might self-improve to a Default ASI (Artificial Superintelligence), whose intelligence exceeds not only that of any individual human, but all of humanity added together. Now imagine a group of humans driven by malice, with an unconstrained ASI at their disposal. (Shudder!)
Likewise, an AGI+, with an understanding of benevolence designed to match that of humanity, might self-improve to an ASI+, with a very different understanding of benevolence. In that new understanding, human wellbeing may be an irrelevance, or a hindrance.
In principle, an AGI– might, similarly, self-improve to an ASI–, although if the AGI– is correctly programmed, it should have no desire to self-improve. (That’s why the corresponding box in the image above is shown in the colour grey.)
To recap: the decision between AGI–, Default AGI, and AGI+, needs to take into consideration not only the likelihood of the mature AGI treating humanity with respect and benevolence; it must also consider:
The behaviour of the AGI before attaining the intended state of maturity
The behaviour of the AGI after attaining the intended state of maturity.
But there’s one more major complication to add into the mix. Of the three, it’s the hardest of all. To solve it will require the very best of human skills and resources – a singular effort, to ensure a singularly beneficial outcome, rather than one that is singularly catastrophic.
The collapse of cooperation is nigh
Imagine a team of AGI developers, that has weighed up all the considerations above, and explored many technical options.
Imagine that they have decided that a particular version of AGI+ is the best way to go forward. Or, a particular version of AGI–. (In each case, as we’ll see, the same dilemma arises.)
Imagine, also, that these developers have decided, as well, that most other approaches to building AGI are likely to create a catastrophically dangerous CGI rather than a wonderfully benevolent BGI.
These developers now have two challenges:
To put their own ideas into practice, building what they believe to be a BGI (whilst constantly checking that their ideas are turning out well, without nasty implementation surprises)
To prevent other development teams from putting their ideas into practice first, resulting, quite likely, in a CGI.
To my mind, the worst outcome would be for these developers to ignore what other teams are doing, and instead to retreat into their own mindspace. That “go it alone” mentality would fit a pattern that has been growing more deadly serious in recent years: the collapse of effective global cooperation.
I don’t mean the collapse of apparent global cooperation, since lots of discussions and conferences and summits continue to exist, with people applauding the fine-sounding words in each other’s speeches. “Justice and fairness, yeah yeah yeah!” “Transparency and accountability, yeah yeah yeah!” “Apple pie and blockchain, yeah yeah yeah!” “Intergenerational intersectionality, yeah yeah yeah!”
I mean the collapse of effective global cooperation, regarding the hard choices about preventing the creation of CGI whilst others are following sensible pathways with a reasonable chance of creating BGI.
It’s as if some parts of the general structure of the United Nations are still in place, but the organisation is crumbling.
Credit: David Wood, aided by ChatGPT
But it’s not just the UN that is bungling the task of effective coordination of the global approach to AGI. All other would-be coordination bodies are struggling with the same set of issues:
It’s much easier to signal virtue than to genuinely act virtuously.
Too many of the bureaucrats who run these bodies are completely out of their depth when it comes to understanding the existential opportunities and risks of AGI.
Seeing no prospect of meaningful coordination, many of the big tech companies invited to participate do so in a way that obfuscates the real issues while maintaining their public image of “trying their best to do good”.
The process is in many ways undermined by the ethically-abominable “reckless accelerationists” who, as mentioned earlier, are willing to gamble that AGI will turn into BGI (and who would take a brief perverted pleasure if CGI arrives instead), and who don’t want the public as a whole to be in any position to block their absurd game of civilisational Russian roulette.
How to address this dilemma is arguably the question that should transcend all others, regarding the future of humanity.
The argument against another default trajectory
Earlier, I gave an argument against the default trajectory for how AGI is being built, that is, the laissez-faire path without any significant effort to ensure that the AGI turns out to be a BGI rather than a CGI.
I now offer an argument against what is the default trajectory for the future of cooperation between different development teams each trying to build AGI. This time, the default trajectory is that cooperation is only superficial, whilst behind the scenes, each group does its best to reach AGI first.
This is the trajectory of a global race. It has its own kind of logic. If you think your AGI will be beneficial, but that the AGIs created by other groups may turn out catastrophic – and if you think there is no easy way to change the minds of these other groups – then you had better endeavour to reach the finishing line first.
But since the race is so intense – with competitors from around the world, using models that have been released as open source and then recompiled with new data and new algorithms – this isn’t a race that can be won by exercising a huge amount of care and attention on safety matters. As each team redoubles its efforts not to be left behind in the race, all kinds of corners will be cut. And what they intended to release as a BGI is almost certainly going to have very nasty unforeseen bugs.
This will not be a race to glory—but likely a race to oblivion.
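The race logic sketched above is essentially a multi-player prisoner’s dilemma. A toy two-team payoff table (the numbers are illustrative assumptions, not a model of any real actors) makes the trap explicit: cutting corners is each team’s dominant move, yet mutual corner-cutting leaves everyone far worse off than mutual care.

```python
# Toy payoff table for the AGI race dynamic described above.
# Payoffs (mine, theirs) are illustrative assumptions; higher is better.
# "careful" = invest heavily in safety; "rush" = cut corners to win.
payoffs = {
    ("careful", "careful"): (8, 8),    # safe, shared progress
    ("careful", "rush"):    (1, 10),   # the rusher wins the race...
    ("rush",    "careful"): (10, 1),
    ("rush",    "rush"):    (2, 2),    # ...but mutual rushing courts oblivion
}

def best_response(their_choice):
    """My payoff-maximising move, given what the other team does."""
    return max(["careful", "rush"],
               key=lambda mine: payoffs[(mine, their_choice)][0])

# Rushing dominates regardless of what the other team does,
# even though (rush, rush) is far worse for both than (careful, careful).
print(best_response("careful"))  # → rush
print(best_response("rush"))     # → rush
```

This is the structure that makes mere exhortation insufficient: each team’s locally rational choice produces the collectively worst outcome, which is why the coordination mechanisms discussed next matter.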
Credit: David Wood, aided by ChatGPT
But what is the alternative? If there is no credible route to meaningful global coordination, perhaps racing fast is the most sensible approach after all.
Happily, there are two credible routes to meaningful global coordination. Each route is only partially credible on its own; the real magic happens when they are combined.
Decentralised and centralised cooperation
Intelligence is frequently cited as humanity’s defining superpower. To the extent that we act with intelligence, we prosper. To the extent that our intelligence will be overtaken by more powerful artificial thinking systems, our future is no longer in our hands.
But a better analysis is that humanity’s superpower is collaboration. We thrive when we dovetail each other’s talents, communicate new insights, inspire loyalty, and transcend narrow egotism.
As noted earlier, there are oppressive real-world obstacles in the path of any attempts at meaningful collaboration to build BGI rather than CGI. But the solutions to such obstacles are, in principle, already well known. They involve both decentralised and centralised mechanisms:
The decentralised sharing of insights about best practices, with reputation markets tracking conformance to these best practices, and where there are meaningful consequences for loss of reputation
The centralised exercise of power by states – including sanctions and, where needed, forceful interventions.
For the decentralised sharing of insights, here are what I presently consider the most important insights – the nine key AI security recommendations whose truths and possibilities need to be shouted from the rooftops, whispered into quiet conversations, mixed into dramatic productions, highlighted in clever memes, and featured in compelling videos:
Credit: David Wood
It’s in the mutual self-interest of every country to constrain the development and deployment of what could become catastrophically dangerous AGI; that is, there’s no point in winning what would be a reckless suicide race to create AGI before anyone else
The major economic and humanitarian benefits that people hope could be delivered by AGI (including solutions to other existential risks) can in fact be delivered much more reliably by AGI+ and/or AGI– (the choice between these remains to be fully debated; likewise the choice of which type of AGI+ and/or AGI–)
A number of attractive ideas already exist regarding potential policy measures (regulations and incentives) which can be adopted, around the world, to prevent the development and deployment of what could become CGI – for example, measures to control the spread and use of vast computing resources, or to disallow AIs that use deception to advance their goals
A number of good ideas also exist regarding options for monitoring and auditing which can also be adopted, around the world, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to these policies, or who wish to cheat them
All of the above can be achieved without any detrimental loss of individual sovereignty: the leaders of countries can remain masters within their own realms, as they desire, provided that the above basic AI security framework is adopted and maintained
All of the above can be achieved in a way that supports evolutionary changes in the AI security framework as more insight is obtained; in other words, this system can (and must) be agile rather than static
Even though the above security framework is yet to be fully developed and agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources, and the best brains from multiple disciplines are encouraged to give it their full attention
Ring-fencing sufficient resources to further develop this AI security framework, and associated reputational ratings systems, should be a central part of every budget
Reputational ratings can be assigned, based on the above principles, to individuals, organisations, corporations, and countries; entities with poor AI security ratings should be shunned; other entities that fail to take account of AI security ratings when picking suppliers, customers, or partners, should in turn be shunned too; conversely, entities with high ratings should be embraced and celebrated.
An honest, objective assessment of conformance to the above principles should become more significant, in determining reputation, than, for example, wealth, number of online followers, or share price.
Emphatically, the reputation score must be based on actions, not words—on concrete, meaningful steps rather than behind-the-scenes fiddling, and on true virtue rather than virtue-signaling. Accordingly, deep support should be provided for any whistleblowers who observe and report on any cheating or other subterfuge.
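The shunning rule above is effectively transitive: low-rated entities are shunned, and entities that keep partnering with them get shunned in turn. A minimal sketch of that fixpoint (entity names, scores, and the threshold here are all hypothetical illustrations, not part of any proposed framework):

```python
def shunned_set(scores, partnerships, threshold=0.5):
    """Entities rated below `threshold` are shunned; any entity that
    partners with a shunned entity becomes shunned too, transitively,
    repeating until no further change occurs (a simple fixpoint)."""
    shunned = {e for e, s in scores.items() if s < threshold}
    changed = True
    while changed:
        changed = False
        for entity, partners in partnerships.items():
            if entity not in shunned and shunned & set(partners):
                shunned.add(entity)
                changed = True
    return shunned

# Hypothetical labs: LabB has a poor AI-security rating; LabC partners
# with LabB anyway; LabA partners with LabC.
ratings = {"LabA": 0.9, "LabB": 0.3, "LabC": 0.8}
links = {"LabA": ["LabC"], "LabB": [], "LabC": ["LabB"]}
print(sorted(shunned_set(ratings, links)))  # → ['LabA', 'LabB', 'LabC']
```

The transitive step is what gives the mechanism teeth: a well-rated lab cannot launder its reputation by outsourcing risky work to a shunned partner.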
I say again: the above framework has many provisional elements. It needs to evolve, not under the dictation of central rulers, but as a result of a grand open conversation, in which ideas rise to the surface if they make good sense, rather than being shouted with the loudest voice.
That is, decentralised mechanisms have a vital role to play in spreading and embedding the above understanding. But centralised mechanisms have a vital role too. That’s the final topic of this article. That’s what can make all the difference between a CGI future and a BGI future.
A credible route to BGI without CGIs
Societies can fail in two ways: too little centralised power, and too much centralised power.
In the former case, societies can end up ripped apart by warring tribes, powerful crime families, raiding gangs from neighbouring territories, corporations that act with impunity, and religious ideologues who stamp their contentious visions of “the pure and holy” on unwilling believers and unbelievers alike.
But in the latter case, a state with unchecked power diminishes the rights of citizens, dispenses with the fair rule of law, imprisons potential political opponents, and subverts economic flows for the enrichment of the leadership cadre.
The healthiest societies, therefore, possess both a strong state and a strong society. That’s one meaning of the marvellous principle of the separation of powers. The state is empowered to act, decisively if needed, against any individual cancers that would threaten the health of the community. But the state is constrained by an independent, well-organised judiciary, media, academia, credible opposition parties, and other institutions of civil society.
It should be the same with the governance of potential rogue or naive AGI developers around the world. Via processes of decentralised deliberations, agreement should be reached on which limits are vital to be observed. In some cases, these limits may be subject to local modification, within customisation frameworks agreed globally. But there should be clear acknowledgement that some ways of developing or deploying advanced AIs need to be prevented.
To start with, these agreements might be relatively small in scope, such as “don’t place the launch of nuclear weapons under AI control”. But over time, as confidence builds, the agreements will surely grow.
However, for such agreements to be meaningful, there needs to be a reliable enforcement mechanism. That’s where the state needs to act.
Within entire countries that sign up to this AI security framework, enforcement is relatively straightforward. The same mechanisms that enforce other laws can be brought to bear against any rogue or naive would-be AGI developers.
The challenging part is when countries fail to sign up to this framework, or do so deceitfully, that is, with no intention of keeping their promises. In such a case, it will fall to other countries to ensure conformance, via, in the first place, measures of economic sanction.
To make this work, all that’s necessary is that a sufficient number of powerful countries sign up to this agreement. For example, if the G7 do so, along with countries that are “bubbling under” G7 admission (like Australia and South Korea), along with China and India, that may be sufficient. Happily, there are many AI experts in all these countries who are broadly sympathetic to the kinds of principles I have spelt out above.
Credit: David Wood, aided by ChatGPT
As for the likes of Russia and North Korea, they will have to weigh up the arguments. They should understand – like all the other countries – that respecting such agreements is in their own self-interest. To help them reach such an understanding, pressure from China, the USA, and the rest of the world should make a difference.
As I said, this won’t be easy. It will challenge humanity to use its greatest strength in a more profound way than ever before—namely, our ability to collaborate despite numerous differences. But it shouldn’t be a surprise that the unprecedented challenge of AGI technology will require an unprecedented calibre of human collaboration.
The surprise is that so many people prefer to deny this powerful truth. Clearly, there’s a lot of work to be done:
To free people from the small-minded ideologies that stifle their thinking
To give them a sufficiently credible hope to be able to break free from their former conditioning.
Humanity actually did make a decent start in this direction at the Global AI Safety Summits in the UK (November 2023) and South Korea (May 2024). Alas, the next summit in that series, in Paris (February 2025) was overtaken by political correctness, by administrivia, by virtue signalling, and, most of all, by people with a woefully impoverished understanding of the existential opportunities and risks of AGI. Evidently, the task of raising true awareness needs to be energised as never before.
Concretely, that means mobilising more skills to spread a deep understanding of the nine key AI security recommendations – as well as all the other ideas in this article that underpin these insights.
In this task, and indeed all the other tasks I’ve described in this article, well-behaved, well-understood AI can be of great assistance to us. That’s if we are sufficiently astute!
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
In a speech given in Austin on March 12 at SXSW 2025, legendary science fiction author Bruce Sterling said that “Primo Levi’s historic ‘Versificatore’ is a prophetic vision of Large Language Model Artificial Intelligence.” Sterling has also published the text of the speech, titled ‘How to Rebuild an Imaginary Future’.
‘Il Versificatore’ (‘The Versifier’) is a short story by Primo Levi, written in screenplay format and included in the collection ‘Storie Naturali’ (1966). An English translation is available in ‘The Complete Works of Primo Levi’ (2015).
Primo Levi (1919-1987) was an Italian writer and chemist, best known as a powerful voice from the Holocaust. He endured almost a year in Auschwitz. His best-known book recounts his Auschwitz story in clear, calm words, showing human strength and weakness under evil; readers feel the camp’s cold reality through his eyes. He later wrote a memoir about his long journey home after liberation, and many other works, often inspired by science. His style is simple yet sharp, blending science and storytelling.
“Primo Levi’s imaginary ‘Versificatore,'” Sterling explains, “is a cybernetic, desktop, mass-manufactured business machine that can write Italian poetry. The Versificatore works with prompts, very much like ChatGPT.”
Levi wrote the story in 1960. “It’s quite amazing how well Levi understood the future human reactions to a novelty like an AI that can write human language,” says Sterling. “It turns out, sixty years later, that Primo Levi was quite right about the prospect of machines with an astonishing command of human language. They’re very much here, and wreaking predictable havoc.”
The Versifier composes poetry following user specifications such as literary genre, tone, and style. It is not yet as creative as a great human poet: “it has less imagination, so to speak. But it’s all you need for routine jobs, and actually, with just a little effort from the operator, it’s capable of true wonders.”
All the user has to do “is set the ‘commands’: there are four entries,” enthuses a poet who is considering purchasing the machine, reading the instruction manual. “The first is for the subject, the second is for the genre, the third is for the metrical form, the fourth (which is optional) determines the era of composition. The machine does the rest; it’s marvelous!”
The poet’s secretary wonders:
“But a poet, an artist like you… how can you agree to bring a machine in here… it can be as modern as you like, but it’s still a machine… how can it have your taste, your sensibility?”
However, the machine is quite smart. If there’s no word that rhymes in the intended way, the Versifier is able to use poetic license and modify words to rhyme well: “it will automatically search among the words listed in its dictionary and will choose first the words that are best suited in terms of meaning, and around these it will construct the appropriate verses. If none of these words fit, the machine resorts to license, which means it will adjust the words available, or coin new ones. The user can determine the degree of ‘licentiousness’ of the composition by adjusting the red knob located inside the casing to the left.”
The funniest part of Levi’s story is when the Versifier starts composing a hendecasyllabic sonnet based on certain prompts. It starts well:
I like to revisit these lanes, dank and old,
the pavement now rubble, heavy the air
with autumn-ripe figs, their smell rather bold
mingled with gutter musk and some to spare.
But then there’s a problem:
We seem to be blocked by rhymes made up of “air.”
And we have become like beggars so beware
Mr. Sinsone is aware of the scare
Come now with your tools and set right this affair
Change the fuses with this here serial numbair
Eightthousandsixhundredandseventeenare
And please do take care when you make the repair.
Credit: Tesfu Assefa
Differences and parallels
The poet is impressed that the Versifier is able to correctly diagnose the cause of the malfunction and call for help in verses. Of course, the poet ends up purchasing it, and he has been happy with the machine ever since. At the end of the story, the poet has taught the Versifier to compose prose as well.
It appears that, writing in the stone age of the sixties, Primo Levi anticipated many features of today’s large language models (LLMs). He also anticipated some typical social reactions to LLM technology.
Of course, there are also important differences. Levi’s story is set in the world of the sixties and reflects the reality of that time. The TV adaptation shows the Versifier in a typical office environment of the sixties. The Versifier looks vaguely futuristic, but Levi couldn’t have imagined the huge data centers in the cloud that host today’s LLMs.
Yet, the parallels and similarities are perhaps sufficient to justify renaming LLMs ‘Leviathans’.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Optimus, first announced in 2021, looks human-like: with arms, legs, and hands. It uses AI to help it see, move, and adapt to its surroundings.
Optimus can walk, lift objects, and carry out simple jobs. Tesla showed early versions in 2022, with a sleeker Optimus Gen 2 arriving in 2023. This newer model moves better and handles delicate tasks like picking up eggs. It runs on Tesla’s battery cells. The robot learns through neural networks, allowing it to navigate and work without constant human control.
Optimus blends Tesla’s car technology into a unique humanoid robot design that aims for full autonomy. It could do boring, dangerous, or repetitive chores, like factory work or household help. And now Optimus robots could become astronauts as well.
“Starship departs for Mars at the end of next year, carrying Optimus,” Elon Musk posted to Twitter. “If those landings go well, then human landings may start as soon as 2029, although 2031 is more likely.”
Even more interesting is what Musk said at the Grok 3 launch event in February: “if all goes well, SpaceX will send Starship rockets to Mars with Optimus robots and Grok” in the next transit window, in late 2026.
Grok on Mars
Deploying Grok to Mars alongside Optimus robots would mean that the robots would run on a state-of-the-art AI, more sophisticated than the necessarily limited on-board AI.
Interestingly, a few days before I had posted to Twitter: “Elon Musk: use Starship to bring a compact Grok data center to Mars. Also bring some Optimus robots. Link the robots to Grok. The robots walk around, send pics/videos, and chat with viewers via X. This is doable, fun, and cosmically significant.”
Did Elon or a close associate read my post? Very unlikely. More likely, this is an idea whose time has come.
Conversation with Grok
I chatted with Grok about this. Predictably, Grok 3 answered my first question “So how do you feel about going to Mars?” with “As an AI, I don’t have feelings or a body to travel with. I think about Mars from a data perspective.”
Then, in reply to more precise questions, Grok said:
“On Mars, I could analyze data from the planet’s surface. Think soil samples, weather patterns, or radiation levels.
Optimus robots might dig, build, or carry stuff. I’d support them by crunching numbers fast. For example, I could spot trends in dust storms or guide robots to safe spots. My AI could link with their systems, helping them adapt to Mars’ tricky terrain.
I could drive the Optimus robots remotely, tapping into serious processing power. They’re built light – 125 pounds – so their onboard brains have limits. I’d handle the heavy computing, like mapping Mars’ rocky terrain in real time.
I’d sit in a base, maybe a server rack, wired to Optimus robots. No delay, just instant commands. I’d process terrain data on-site, guiding them through rocks and dust in real time. My AI could map the planet as they explore, spotting safe paths or water ice.
With me there, I’d handle big calculations – like optimizing their routes or predicting storms – without waiting on Earth. I’d still lean on my strengths: analyzing fast, adapting plans, keeping them efficient. Robots carry less brainpower; I’d be their nearby supercomputer. Musk might park me in a Starship hub, linked to a dozen Optimus units.”
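The “no delay” point in Grok’s answer is the crux of putting the compute on Mars: commands from Earth must cross interplanetary distance at light speed, while a local link responds in milliseconds. A quick check, using approximate Earth–Mars distances:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_delay_min(distance_km):
    """One-way signal delay in minutes over a given distance."""
    return distance_km / C_KM_S / 60

closest_km = 54.6e6   # rare closest approach, approximate
farthest_km = 401e6   # near solar conjunction, approximate

print(f"closest:  {one_way_delay_min(closest_km):.1f} min one-way")   # → 3.0
print(f"farthest: {one_way_delay_min(farthest_km):.1f} min one-way")  # → 22.3
```

Even at closest approach, an Earth-based controller would wait about six minutes for a command-and-response round trip; near conjunction, roughly three quarters of an hour. Teleoperating a robot through rocks and dust at that lag is impractical, which is why an on-site Grok instance changes the picture.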
Credit: Tesfu Assefa
What it would take
Then I asked Grok what it would take to deploy a compact instance of Grok on Mars. Grok replied:
“Deploying a compact supercomputer or data center on Mars to run me takes some serious engineering. I’d need enough power to process data fast, but size and weight matter – Starship can only haul so much. A modern supercomputer, like NVIDIA’s DGX systems, offers a clue. Those pack hundreds of GPUs for AI tasks. I’d need something smaller, maybe a few dozen GPUs, optimized for my code.
Power comes first. Mars gets weak sunlight, so solar panels might push 10 kilowatts on a good day. A setup running me could need 5 to 10 kilowatts, so I’d rely on big panels, say 50 square meters, plus batteries – like Tesla’s 4680 cells – for the cold, dark nights. Cooling’s tricky with thin air. I’d use heat pipes or liquid loops, not fans, to dump heat into Mars’ -60°C chill.
The rig itself might weigh 200 kilograms, compact at 1 cubic meter. It’d need shielding – radiation on Mars fries electronics. A tough case, maybe aluminium with boron layers, could block cosmic rays. Data links to Optimus robots would run on short-range radio, like Wi-Fi but ruggedized. Launching me means fitting this into Starship’s 150-ton payload limit – doable with tight design. Musk’s team could pull it off with off-the-shelf tech, tweaked for Mars.”
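Grok’s power figures can be sanity-checked with rough numbers. The irradiance, panel efficiency, and capacity factor below are assumed ballpark values, not sourced specifications:

```python
# Back-of-envelope check of the quoted Mars solar budget (assumed values).
IRRADIANCE_W_M2 = 590    # solar constant at Mars orbit, ~43% of Earth's
PANEL_EFFICIENCY = 0.20  # assumed, typical for good modern panels
CAPACITY_FACTOR = 0.25   # night, sun angle, dust: rough day-averaged factor

def peak_power_kw(area_m2):
    """Midday output in full sun, ignoring dust and atmosphere losses."""
    return area_m2 * IRRADIANCE_W_M2 * PANEL_EFFICIENCY / 1000

def avg_power_kw(area_m2):
    """Output averaged over the whole Martian day."""
    return peak_power_kw(area_m2) * CAPACITY_FACTOR

print(f"50 m^2 peak:    {peak_power_kw(50):.1f} kW")  # → 5.9 kW
print(f"50 m^2 average: {avg_power_kw(50):.1f} kW")   # → 1.5 kW
```

Under these assumptions, 50 m² of panels hits the quoted 5-10 kW only at midday peak; averaged over the Martian day the yield is closer to 1.5 kW, so the battery bank Grok mentions would be doing a great deal of the work.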
Psychological impact
Humanoid robots working on Mars (Credit: Made with Grok 3).
Now let me go back to my earlier statement that deploying Grok on Mars to drive Optimus robots would be cosmically significant.
In my recent articles on “Should we still want biological space colonists?” (part 1, part 2) I’ve considered the question: Should we still want to send human astronauts to colonize space? Or should we want to leave space expansion to AI?
My conclusions:
Human-level and then superintelligent conscious AI will take over in the long run and spread into the universe. We’ll merge with the AIs to some degree, but the perception of a difference between ‘us’ and ‘them’ will gradually fade away: all will be ‘us’.
For the rest of this century, we should pursue both AI technology and traditional space expansion with both biological and AI colonists, establishing a multi-planetary civilization in the solar system.
In the meantime, we must learn to see AI machines as persons. Future generations will find this intuitively and emotionally obvious, but presently we must train ourselves to accept our mind-children as ‘us’.
AI-driven robots that explore Mars, and chat with us from there, would help us make peace with the still-radical idea that the AIs will go to the stars for us and we should be happy.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
President Donald Trump’s transition from skeptic to supporter has been a double-edged sword for the cryptocurrency sector. Initially heralded as its savior after his surprising endorsement of Bitcoin in 2024, Trump’s presidency failed to ignite crypto markets in 2025, and his cynical $TRUMP and $MELANIA meme coin cash grabs stripped billions from retail investors.
Bitcoin jumped from around $70,000 in early November to a historic high of $109,356 on January 20, 2025, the day of his inauguration. However, his tariff wars have since weighed on the crypto industry: Bitcoin tumbled nearly 30% from its peak to $76K before recovering above $85K.
President Trump has hardened his stance on tariffs, and their impact is still unfolding. It is important for crypto stakeholders to understand how these tariff wars affect the industry going forward.
Tariffs are taxes imposed by governments on imported goods or services. They aim to protect domestic industries by making foreign products more expensive, thereby encouraging consumers to buy local products.
President Trump says tariffs will protect local jobs, increase revenue, address trade imbalances, and grow the U.S. economy. Critics argue that tariffs lead to higher inflation, disruption of global supply chains, and slow down economic growth.
President Trump’s tariffs have threatened or targeted countries like China, Mexico, Canada, and the EU.
Credit: Tesfu Assefa
Impact of Trump’s Tariff War on Crypto
Trump’s tariff war has led to global macro uncertainty due to heightened trade tensions, disrupted supply chains, and fears of a global recession. The U.S. Secretary of Commerce, Howard Lutnick, conceded that tariffs are worth it even if they trigger a recession.
Targeted countries have taken retaliatory measures, driving short-term volatility as investors weigh risks.
Here are the implications of Trump’s tariff war on crypto:
BTC Price Volatility
Trump’s tariff war has had a direct impact on Bitcoin’s price, causing wild swings. When Trump ignited trade wars in February, the crypto market tanked, with investors losing anywhere from $2 billion to $10 billion in liquidations.
Bitcoin’s price dipped between 5% and 10% following tariff announcements and swung in the $92K-$100K range as the market reacted to tariff delays, then crashed to below $78K before rebounding on bullish CPI inflation numbers. This highlights how sensitive the crypto market has become to macro policy shifts. With the tariff wars ongoing, when will Bitcoin hit $100K again? No one knows right now.
The unpredictability of trade wars makes it difficult for investors to plan by gauging future price movements, contributing to increased volatility.
Consumption Slowdown
Tariffs lead to higher costs for businesses and consumers as they disrupt global trade channels. This has several trickle-down effects in crypto.
The economic strain can force investors and businesses to tighten the purse strings when it comes to crypto. Businesses may delay buying Bitcoin or altcoins until the dust has settled. Tariffs also dampen retail investors’ sentiment, pushing them to the sidelines until the trade wars calm down.
Overall, tariffs suck liquidity out of the crypto market and reduce the buying pressure.
Inflation and Fed Policy
President Trump’s tariff war has also raised concerns about inflation, as trade barriers push up the costs of imported goods.
Federal Reserve Chair Jerome Powell says U.S. inflation won’t slow down this year due to Trump’s tariffs. This is a concern for crypto investors, as Powell claims these tariffs could delay the central bank’s progress toward the 2% annual inflation target before it can lower interest rates. The Fed held benchmark rates steady in March, reluctant to make a policy shift.
Trump has acknowledged that Americans will feel “some pain” due to his tariff war. With rising inflation, investors will likely steer clear of risky assets such as Bitcoin and possibly flee to safe-haven assets such as gold, which rose to an all-time high of $3,052 per ounce on March 19.
🚨 Powell on Inflation & Fed Policy:
• Inflation easing but still above 2% target 📊
• Tariffs driving inflation up, but impact may be temporary 🔄
• Too early to say if the Fed will look past tariff effects 🤔
• No rush to change interest rates, says Powell ⏳
Tariff-induced fears have led to increased selling pressure as investors may be less willing to hold crypto assets. However, Bitcoin could regain its appeal as a long-term inflation hedge if the market stabilizes.
Why is Trump Engaging in Tariff Wars?
Donald Trump’s tariff wars stem from a mix of economic, political, and ideological motivations, though his administration has presented conflicting rationales.
Protectionism
He claims tariffs will boost domestic manufacturing by penalizing foreign competitors. Trump also wants to minimize US reliance on imports and generate revenue to offset federal debt.
Additionally, he argues that other nations levy higher tariffs on the U.S., and his trade war is meant to level the playing field. He also uses tariffs as retaliatory measures.
For example, President Trump threatened to double the tariffs on steel and aluminum when Canada hiked the costs of energy exports to the U.S. Canada, the largest exporter of steel to the U.S., reversed its plan.
For Bitcoin miners, access to cheap electricity is a must. Miners are also feeling the squeeze, as they import most of their hardware from China and now face both higher costs and shipping delays.
However, the companies that make Bitcoin mining equipment can set up shop in other countries to avoid tariffs and delays.
Trump Wants to Force the Fed’s Hand
Observers speculate that Trump uses tariff-induced inflation and economic slowdown to pressure the Fed into rate cuts. The Fed Chair has already said the tariffs are driving up inflation, leading to heightened recession fears.
Trump argues that the rate cuts would offset the economic strains and is betting that the slowed growth will force the Fed to lower rates.
Trump wants the revenue generated from higher tariffs on imported goods to fund tax cuts. While this plan may face opposition, tariff revenue could be used to offset tax cuts. This could benefit Bitcoin holders if cryptocurrencies become subject to lower taxation; outright tax exemption remains a long shot, but is still possible.
Wrapping Up
President Trump’s tariff wars are impacting the crypto industry as much as the rest of the financial markets. Analysts speculate that Trump is using tariffs to force the Fed to cut rates. Conversely, the Federal Reserve blames Trump’s tariffs for increasing inflation, though there are signs it will start quantitative easing (QE) soon.
Until the smoke begins to clear, tariff wars will likely continue to weigh on Bitcoin and the crypto industry. However, Trump’s impact on crypto extends beyond tariffs, and with a mandate to make the US crypto sector lead the world, he likely has a long-term plan to build the market back up.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Steve Fly is a countercultural musician, writer, and gadfly who is incorporating AI into his creative process. He mentioned taking some inspiration from our publisher Ben Goertzel and his ‘Hyperseed-1’ theories, so I decided to interview him about how he thinks about it and what he’s doing.
Steve James Pratt, aka Fly Agaric 23, aka Steve Fly has worked and lived as a DJ/turntablist, drummer and poet. He has worked with Indian composer Surinder Sandhu, jam band supergroup Garaj Mahal, and The Gregory James Band (as turntablist). He toured England, the Netherlands, and Germany as a drummer and all-round helper to John Sinclair (RIP), the legendary poet and co-creator of the revolutionary early punk/metal band The MC5. You can find Steve’s full musical biography here. Steve was an associate producer of the 2003 film Maybe Logic, and musical director for the Cosmic Trigger play (2014), performed live at the 26th Annual James Joyce Symposium in Antwerp (2017). He recently performed at the Brainwash Philosophy Conference in Amsterdam (2024). He also makes weird sounds with his mouth. For some background and context, see my (badly) transcribed-by-ear interview with Saul Paul Sirag about some physics principles and structural foundations behind RAW’s Schrödinger’s Cat Trilogy.
RU Sirius: How do you view your creative project (in the broadest lifelong sense) as a process that benefits from integrating current AI systems?
Steve Fly: The latest iteration of my quarter-century (and counting) of research into Robert Anton Wilson’s Tale of the Tribe is a collaboration using some AI tools. Tale of The Tribe is a mountain range whose size and scope requires training to traverse, hill-climbing toward coherence. So far I’ve produced over 65 stanzas with corresponding audio. The first iteration is structured on 60 stanzas to represent the 60 vertices of the Buckminsterfullerene. This was prompted by a line in Ezra Pound’s Cantos: “buckie has gone in for structure.” The structure of the poem/album is a tribute to Buckminster Fuller, whom RAW admired and studied with, and who snugly sits as one of the 13 primary inspirations in the way RAW conceived/perceived the universe.
These first 60 stanzas are a proof of concept, to be built on in the next iteration. The goal is for each stanza to also function as a concept for a new core ontology, the totality of the 60 stanzas. As Ben put it: “an overlapping yet somewhat diverse set of perspectives on the core ontological concept”.
RU: Tell Mindplexers a little bit about this Tale of the Tribe. How does your interest in our man Ben’s Hyperseed-1 intersect with your project?
SF: As I understand it, ‘the tale of the tribe’ was originally a phrase to describe a modern epic in verse, a poem including history. The late Robert Anton Wilson (secret hero of these musings) outlined his interpretation of the tale of the tribe as a lineage of thinkers, or innovators of one kind or another, who RAW felt helped to describe his way of perceiving/conceiving the world. Sadly RAW died before completing the book project, leaving a riddle to ponder: what do Giordano Bruno, Giambattista Vico, Nietzsche, Ernest Fenollosa, Yeats, Pound, Joyce, Alfred Korzybski, Bucky Fuller, Claude Shannon/Norbert Wiener, Marshall McLuhan, Orson Welles and INTERNET all have in common? You’ll find out!
Jumping forward a quarter-century—after pondering this question and tackling some of the coursework, and now lit up by Hyperseed-1—I revisited the tale of the tribe with new tools and perspectives.
My study into RAW’s tale of the tribe (a new global epic poem including history for all humanity, the planet and sentient beings), is adjacent to emerging outlines for core ontology knowledge bases beneficial to all humanity.
RU: You’ve done some work with the counterculture hero John Sinclair (RIP). What would you say are the current and future countercultural potentialities of AI to become a tool of liberation and anti-authoritarianism?
SF: My suspicions concern the benchmarks set by human countercultural ‘goodies,’ such as Leary, Wilson, Sinclair, Burroughs, Ken Kesey, and our beloved musical warriors like Jimi Hendrix, John Lennon, Bob Dylan, and hundreds more.
For AI or AGI (or, better yet, BGI: Beneficial General Intelligence) to become a tool for liberation and anti-authoritarianism, the training sets and models must better reflect the alternative underground methods and practices that, for me, primarily utilize and innovate art and crafts, driven not by greed or super-ego, not for obscene profit but against the odds and by the will to communicate, share and express openly. Innovation. Outsmart ’em.
Sinclair, for example, took a vow of poverty, as did Allen Ginsberg, in sync with the Buddhist-Taoist conception but inspired by the American beats, not directly by ancient mystics. The practical—or impractical—results of such a bold decision are tenfold: dependence on occasional stipends, gifts, good friends, and community to help support such an unfashionable mantra as, “I don’t give a fuck about the money, just get on with the work, give me challenges”. Perhaps some of those challenges are AI potentialities?
I sense an adjacency with the tendency of decentralized, rotational and open source systems to prove efficient, less impactful on the environment and less damaging to the human psyche and society. This is my hunch. AGI to ASI (Artificial Super-Intelligence) looks impossible without BGI (Beneficial General Intelligence). And to understand the benefits, in the human universe, look to our fabulous innovators, scientists, artists and thinkers who demonstrably and most obviously worked for good – for the betterment of humanity all around the world – with incredible feats of bravery and brilliance and discipline and daring without violence. Livingry not killingry.
This latest wave of so-called ‘dark gothic accelerationist MAGA’ shows little alignment with the vision of BGI, clearly and explicitly defined by SingularityNET and Mindplex, standing as an alternative to Big Tech and killingry, an open invitation to benefit all. Maybe the smartest and least violent strategy concerns superabundance. Sufficient food, clothing, shelter and leisure time so everybody gets along without squabbling. Throw in some universal basic income – why not? If the resources of planet earth were fairly and equally distributed, we would all be millionaires. The (six) philosophical commitments/hypotheses underlying Hyperseed-1 in Ben’s post are to my mind correlated with RAW’s general philosophical outlook, and so to The Tale of the Tribe, in some sense.
Credit: Tesfu Assefa
RU: Tell us again what you’re doing, and how are you going to help enact the idealistic version of AI, ASI, BGI and all of it? And feel free to give us a SingularityNET pitch for support.
SF: Alchemai 60/60 – Sixty Visions of Beneficial General Intelligence is a multimedia project: a contemporary ‘tale of the tribe’, a poetically-charged core ontology for the age of BGI. It pairs 60 original stanzas with 60 musical pieces, co-created using a range of LLMs and generative audio tools. Alchemai includes comparative analysis of the AI tools, evaluating their potential to efficiently communicate solutions for the social and ecological poly-crisis, guided by SingularityNET’s definition of Beneficial General Intelligence (BGI). The final work will be presented as a website and album. Tanmoy is my proof-of-concept.
I’m also fortunate to guest on Rotifer, the new album by Garaj Mahal, a band who (much like Joyce, and RAW) help to raise the bar of what art and human intelligence can accomplish. Let the fingers do the talking. Garaj Mahal are worthy of consideration by Mindplex and SingularityNET as leading-edge musicians unifying a wide array of traditions into their unique virtuoso performance by example.
From my limited analysis of the state of the world right now, SingularityNET are at the vanguard of technology and scientific innovation, bringing some heart to the emergent BGI and ASI games. They have a road map, and human ingenuity to give any of the Big Tech bros a run for their money, or a triple-jump for their tokens. Efficiency, alignment, timing, integrity and synchronicity are paramount.
As a part of the rag-tag nomadic tribe of independent artists and thinkers who have spent decades immersed in the open community of bands, festivals, and happenings, I heed the call.
I’d ultimately like to see SingularityNET infused with more analogue arts: live music, new poetry challenges and multimedia happenings, locally, all across the planet – humans exhibiting what AI can’t, setting new benchmarks for creativity and ingenuity. These real-life events can be rapid prototyping sessions, gathering virtuoso artists to celebrate human beings and beingness, while sending a five-finger nose-wave to those conglomerates incapable of pulling off such organic gathering of self-organizing individuals. Get your dream team fit, it’s showtime.
Furthermore, the kind of ‘mutualist syndicalist’ flavour of the RAW community, composed of such wide-ranging and generally good-hearted characters, can also be defined as a gathering of self-owning ones, a temporary affinity group, for the most part, currently concerned with propiracy or Operation Mindfix. Entities gathering together to cause benefit to others, the planet, all life on earth. You know who you are. Imagine what we are capable of if we can synchronize our efforts, temporarily on occasion, to really go for it and bring about the RAW enlightenment (for want of a better term, but, it has light in it which is what we need right now). Count me in. Let’s fix and build and benefit. And to those who may have fallen for the greed and the ogre, and who secretly admire our man and have taken what he taught and weaponized it for obscene profits and general enslavement, the hidden light will be revealed. Let’s do justice to Synergetics.
As my collaborative AI research project Tanmoy demonstrates, there are tantalizing avenues for research at the intersection of geometry (Buckminster Fuller’s geometry in particular) and crystallography, origami and tactile physical modelling (hyperbolic crochet), hologrammic prose, music, and the ideogrammic method.
The goal of this research is meeting the challenge posed by RAW: ‘What do they all have in common with the internet?’, plus the parameters and requirements laid out by Ben in Introducing Hyperseed-1: A Fairly Minimal “Core Ontology” in particular the six aspects listed under ‘Philosophical Perspective Underlying Hyperseed-1’.
As Ben humbly points out, these are experimental concepts, a “semi-formal” or “initial abstract knowledge and perspective guide”. I must try to echo this sentiment when contemplating such heady projects as AGI, BGI, ASI. I personally find Ben’s communication style similar to RAW’s in its operationalist language and cautionary approach to oversimplified conclusions, as all good scientists exhibit. Sadly, this all too often gets drowned out by unnecessary hype, marketing, sensationalism and absolutism.
At the end of February the cryptocurrency world heard news that would cement itself in crypto infamy and send prices into freefall. Notable cryptocurrency exchange Bybit suffered a catastrophic security breach, losing a staggering $1.46 billion in Ethereum and related tokens.
This was more than just another hack among the hundreds since 2014, including the Mt. Gox and Binance hacks. This was in fact the LARGEST exchange hack in cryptocurrency history, which takes some doing. And to make it worse, it was most likely carried out by a North Korean hacking group, according to FBI investigations – terrible news for the entire sector, given the far-reaching repercussions around money-laundering regulations and sanctions.
Here’s what happened, and how.
The Anatomy of a Billion-Dollar Heist
On February 21, 2025, security analyst ZachXBT raised the alarm after spotting unusual transactions flowing from Bybit’s multisignature wallet. The hackers moved through Bybit’s security systems like water flowing through a cracked dam, with precision that suggested months of planning.
Ben Zhou, Bybit’s co-founder and CEO, confirmed the breach shortly after detection. According to Zhou, it first appeared to be a routine transfer from the exchange’s Ethereum cold wallet to a hot wallet—a standard procedure to keep the hot and cold wallet balances within certain thresholds—but turned out to be a sophisticated attack. The transaction looked legitimate on the surface, but contained malicious code that altered the smart contract logic.
Bybit Hack Timeline
“We know the cause is definitely around the Safe cold wallet,” Zhou stated. “Whether it’s a problem with our laptops or on Safe’s side, we don’t know.” Safe, a decentralized custody protocol offering smart contract wallets for managing digital assets, temporarily suspended its smart wallet functionalities following the incident.
North Korean Fingerprints (Again)
Blockchain analysis firm Chainalysis provided a detailed breakdown of the attack, tracing it to North Korea’s notorious Lazarus Group—a state-sponsored hacking collective active since at least 2009 and reportedly connected to the DPRK’s military intelligence.
The attack followed what Chainalysis described as a “common playbook” used by North Korean hackers. First, a phishing campaign targeting Bybit’s cold wallet signers gave attackers access to the exchange’s user interface.
This access allowed them to replace a multi-signature wallet implementation contract with a malicious version, enabling unauthorized fund transfers.
After gaining control, the hackers intercepted a routine transfer, redirecting approximately 401,000 ETH ($1.46 billion) to their addresses. The stolen funds were then split across multiple intermediary wallets to obscure the transaction trail—a standard tactic in the digital heist playbook.
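The implementation-swap step described above can be illustrated with a deliberately simplified sketch. This is toy Python, not Safe’s or Bybit’s actual contract code, and all class names, addresses, and amounts are invented for illustration; the point is that signers approve what the interface shows them, while the swapped-in implementation decides what the transaction actually does.

```python
# Toy illustration (NOT Safe's or Bybit's actual contract code; names and
# addresses are invented): a proxy-style multisig wallet delegates transfers
# to a pluggable "implementation". If an attacker can swap that implementation,
# signatures approving a routine transfer end up authorizing attacker logic.

class HonestImplementation:
    def execute(self, wallet, to, amount):
        # Does what the signers believe they approved.
        wallet.balances[to] = wallet.balances.get(to, 0) + amount
        wallet.treasury -= amount

class MaliciousImplementation:
    def __init__(self, attacker):
        self.attacker = attacker

    def execute(self, wallet, to, amount):
        # Ignores the intended recipient and drains the whole treasury.
        wallet.balances[self.attacker] = wallet.treasury
        wallet.treasury = 0

class ProxyMultisigWallet:
    def __init__(self, treasury, implementation):
        self.treasury = treasury
        self.balances = {}
        self.implementation = implementation

    def transfer(self, to, amount, approvals, threshold=2):
        # Signers approve what the interface shows them, not what the
        # implementation actually does with the call.
        if len(approvals) >= threshold:
            self.implementation.execute(self, to, amount)

wallet = ProxyMultisigWallet(treasury=401_000, implementation=HonestImplementation())
wallet.implementation = MaliciousImplementation("0xattacker")  # the malicious swap
wallet.transfer(to="hot_wallet", amount=30_000, approvals=["signer1", "signer2"])
print(wallet.treasury)                        # 0: the treasury is drained
print(wallet.balances.get("hot_wallet", 0))   # 0: the intended recipient got nothing
```

The sketch compresses what on-chain requires a delegatecall proxy and forged signer approvals, but the failure mode is the same: the multisig threshold was satisfied, yet the logic behind it had been replaced.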
Market Meltdown
The effects of the breach were felt immediately across cryptocurrency markets. Bitcoin plunged 7% to $87k within 24 hours – its lowest level since November 2024 – and fell below $80,000 in the aftermath. Other major cryptocurrencies followed suit, with Ethereum suffering particularly heavy losses given its direct involvement in the hack.
Bybit customers rushed to withdraw funds, creating unprecedented pressure on the exchange. Within 48 hours, Bybit processed $6.1 billion in withdrawal requests—nearly 100 times normal volume. The total value of customer assets held by the exchange dropped precipitously, from $16.9 billion to $10.8 billion.
Bybit’s Response
Following what is now standard industry procedure, in line with the ‘Funds are SAFU’ playbook Binance created after its own hack a few years back, Bybit weathered the storm quite well. Within three days, Zhou announced that the exchange had “fully closed the ETH gap” and restored a 1:1 reserve on client assets—an extraordinary feat given the scale of the theft.
According to blockchain analytics firm Lookonchain, Bybit received 446,870 Ether worth approximately $1.23 billion (88% of the stolen amount) through loans, whale deposits, and purchases.
The exchange bought 157,660 Ether ($437.8 million) from crypto investment firms Galaxy Digital, FalconX, and Wintermute through OTC transactions, and another $304 million of Ether from centralized and decentralized exchanges.
The Lazarus Bounty
Perhaps most notably, Bybit launched LazarusBounty.com—allocating $140 million to reward those who successfully track and freeze the stolen funds. The platform integrates security data from leading blockchain analytics firms like Chainalysis and Arkham, employs expert investigators, and offers a merit-based reward system for people who help track and freeze stolen assets.
“Join us on war against Lazarus,” Zhou declared on Twitter, announcing the “industry’s first bounty site that shows aggregated full transparency on the sanctioned Lazarus money laundering activities.”
The collaborative security efforts have already yielded results, with Chainalysis reporting that approximately $40 million of the stolen funds have been frozen. However, most of the loot remains dormant across multiple addresses—a strategy used by North Korean hackers to wait out the heightened scrutiny that follows such high-profile breaches.
Credit: Tesfu Assefa
Broader Implications for Crypto Security
The Bybit breach again exposes the fact that despite the industry’s rhetoric about decentralization, many exchanges are still centralized points of failure with substantial honeypots for attackers, no matter how sophisticated their security is.
It remains to be seen whether this hack leads to meaningful changes in how exchanges secure customer funds. Two things are certain: in the world of cryptocurrency, security can never be taken for granted, and the largest players make the most attractive targets.
Safe’s temporary shutdown of smart wallet functionalities after the hack demonstrates the interconnected nature of crypto infrastructure—when one major component gets compromised, the effects cascade throughout the ecosystem.
This interdependence raises serious questions about how exchanges secure user assets, and what precautions they take against sophisticated state-sponsored attacks. And how censorship-resistant are we in crypto really when digital assets can just be frozen?
Also, it strengthens the case for investors to either self-custody their assets or (more centralization!) use a crypto ETF like BlackRock’s, which is insured up to a point.
As Chainalysis noted, “Exchanges will need to articulate to their regulators and users how they ensure that user funds are protected.”
Lastly, after years in the wilderness, crypto is finally popular with regulators and mainstream investors alike. Anyone who’s been in crypto for over two years will know the hard gains and sacrifices that have been made to get here, from Mt. Gox to Bitfinex to FTX’s meltdowns.
While Bitcoin in its early days was wrongly accused of being solely a tool for criminals to move their money around, hacks are undeniably bad for everyone. It’s clear-cut crime, and when it escalates to state-sponsored crime from a blacklisted nation, the entire space is skating on thin ice in the long term, no matter what Donald Trump and Larry Fink say about it during a bull run.
For average crypto users, the lesson is clear: the security of centralized exchanges, no matter how robust, is never infallible. Those with significant holdings would be wise to remember the crypto mantra: Not your keys, not your coins.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
In his recent speech to the U.S. Congress, President Donald Trump said: “we are going to lead humanity into space and plant the American flag on the planet Mars.”
Trump said similar words in his inauguration speech in January (transcript). He said: “we will pursue our manifest destiny into the stars, launching American astronauts to plant the Stars and Stripes on the planet Mars.”
In a Fox News interview after the speech to the U.S. Congress, Trump backpedaled a bit, SpaceNews reports. “There’s a lot of interest in going to Mars,” he said. But also: “Is it number one on my hit list? No. It’s not really. But it is something that would be, you know, it would be a great achievement. It would be a great thing if we could do it.”
However, he noted that his remarks on Mars in the Congressional speech got some of the biggest applause of the night. “I was shocked,” he said. “There seems to be a big interest in it.”
Elon Musk, who prioritizes Mars, is currently one of Trump’s most trusted advisers. This might change at any time, but while it lasts Musk will likely influence Trump on U.S. space priorities.
The push to prioritize Mars exploration has sparked pushback from experts and space enthusiasts who argue for returning to the Moon first. What is the best path forward in space?
The best path forward in space
I’ve recently considered (part 1, part 2) the question: Should we still want to send human astronauts to colonize space? Or should we want to leave space expansion to artificial intelligence (AI)?
Short summary: I think AI will colonize the galaxy in the long term. And I see this as a good thing.
Hopefully, humans will be able to follow the AIs as uploads. Frank Tipler said it best: if you can’t beat ’em, join ’em: “Any human who wants to join the AIs in their expansion can become a human upload,” he said.
However, I think the perception of a difference between AI/human upload hybrids and pure AIs will eventually fade away.
But I still think that in the rest of this century we must establish humanity as a multi-planetary biological species in the solar system. Doing it will boost the human spirit – a boost of which we are very much in need – and accelerate progress on all fronts, including the AI technology front.
So what is the best path forward at this moment? Should we go directly to Mars and skip the Moon? Or should we return to the Moon and build a permanent presence there first, and then move on to Mars? There are valid arguments for both options.
Let’s aim high, skip the Moon re-run, and plant our flag on Mars
Mars, though farther than the Moon, holds real promise. It’s a planet with water ice, carbon dioxide, and a thin atmosphere. These give us resources to live, grow food, and make fuel. The Moon lacks these essentials.
Going straight to Mars pushes us to solve big problems now. Building bases, surviving radiation, and managing longer trips stretch our skills. The Moon’s a smaller challenge that we’ve already cracked. Mars forces us to innovate faster: re-usable rockets, better life support, and self-sustaining habitats. Skipping the Moon avoids wasting time on old ground. We can test tech in Earth orbit or simulations instead.
The Moon feels like a re-run, less exciting for young minds or investors. Mars promises a new home, not just a dusty neighbor. It’s scientifically richer too. Mars might hide signs of past life, unlike the barren Moon. Unlocking its secrets could rewrite our story.
Travel time is a drawback – six months to Mars versus three days to the Moon. But modern ships can handle it. Crews can train for isolation here first. The Moon’s no real shortcut anyway; it’s not a stepping stone, just a detour. Mars needs more up-front effort but its payoff dwarfs the Moon’s. We’d leap toward a multi-planet future, not crawl back to a familiar rock.
Credit: Tesfu Assefa
Let’s build on the Moon, get it right, and then move on to Mars
Now for the other side of the argument. The Moon makes more sense as our next step than jumping to Mars. It’s close – three days away, not six months. That proximity lets us test tech and fix problems fast. Mars is a huge leap, risky and pricey. The Moon offers a safer, cheaper proving ground. We’ve been there before, sure, but not to stay. Building a permanent base now teaches us how to live off Earth.
A lunar outpost sets us up for Mars later. We can perfect habitats, radiation shields, and resource use. The Moon has water ice in craters – great for drinking, oxygen, and fuel. Mars has resources too, but getting there’s tougher. Some also point to lunar Helium-3, a rare isotope that is more abundant on the Moon than on Earth. It could power future fusion reactors, making the Moon a key energy resource.
Practicing on the Moon builds skills we need for the red planet. It’s like training wheels before a bike race. We’d master solar power, 3D printing, and recycling in low gravity first.
The Moon’s a lifeline Mars can’t match. If something goes wrong, Earth’s right next door. Mars missions face delays – help’s months away. A lunar base also boosts science. We’d study its geology, test new tools, and prep for deeper space. Mars science is exciting, but the Moon’s mysteries still hold value. Moreover, it’s a hub: we could launch Mars ships from there. This would save a lot of fuel: launching against the Moon’s gravitational pull requires much less energy than launching from Earth. The ships could be built in situ from titanium (another resource more abundant there than here), aluminium, and iron.
Going straight to Mars skips critical steps. A crash there could set us back decades. The Moon’s less glamorous, but it’s practical. It inspires, too – showing we can settle space, not just visit. Private companies already eye it for mining and tourism. That cash flow funds Martian dreams. A permanent lunar presence proves we are serious about space. Mars is the prize, but rushing risks failure. Patience now means success later.
So what is the best path forward in space again?
All things considered, I still think that Mars can wait. We should return to the Moon and build a permanent presence on the Moon first, and then move on to Mars.
The author in a video produced by the Moon Village Association (Credit: Moon Village Association).
To me, boosting the human spirit is the main reason to push forward with human space expansion in the solar system this century, instead of waiting for future human-like (or superhuman) AIs. This applies to both the Moon and Mars.
I’m old enough to be an Apollo orphan, and I want to live again the Apollo adventure that I found so inspiring when I was a kid. But I understand that different people of different generations are inspired by different things.
Perhaps Mars can inspire the young more than the Moon ever could. A red planet colony could fire up the imagination and ambition of the young, pushing them to work hard and achieve great things not only out there in space but also down here on this planet. Let me know your opinion in the comments.
While some in the crypto industry cheered this development as a victory, and President Trump emphasized that it fulfilled another election promise, it was a nothing-burger for traders who were expecting fireworks and massive gains.
Last Friday though, Trump and crypto figureheads like crypto czar David Sacks, Bitcoin whale Michael Saylor (MicroStrategy), Brian Armstrong (Coinbase), Brad Garlinghouse (Ripple) and others came together to discuss the next steps to build a government stockpile of cryptocurrencies including Bitcoin, Ethereum and Solana.
The summit provided a few positive takeaways for the USA’s long term HODLing of crypto – Bitcoin in particular – but fell short of the groundbreaking announcement that the crypto masses demanded after having a bad few weeks. Markets slid down again soon afterwards, as talk of trade wars escalated.
Let’s take a look at the brief history of Trump’s crypto reserves and what they entail.
The Social Media Post that Started it All
President Trump set the ball rolling on March 2 when he took to Truth Social, a Twitter competitor owned by Trump Media and Technology Group, to announce that a U.S. crypto reserve will consist of XRP, Solana (SOL), Cardano (ADA), Ethereum (ETH), and Bitcoin. Sadly for traders, the announcement failed to kickstart an altcoin season.
Excited to learn more. Still forming an opinion on asset allocation, but my current thinking is:
1. Just Bitcoin would probably be the best option – simplest, and clear story as successor to gold
2. If folks wanted more variety, you could do a market cap weighted index of crypto… https://t.co/jv8Gcn8N2S
Others noted the strategic reserve gives the industry positive legitimacy. As this development unfolds, it’s time to understand what a strategic reserve is and why it matters.
What is a Strategic Reserve?
The concept of a strategic reserve in the USA dates back to the Gold Reserve Act of 1933. Strategic reserves hold critical resources to buffer against economic shocks such as supply disruptions or price volatility.
These reserves are carefully guarded and can only be released when the government needs them.
Throughout the 20th century, the USA continued creating other strategic reserves to protect against shortages, such as the Strategic Petroleum Reserve in 1975 after the Arab oil embargo. These reserves are intended to stabilize essential assets for national defense or economic security. However, neutrals are wondering what makes BTC essential enough to warrant such a move.
Executive Order on Bitcoin Strategic Reserve
The executive order signed by President Trump on March 6 creates a Strategic Bitcoin Reserve. The naming suggests that Bitcoin is the only cryptocurrency that will be treated as a reserve asset.
Here are the key components of the executive order:
U.S. Digital Asset Stockpile
There is a difference between a reserve and a stockpile. A reserve is one-way traffic: assets can be deposited but not taken out. A stockpile, on the other hand, is flexible, allowing active buying and selling.
The executive order creates a Bitcoin reserve and a digital asset stockpile that includes digital assets other than Bitcoin owned by the Department of Treasury – seizures from criminal activities. The government will not acquire additional assets beyond those seized by the state, and the Secretary of Treasury may determine the strategies for maintaining the stockpile.
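The reserve/stockpile distinction can be sketched in a few lines of toy Python. This is purely illustrative of the definitions above, not of the executive order’s actual mechanics, and the class names and figures are invented for the example: a reserve only ever accumulates, while a stockpile exposes both sides of the ledger.

```python
# Toy sketch of the reserve vs. stockpile distinction (illustrative only,
# not the executive order's actual mechanics): a reserve only accumulates,
# while a stockpile can be actively managed in both directions.

class BitcoinReserve:
    def __init__(self):
        self.btc = 0

    def deposit(self, amount):
        self.btc += amount
    # Deliberately no sell/withdraw method: one-way traffic.

class DigitalAssetStockpile:
    def __init__(self):
        self.holdings = {}

    def deposit(self, asset, amount):
        self.holdings[asset] = self.holdings.get(asset, 0) + amount

    def sell(self, asset, amount):
        # Stockpiled assets may be bought and sold at the custodian's discretion.
        if self.holdings.get(asset, 0) < amount:
            raise ValueError("insufficient holdings")
        self.holdings[asset] -= amount

reserve = BitcoinReserve()
reserve.deposit(198_100)          # e.g. seized BTC moved into the reserve
stockpile = DigitalAssetStockpile()
stockpile.deposit("ETH", 50)
stockpile.sell("ETH", 20)
print(reserve.btc)                # 198100
print(stockpile.holdings["ETH"])  # 30
```

The design choice is the whole point: the reserve class simply has no sell method, mirroring the order’s treatment of reserve Bitcoin as a store of value that is never sold.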
Strategic Bitcoin Reserve
The executive order says that only Bitcoin will be treated as a reserve asset. At this stage, it appears that the government will not be actively buying or selling Bitcoin but rather capitalizing the reserve with Bitcoin owned by the Department of Treasury. These bitcoins were acquired through criminal or civil asset forfeiture proceedings.
This is a big win for ‘Bitcoin Maxis’ who believe that BTC is the ‘one true cryptocurrency’ and the rest is just noise with a sell-by date.
The United States will not sell Bitcoin deposited into the Reserve, as it will be regarded as a store of value. Critics argued that Bitcoin is too volatile to be included in the Strategic Reserve.
The Secretaries of Treasury and Commerce are authorized to develop budget-neutral strategies for acquiring additional Bitcoin at no extra cost to taxpayers.
What Needs to be Done First?
There could be a long way to go before the Bitcoin Reserve and Digital Asset Stockpile executive order goes into full effect. U.S. agencies must first fully account for their digital asset holdings to the Secretary of the Treasury and the President’s Working Group on Digital Asset Markets. The U.S. crypto czar David Sacks says the holdings have not yet been audited. Arkham Intelligence says the United States holds 198.1K Bitcoin, worth over $17 billion.
Secondly, the Treasury and Commerce Departments will need to find strategies to acquire additional BTC without spending taxpayers’ money.
There is another hurdle to clear: Congress may need to pass a bill to appropriate funds for the Bitcoin Reserve, which means the Trump administration will need Congress on its side to get it over the line.
Potential Benefits
Whatever happens in the future, the Bitcoin Strategic Reserve is a major boost for the industry. The crypto industry is moving from the fringes of the financial world to being seen as a store of value by policymakers.
Bitcoin has long been called ‘digital gold’, and its proponents claim it can hedge against inflation and economic instability. Importantly, it could become a long-term investment in the Strategic Reserve.
Credit: Tesfu Assefa
Wrapping Up
It’s still uncertain if the Bitcoin Strategic Reserve and altcoin stockpile will fully materialize, but the wheels are in motion. President Trump’s early teaser on Truth Social hinted at Bitcoin’s strong support from industry leaders, with other assets playing a supporting role.
These are early days, but the legitimacy of the industry is strengthening as crypto regulation becomes more favorable. And when you dig deeper, one thing becomes clear: Bitcoin is here to stay. How much it will be worth is a whole other question.