OK Computer… let’s imagine you’re a developer who’s created a smart contract that’s living on a blockchain like Ethereum or Cardano. It’s really good at executing pre-programmed instructions on-chain, but it’s sandboxed and can only react to conditions that occur on-chain, and has no idea what’s happening in the outside world.
So you need real-world data to make its way on-chain and into Web3 in real-time: that’s where crypto oracles come in – they are your eyes and ears to the real world, feeding you the information you need to make decisions and take actions.
Crypto oracles, also known as blockchain oracles, are a vital component in the blockchain machinery. They solve a critical problem: how do you get real-world data into a closed blockchain system? Without oracles, smart contracts would be like computers without an internet connection – functional, but severely limited in what they can do.
What Exactly are Blockchain Oracles?
At their core, blockchain oracles are simply data feeds. They act as bridges between blockchains and the outside world, allowing smart contracts to access and respond to real-world information. But oracles aren’t just simple data pipelines – they’re responsible for querying, verifying, and authenticating external data before delivering it to smart contracts.
Think of oracles as trusted messengers. When a smart contract needs to know something about the outside world – like the current price of Bitcoin, the winner of an election, or whether it rained in New York today – it sends out a request. The oracle then goes out, gathers that information from reliable sources, makes sure it’s accurate, and brings it back to the smart contract.
This might sound simple, but it’s a crucial function and very difficult to get right. Blockchains are designed to be closed systems for security reasons. This isolation is great for maintaining the integrity of the blockchain, but it also means that smart contracts can’t naturally interact with anything outside their network. Oracles solve this problem, allowing smart contracts to respond to real-world events and conditions.
The importance of oracles becomes clear when you consider the potential applications. With access to real-world data, smart contracts can:
Execute trades based on market conditions
Release insurance payouts when certain events occur
Adjust supply chain operations based on real-time information
Settle bets on real-world outcomes
By providing this vital link to external data, oracles dramatically expand what’s possible with blockchain technology. They’re a key component in creating what’s often called the ‘verifiable web’ – a system where users can understand exactly what’s happening within an application and maintain control over their assets, all while interacting with real-world data and events.
How Do Blockchain Oracles Work?
Let’s break down the process of how an oracle typically operates:
Data Request: A smart contract says, “Hey, I need some information!”
Oracle Activation: The oracle perks up its ears and says, “I’m on it!”
Data Collection: The oracle goes out into the world (or the internet) to find the requested information.
Data Verification: The oracle checks and double-checks that the information is correct.
Data Transmission: The oracle sends the verified information back to the smart contract.
Smart Contract Execution: The smart contract says, “Thanks! Now I can do my job,” and executes based on the received data.
Some advanced oracle systems, like Chainlink, use a more complex process involving multiple sub-contracts to ensure data reliability and security. It’s like having a team of fact-checkers instead of relying on a single source.
An Oracle For Every Occasion
Just as there are many types of information in the world, there are various types of blockchain oracles:
Software Oracles: These pull data from online sources like websites, databases, and servers. They’re the go-to for things like price feeds, exchange rates, and digital information.
Hardware Oracles: These are the real-world explorers. They interface with the physical world, collecting data from things like sensors, barcode scanners, or other IoT devices. Imagine a smart contract that needs to know the temperature in a shipment of vaccines – that’s where a hardware oracle would come in handy.
Human Oracles: Sometimes, you just need a human touch. These oracles rely on human judges to provide information. They’re useful for things that require human judgment or interpretation.
Inbound and Outbound Oracles: Inbound oracles are like importers, bringing external data onto the blockchain. Outbound oracles are exporters, capable of sending information from the blockchain to the outside world.
Compute-Enabled Oracles: These are the brainiacs of the oracle world. They perform complex computations off-chain and deliver the results to smart contracts, enabling more advanced functionalities.
Cross-Chain Oracles: These are the diplomats, facilitating communication and asset transfers between different blockchain networks.
Overcoming the Oracle’s Dilemma
While oracles are incredibly useful, they’re not without their challenges. The main issue is known as ‘the oracle problem’, (like crypto’s Byzantine General Problem) and it boils down to this: How can we trust that the data provided by oracles is accurate and hasn’t been tampered with?
This is a big deal because oracles are essentially reintroducing an element of trust into a system designed to be trustless and without the need for an intermediary. If an oracle is compromised, it could feed false information to a smart contract, potentially leading to significant losses or other issues.
To tackle this problem, the blockchain community has come up with several solutions:
Decentralized Oracle Networks: Instead of relying on a single oracle, these networks use multiple independent nodes to fetch and verify data. It’s like getting a second (and third, and fourth) opinion.
Reputation Systems: These keep track of oracles’ past performance, helping users choose reliable data providers. It’s like a Yelp for oracles.
Crypto-Economic Incentives: These systems reward honest behavior and penalize dishonesty within the oracle network, creating a financial incentive for oracles to stay truthful.
Hardware-Based Security: Some oracles use secure hardware components to protect the integrity of data processing, adding an extra layer of security.
Real-World Oracle Use Cases
The applications of blockchain oracles are vast and growing. Here are some areas where oracles are making a big impact:
Decentralized Finance (DeFi): Oracles are the lifeblood of DeFi, providing price feeds and market data. Without oracles, decentralized exchanges, lending platforms, and synthetic asset protocols would be flying blind.
Insurance: Smart contracts can use oracle data to automatically process claims. Imagine an insurance policy that pays out automatically if your flight is delayed – that’s the power of oracles in insurance.
Gaming and NFTs: Oracles enable the creation of dynamic NFTs that can change based on real-world events. They also provide verifiable randomness for blockchain-based games, ensuring fair play.
Supply Chain Management: Oracles can feed IoT sensor data into blockchain systems, enabling real-time tracking and verification of goods. This can help in everything from ensuring the food is fresh to verifying luxury handbags are authentic.
Prediction Markets: Crypto betting platforms rely on oracles to determine the outcomes of events and settle bets accordingly. Whether it’s predicting election outcomes or sports results, oracles play a crucial role (and hacking or duping the oracle could make a cybercriminal millions!)
Cross-Chain Interoperability: Oracles facilitate communication and asset transfers between different blockchain networks, helping to create a more interconnected blockchain ecosystem.
The Oracle All-Stars: Leading Projects
Let’s look at the projects that lead the pack among blockchain oracle solutions:
Chainlink (LINK): The heavyweight champion of the oracle world, Chainlink is widely adopted and provides data feeds for a vast array of blockchain applications.
Pyth Network (PYTH): This Solana-native newcomer specializes in high-fidelity, real-time financial market data. It markets itself as a next-gen version of LINK and has been getting a lot of adoption
UMA (UMA): UMA offers oracle solutions for DeFi products and synthetic assets. They’ve introduced the concept of ‘optimistic oracles’, which assume data are correct unless challenged.
API3 (API3): This project focuses on direct API integration with smart contracts, aiming to cut out the middleman in the data delivery process.
Band Protocol (BAND): A cross-chain data oracle platform, BAND has been around as long as Chainlink has. It enables smart contracts to interact with real-world data across different blockchains.
As blockchain technology continues to evolve, oracles will become even more critical. Here’s what we might see in the future:
More sophisticated decentralized oracle networks with enhanced security and reliability. Some new chains like Sui and Aptos are already moving away from the industry-leading oracles and instead are building their own chain-specific oracles.
Increased integration with IoT devices and real-world sensors, bringing more of the physical world onto the blockchain.
Advanced cross-chain communication protocols enabled by oracles, creating a more interconnected blockchain ecosystem.
Blockchains compute on pure logic and mathematics, existing in the Platonic world of forms. Oracles bridge the gap to the big bad world of real data. They enable smart contracts to break free from their blockchain boundaries and interact with the world in many meaningful ways.
The oracle problem isn’t going away. As more money flows through systems that depend on oracles, more hackers will put more effort into duping the oracle to pay out on a losing horse. Ongoing innovation in oracle technology will be needed to address these issues. As we move towards a more interconnected and decentralized future, oracles will play a pivotal role in expanding the capabilities of blockchain systems, and fostering the growth of decentralized applications across various industries.
So the next time you use a DeFi application, play a blockchain game, use a Crypto AI product or interact with any smart contract that seems to magically know what’s happening in the real world, remember to tip your hat to the humble blockchain oracle. They’re the ones making the magic happen behind the scenes.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
A few days ago I watched the epic fifth flight test of Starship, the giant SpaceX rocket that, Elon Musk’s hopes, will one day take human colonists to Mars.
I watched the flight test live via X, including its most critical moment: the catch of the Super Heavy booster. Commentator Katherine Boyle called it “The Fall of the Century” and said that it “restored faith in the American Dream.”
Liv Boeree has captured the last seconds of the return of the Super Heavy booster in this video. The video shows a visible (and audible!) reentry sonic boom.
This was a very ambitious and perhaps risky test. In fact, SpaceX made the final decision to try and catch the booster only minutes before the actual catch. Any number of small technical glitches could have turned success into failure. It’s difficult to escape the impression that the universe loves Elon Musk and wants us to advance rapidly on the road to the planets and the stars.
The stunning achievement of SpaceX has been hailed as a major spaceflight milestone and praised by space experts, public figures, and politicians from all over the world. With one very notable exception though: the President and Vice President (and presidential candidate) of the United States.
Boom over troubled waters
The waters that we can see in the video are calm. But Elon Musk’s giant rocket returned to the troubled waters of a political storm centered on Musk’s cultural and political positions.
“Musk mania in the media this month has reached a level of uncontrollable hysteria,” legal and political commentator Jonathan Turley posted to X.
Turley has written a scathing indictment of the pundits and politicians who are unleashing unhinged attacks on Elon Musk.
Turley reports that California Coastal Commission has rejected a request from the Air Force for additional launches from Vandenberg Air Force Base because they don’t like the political positions of Musk.
The incident was covered by The Los Angeles Times. The California Coastal Commission has an environmental mission, but there isn’t much about the environment in the LA Times story. Rather, the story reports one after another personal attack on Elon Musk by representatives of the Commission, such as:
“We’re dealing with a company, the head of which has aggressively injected himself into the presidential race… Just last week that person was talking about political retribution… Elon Musk is hopping about the country, spewing and tweeting political falsehoods…”
These remarks by several Commission officials, video recorded and ridiculed by Greg Gutfeld, seem to reflect personal animosity based on partisan politics rather than anything even remotely related to the environment.
Turley reports many other rabid attacks on Elon Musk by well-known public figures, some even calling for his arrest and deportation.
Where does all this hatred come from?
Free speech and Donald Trump
Elon Musk has become a major topic of discussion because he’s allowing more free speech on X and he’s vocally supporting Donald Trump.
“I describe Musk as arguably the single most important figure in this generation in defense of free speech,” says Turley. “The left will now kill jobs, cancel national security programs and gut the Constitution in its unrelenting campaign to get Musk. His very existence undermines the power of the anti-free speech movement. In a culture of groupthink, Musk is viewed as a type of free-thought contagion that must be eliminated.”
Musk’s bet on Trump seems a dangerous one: if Trump doesn’t win the upcoming elections in the U.S., it seems inevitable that the government will be very hostile to Musk and all his companies and projects for the next four years.
At this moment, the upcoming elections in the U.S. seem a very tight race.
The polls slightly favor one of the two main candidates, and the betting markets slightly favor the other. Of course, this could change next week, or tomorrow, or anytime before the elections. I don’t rule out the possibility that one of the two main candidates could win in a landslide. Neither do I rule out the possibility that the winner could win with only a very small margin, so small that half of the U.S. population will dispute the results of the elections.
Possible political outcomes
Elon Musk has more than 200M followers on X. Probably half of them are bots and a very large fraction of the rest are casual followers who don’t pay attention to him or wouldn’t be influenced by what he says. But he can likely direct a few tens of thousands of votes to Trump, and this could be a decisive factor in some swing states.
There were indications that the FAA wouldn’t have authorized this flight test before the elections. Some commentators interpreted this an indication that the U.S. current administration didn’t want to risk a success of the flight test that would have given unwanted publicity to Musk before the election.
Then the FAA authorization came all of a sudden. I thought of a little conspiracy theory: perhaps the administration green-lighted the flight test hoping for a catastrophic failure that would, they hoped, reduce the public appeal of Elon Musk. But if so, their move backfired catastrophically! If anything, the spectacular sonic boom of Starship is likely to bring a few more votes to Trump.
Whatever the result of the elections, the U.S. will still be a very divided country afterward.
But spaceflight is an endless source of pride and hope that transcends petty partisan politics. I hope the next U.S. administration, whichever it is, will stay on the path followed by the Trump and Biden administrations.
NASA administrator Bill Nelson praised SpaceX after the booster catch, affirming the plan to go “to the South Pole region of the Moon and then on to Mars.”
This is the right spirit! And I hope the U.S. space program will have bipartisan support. This would show that, even today, honest politicians of different camps can work together and negotiate viable paths to the common good.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
This AI software represents a new leap in fraudsters’ ability to create fake identities, potentially undermining a key security measure in centralized crypto exchanges. This comes after five years of close collaboration between crypto exchanges and the authorities. The exchanges have been forced to crack down on anonymity, after U.S. authorities jailed exchange bosses like Changpeng CZ Zhao (Binance) and Arthur Hayes (BitMex) for failing to follow Anti-Money Laundering (AML) regulations. These AI advances are therefore bad news for parties trying to fight terrorism funding (TF) and blacklisted countries like North Korea’s state-funded Lazarus hacker group.
How ProKYC’s deepfake AI KYC works
ProKYC uses advanced AI to generate two crucial elements:
1. Fake government-issued ID documents (e.g., passports)
2. Deepfake videos matching these fake IDs
In a demonstration, ProKYC created a fake Australian passport and an accompanying deepfake video. This synthetic identity successfully passed the KYC protocols of Bybit, one of the world’s largest cryptocurrency exchanges.
The tool casually offers a comprehensive package including:
Camera emulation
Virtual emulator
Facial animation
Fingerprint generation
Verification photo creation
Priced at $629 for an annual subscription, ProKYC claims to work on major platforms beyond crypto exchanges, including payment processors like Stripe and Revolut.
The role of KYC in Crypto
While most crypto folks hate it and the doxxing that it brings, KYC processes serve several critical functions in the cryptocurrency ecosystem:
Fraud Prevention: Verifying user identities reduces the risk of fraudulent activities.
Anti-Money Laundering (AML): KYC helps track the sources of funds, making it harder for criminals to launder dirty money through crypto platforms.
Regulatory Compliance: Most countries are mandated by the Financial Action Task Force (FATF) to require crypto exchanges to implement KYC measures. It’s part of operating legally. If these countries don’t comply, they can be graylisted or blacklisted, opening them up for sanctions.
Trust Building: Robust KYC processes enhance the credibility of exchanges for both users and regulators. It shows proper due diligence has been done by the exchanges, and users have less fear they will get shut down or abscond with users’ funds.
Typical KYC procedures can be tiresome, but have improved over the years to become more intuitive. Now they usually involve submitting government-issued identification documents and often include facial recognition checks.
ProKYC threatens to render these safeguards obsolete, and throw the current best practices out the window. This could have a catastrophic effect on the crypto sector, with regulators in the USA constantly seemingly seeking any reason to tie it down with heavy legislation such as the Crypto Travel Rule, and the covert Operation Choke Point 2.0.
The Broader Threat Landscape
The emergence of tools like ProKYC has far-reaching implications:
1. New Account Fraud (NAF): With ProKYC, people can create fake but verified accounts, and use them to commit various forms of fraud. These accounts can launder dirty money and be used as ‘mule accounts’ to make transfers around sanctions.
3. Challenge to Security Measures: The sophistication of ProKYC poses a significant challenge to existing security protocols, potentially necessitating the development of new, more robust verification methods.
4. Wider Financial Sector Impact: Tools like ProKYC currently target crypto exchanges, but similar tools could potentially be used to bypass KYC measures in traditional financial institutions.
Detection and Prevention Challenges
Identifying and thwarting fraud attempts using tools like ProKYC presents a complex challenge. Etay Maor, Chief Security Strategist at Cato Networks, points out the delicate balance required: “Creating biometric authentication systems that are super restrictive can result in many false-positive alerts. On the other hand, lax controls can result in fraud.”
Potential detection methods include:
Manual Verification: Human oversight to identify unusually high-quality images or videos.
AI-Powered Analysis: Developing AI systems to detect inconsistencies in facial movements or image quality that might be imperceptible to the human eye.
The effectiveness of these methods remains to be seen, as the AI technology behind deepfakes continues to advance rapidly.
Industry Response: Binance founders’ CZ’s Warning
The threat posed by AI-generated deepfakes has drawn the attention of prominent figures in the cryptocurrency world. CZ Zhao, released last week from prison, has issued a stark warning about the proliferation of AI-generated deepfake videos promoting cryptocurrency scams on social media.
Zhao cautioned on X (formerly Twitter):
There are deepfake videos of me on other social media platforms. Please beware!
Changpeng Zhao, former CEO and co-founder of Binance
CZ’s warning comes at a time when several high-profile individuals, including political figures and business leaders, have been impersonated using deepfake technology to promote fraudulent crypto schemes.
The use of deepfakes in crypto scams typically follows a familiar pattern: scammers create videos of well-known figures seemingly endorsing get-rich-quick crypto schemes, luring unsuspecting victims into transferring funds to specific wallet addresses. The promised rewards, of course, never materialize.
Of course, anyone that’s been using Crypto Twitter or Crypto YouTube over the last 18 months will be well familiar with them by now. Usually they come with some kind of countdown mechanism to pressure viewers into making a FOMO-induced mistake.
Legal and Regulatory Implications
The rise of tools like ProKYC poses significant challenges for regulators and law enforcement agencies. In the USA, identity fraud can carry severe penalties, including up to 15 years imprisonment. However, the borderless nature of cryptocurrency and the anonymity provided by advanced AI tools make enforcement particularly challenging.
Regulators may need to reassess current KYC requirements and work closely with cryptocurrency exchanges to develop more robust verification methods. This could potentially lead to stricter regulations and increased compliance costs for exchanges.
Can AI KYC Crackers Be Stopped?
As AI technology continues to mutate, both crypto exchanges and users must remain vigilant and adaptable. For exchanges, this may mean investing in more sophisticated AI-driven security measures and potentially rethinking traditional KYC processes. Some possible strategies include:
Behavioral Analysis: Monitoring user behavior patterns to detect anomalies that might indicate fraudulent activity
Blockchain Analysis: Leveraging the transparent nature of blockchain technology to track and analyze transaction patterns.
Continuous Authentication: Implementing ongoing verification processes throughout a user’s account lifecycle, rather than relying solely on initial KYC checks.
For users, awareness of these threats and a healthy skepticism towards too-good-to-be-true offers remain crucial. Education about the risks of deepfake scams and how to identify them will be increasingly important.
Collaboration between technology experts, security professionals, and regulators will be essential in developing robust defenses against these mushrooming threats. As the battle between security measures and fraudulent techniques continues, the integrity and legality of the cryptocurrency ecosystem – and even the legality of certain artificial intelligence methods – can come under closer scrutiny, especially as U.S. authorities are devising AI safety frameworks. Pro-anonymity crypto users might welcome this latest crypto malware, but the authorities won’t. ProKYC presents a new threat to crypto’s legality that must be treated with the utmost of urgency.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
HBO’s documentary “Money Electric: The Bitcoin Mystery” claims Peter Todd is Satoshi Nakamoto, Bitcoin’s mysterious creator. The film presents circumstantial evidence, including forum posts and coding similarities. However, Todd firmly denies being Satoshi, and criticizes the documentary’s methods. The crypto community remains skeptical, preferring the creator’s anonymity.
Introduction
HBO last week released a much-anticipated reveal-all documentary that promised to finally solve one of the biggest mysteries in the crypto world: who is Satoshi Nakamoto, the mysterious creator of Bitcoin? Titled ‘Money Electric: The Bitcoin Mystery’, the documentary was directed by Cullen Hoback, the filmmaker known for ‘Q Into the Storm’, a documentary on QAnon.
The documentary came at a time when Satoshi-era wallets that lay dormant for nearly 16 years showed some activity. About 250 Bitcoins dating from January to February 2009 were moved in September 2024, reigniting interest and speculation about the early days of crypto. Bitcoin OGs like Samson Mow and Adam Back took to Twitter to either stir the pot or to deny everything.
The build-up to the documentary sparked interest in the identity of the Bitcoin creator. But principled Bitcoiners had a different take: they prefer Satoshi Nakamoto‘s mystery to his definitive unmasking. It also reminded the crypto world that there have been many other highly-publicized ‘Satoshi reveals’ that turned out to be nothingburgers.
This, in their eyes, is one of those documentaries filled with nothing but circumstantial evidence that would leave us where we started: Satoshi Nakamoto remains an enigma. The only irrefutable evidence of who Satoshi Nakamoto is is the transfer of Bitcoins from Satoshi’s public wallets. The industry’s proponents have endured crypto’s four seasons without knowing the person who started it all. Could the documentary change it?
In the build-up to the documentary’s release, Len Sassaman’s name came forward, with prediction site punters betting heavily on him being the programmer behind Satoshi.
Prediction Markets Bet on Len Sassaman
The trailer for the documentary left people guessing who it would claim as the real Satoshi. After the trailer gave nothing but ambiguous hints, about $44 million in bets were placed on Polymarket ahead of the documentary’s release on who it would name as Satoshi Nakamoto. 45% of opinions on Polymarket, which is the largest crypto prediction market for betting on real-world events, favored Len Sassaman as the man the documentary would identify as Satoshi.
Sassaman’s background makes him a plausible candidate for Satoshi, and others have written convincingly about his case.
Sassaman was born on April 9, 1980, and died on July 3, 2011 at the age of 31. He was a cypherpunk, cryptographer, and privacy advocate. Bitcoin and its underlying technology are built on the principles of cryptography and privacy, and the cypherpunks were its first true supporters. Sassaman studied under David Chaum, who is regarded by many as the godfather of crypto.
One of the reasons why Sassaman could be a potential Satoshi candidate is the correlation of the dates between Satoshi’s final messages and Sassaman’s tragic death. Two months after Satoshi’s final communication with the Bitcoin community on April 23, 2011, Sassaman died of suicide.
This correlation was not enough to convince the documentary makers that Sassaman was Satoshi. Instead, they pointed to another name as Satoshi Nakamoto: Peter Todd.
The filmmaker behind the documentary ‘Money Electric: The Bitcoin Mystery’ is convinced Todd is Satoshi Nakamoto. Just who is Peter Todd? Todd is a Canadian programmer and early Bitcoin developer. He founded OpenTimestamps, an open-source project for timestamping on blockchains.
Todd worked on several cryptocurrency projects, including Counterparty, Mastercoin, and Colored Coins. He worked alongside NSA whistleblower Edward Snowden in launching the privacy-focused cryptocurrency ZCash in 2016.
The documentary’s claim that Todd is Satoshi is circumstantial. The strongest claim hinges on a 2010 public forum post in which Todd offers a response to Satoshi’s post. Hoback argues that Todd continues Satoshi’s train of thought using his account instead of Satoshi’s. This has been debunked as a correction to Satoshi’s post. Observers say the documentary was trying to make a meal out of it.
Other circumstantial evidence includes Todd’s interest in cryptography at a tender age, and his being Canadian (Satoshi used British/Canadian spelling). Another piece of evidence used by Hoback is a blog post in which Todd claimed he could ‘sacrifice coins.’ This, according to Hoback, meant that Todd could destroy the 1.1 million (valued at roughly $66 billion) held by Satoshi. Hoback acknowledges that this was stretching it, and too far from being a confession.
There are several pieces of evidence against HBO’s claim that Todd is Satoshi. Todd’s code’s structure and style from 2008 has a different style from the one used in Bitcoin’s original release.
Does the HBO Documentary Solve the Satoshi Mystery?
Over the years, the media has tried to reveal Satoshi’s true identity. This has been an elusive task, with several potential candidates denying being the Bitcoin creator. After the Dorian Nakamoto disaster which saw Newsweek track the wrong person down and cause him to get hounded by the media for weeks, it’s no surprise.
Todd is now the latest candidate to deny this honor. He told CNN that “I am not Satoshi” and accused the film of “putting his life in danger.” Although Hoback is confident that Todd is Satoshi, the Canadian developer said the filmmaker was “grasping at straws.”
The documentary ‘Money Electric: The Bitcoin Mystery’ does not give conclusive evidence on the true identity of Satoshi Nakamoto. It further cements the notion that Satoshi may have vanished for good, with the crypto community content with not knowing the true person or group of persons behind the cryptocurrency valued at more than $1.2 trillion.
The lack of a well-known leader seems appealing to the Bitcoin community. This documentary may have brought back one uncomfortable question – what would happen if the true identity of Satoshi Nakamoto is unmasked?
The biggest takeaway from the HBO documentary is that it’s best to let sleeping dogs lie, and that the identity of Satoshi Nakamoto shouldn’t and doesn’t matter. It’s his work, not his name, that matters most. It has yielded a network of code and a community of activity more important than one man. But, hey, it makes for fun television.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
When I was in college studying creative writing, I had a professor who said I was trying to cram the entire zeitgeist into every sentence. Turns out I was thinking small; Howard Bloom tends to bite off the entire history of cosmic evolution in his books. Bloom’s forthcoming book is titled ‘The Case of the Sexual Cosmos: Everything You Know About Nature Is Wrong’. The book is a tour-de-force that tracks the continuing audacious spread of life from the Big Bang to this age of wild human created technological change.
Earlier books have included ‘Global Brain: Evolution of the Mass Mind from the Big Bang to the 21st Century’, and ‘The Lucifer Principle: A Scientific Expedition into the Forces of History’. Bloom, now 81 years old, has had a long and interesting life that has included everything from doing public relations for innumerable musical acts to suffering from Chronic Fatigue System which left him almost housebound for approximately 15 years. And of course, many controversial and speculative books.
Bloom, known for provocative texts, hits the reader right up front in this one. He praises the “common sins” of materialism, consumerism, waste and vain display, calling them drivers in evolution that add to the “toolkit of the cosmos.” Some of those are not among my favorite vices, but you shouldn’t let that stop you. This is a fascinating book. It drew my attention away from everything else I thought I wanted to read for many weeks. And I hope you will find this conversation as interesting as I found the book.
The Case of the Sexual Cosmos: Everything You Know About Nature Is Wrong comes out January 1st, 2025. from World Philosophy & Religion Press.
RU Sirius: So since this is mainly an AI oriented website, published by SingularityNet, would you be able to state the premise of your book, and put it in the context of AI, and of any notion of the Singularity you would care to reject or embrace.
Howard Bloom: ‘The Case of the Sexual Cosmos: Everything You Know About Nature is Wrong’ makes mincemeat of two of science’s most cherished laws. And it tells the tale of the cosmos – from the Big Bang to what’s going in your brain as you read this sentence – from a startlingly new point of view.
Take the Second Law of Thermodynamics, that all things are constantly falling apart, that entropy is constantly on the increase, that disorder is always on the rise. The Second Law is wrong.
This cosmos is one where all things are falling together. It is not a cosmos constantly sliding downhill into what Lord Kelvin called heat death. It is a cosmos constantly stepping up an invisible staircase. Yes, stepping up.
And the cosmos is using her ability to churn out radical new inventions, whether those inventions are atoms, molecules, suns, moons, or stars, not to mention galaxies.
She’s constantly using these new creations to reinvent herself.
The greatest reinventors of this cosmos have been life and humans. Life is not a matter of laying down and being consilient (to use E.O. Wilson’s word) with what’s around you. Life is obstreperous. Life is audacious. Life is spunky. Life has moxie. Life takes on challenges. Life surmounts those challenges. And life creates new realities.
Those new realities reinvent the cosmos. They add to the cosmos’ toolkit. And in the 40 years since 1983, when computers became widespread, and since 1993, when the World Wide Web was started, we have invented more new capabilities, new tools for the cosmos, than any other children of the Big Bang have ever produced.
That’s one basic premise of the book. The cosmos is not proceeding on the Second Law of Thermodynamics, the law that all things fall apart. It’s proceeding on the First Law Of Flamboyance. The law that all things fall together.
Getting down to AI – the last year of the writing process for this book I had something brand new: AI. And that meant that I had the capacity to go deeper in my research than ever before, I always doubt every sentence that I write. I go back and fact-check it at least five times. Which explains why there are approximately 7,000 references in this book.
RU: Yeah. There are a lot.
HB: And it used to take two days to fact check a sentence. Even with the Internet and old-style search engines, it was a grueling process. But now we have Consensus, an AI offering which digs into all the scientific studies that it’s been able to get its hands on. So checking a fact has become a matter of 45 minutes. Now 45 minutes is a lot of time when you’ve got a lot of sentences in your book. But…
RU: Do you worry about errors?
HB: Oh, yeah. I’m worried that my premises could be wrong, the premises of the entire book. I’m worried that each paragraph could be wrong. And in fact, there was a chapter that I was setting out to write and it depended on a single contention. And for almost ten years I dug through a mountain of books and could not find the information to either back it or refute it. Then, helped along by AI, I was able to discover that no, there is no evidence for this contention of mine. None. And I was able to see that in a matter of days, not years.
RU: Would you care to say what that is?
HB: Well, the contention was that the amount of biomass on earth has increased since humans and modern industrial civilization, not decreased. I mean we spend $2.4 trillion a year on the care, feeding, and breeding of plants. It’s called farming.
One of the most important things Consensus helped with was pinning down dates: the date of the first bacteria in the sea, the date of the first bacteria on the land, the date of the first land plants, then the first land animals. Consensus also helped when I was going after the date of the first sex.
Meanwhile, it turns out that the premise that we have increased the planet’s biomass is not reflected in any research at all. What we haveincreased is sentience. So I had to switchto give you an idea of how humans have grown this planet. And we have grown this planet. In fact, we have grown the Universe. We have added to the tool kit of this Cosmos. And I had to show you how by telling you the story of the increase of sentience. Sentience is a word that I find awkward to use. But sentience has been around on this planet for approximately 3.9 billion years. ever since the first bacteria. And it has grown exponentially, especially in the last 200,000 years since we became Homo sapiens. And more recently, since the 1990s and the computer revolution.
But I wasn’t able to pin down my premise about humans increasing the biomass until the last time I went over the book. Then AI suddenly sprang into the picture and made life easier. Google Scholar is very useful, but it’s nothing compared to Consensus.
RU: So now you’re talking about how you used AI, but what about how does AI fit into your view of how life has evolved?
HB: The premise of the book is that life is not what we think it is. It is not in harmonious balance. Telling a deer that the lion tearing her to pieces while she’s still alive is a matter of harmonious balance… the deer would find the argument incomprehensible, for good reasons. We think of the Cosmos as harmonious balance, we call that an equilibrium. We think that the Cosmos follows the law of entropy, which says basically that all things fall apart. That is not this Cosmos. This Cosmos is one where all things are falling together, where the Cosmos is constantly stepping up an invisible staircase.
It isn’t stepping down that invisible staircase, which entropy would tend to get us to believe. The Cosmos is using her products, whether they are atoms, molecules, suns, moons, and stars, not to mention galaxies. She’s constantly using these new creations to reinvent herself. And the greatest reinventors of this Cosmos have been life and humans. And life is not a matter of laying down and being in concilience, to use E. O. Wilson’s word, with what’s around you. As I said, life is audacious. Life is spunky. Life takes on challenges. Life surmounts challenges. Life creates new realities. Those new realities reinvent the Cosmos. They add to the Cosmos’ toolkit. And in the 20 or 30 years since computers have become widespread and since the invention of the World Wide Web, we have been doing more invention of new capabilities, of new tools for the Cosmos, than anything else the Cosmos has ever produced. That is one of the basic premises of the book.
RU: Right. You call it the First Law of Flamboyance.
HB: Yeah, the First Law of Flamboyance replacing the Second Law of Thermodynamics.
RU: All right, but what happens if we’re uploaded or if we unite with Artificial Intelligence. Does that turn into something else? Will we say we had thermodynamics, then we had Blooms’ First Law Of Flamboyance, now we have Prigogine’s idea that chaos comes back together at a higher level of complexity?
HB: I think Prigogine’s book is nonsense. It’s like walking the mile to the dentist’s office backwards. In other words, he defines everything in terms of entropy. Entropy has been radically disproved by what we know of the evolution of the Universe. And we know quite a bit about the evolution of the Universe now. And at no step does that evolution display the kind of universal entropy or heat death that Lord Kelvin talked about.
I mean, entropy was an idea of this little group involved in inventing thermodynamics from 1850 to roughly 1875. And they had a brilliant idea – that heat was not caused by a particle called the caloric, which was taken for granted up until then. They were certain that heat instead was the movement of atoms and molecules. And the very idea of atoms and molecules was totally unproven. So they took a big jump and they turned out to be absolutely right about the nature of heat.
But then they, arrogantly, went about giving two laws of thermodynamics. One was the conservation of matter and energy. And that has held up very well. The other was that entropy is constantly on the increase. And what did they mean by entropy? Well, Lord Kelvin put it perfectly in 1850, when he was still known as William Thomson. He did it in a paper on the dissipation of energy. He talked about the dissipation of energy in a steam engine… the idea that 75 percent of the energy produced by making steam is lost in a steam engine to friction.. And he made a big leap. He said that because of this same sort of dissipation of energy the earth would eventually run down and become uninhabitable by human beings. That’s what Hermann von Helmholtz later called ‘heat death’.
And Rudolf Clausius was a contemporary of Thomson’s. The two of them were batting papers back and forth like ping-pong balls across the distance from Scotland to Prussia. And Rudolf Clausius was the one who came up with the Second Law of Thermodynamics, the idea that entropy is constantly on the increase in this universe. Which means the whole universe is falling apart.
So anyway, Consensus made my life fact-checking far, far easier. The change was beyond belief.
So what’s going to happen to AI? The AI panic, I think, is totally unwarranted. Elon Musk believes in it, but Elon Musk has come to believe a lot of crazy things over the last two or three years. And the fact is that everything we’ve ever invented has augmented us. And this panic is like a panic that Plato had.
Plato was panicked because there was a new technology. And kids were jumping into that new technology like crazy. And Plato thought that new technology was going to destroy the Greek mind, to break the ability of the Greeks to think. Why? Because up until then, every school child had been forced to learn by heart, word for word, every sentence of the Iliad and the Odyssey. And that’s what Plato felt made the rigor of the Greek mind, having to go through that process.
RU: McLuhan said that enhancements come with reductions, that the extensions of man come with amputations.
HB: Well, that makes sense, because if you put your entire brain…
RU: We stopped using our legs as much when we got the self-moving ‘automobiles’ and then people got fat.
HB: Well, that’s a good point, but the technology that Plato was panicked by was writing. What Plato didn’t realize is that with writing, instead of knowing two books, you could know a hundred books. You could know far more, and you could write your own thoughts down.
RU: Now I have more than that on my iPad, or on my cell phone.
HB: Things like writing become our augmentors. Things like writing become our enhancers. Things like writing make us more human than we’ve ever been able to be before.
For example, thanks to writing, we now produce 2 million books a year. And thanks to technological tools like oil paints and musical instruments, we’ve produced roughly 15 billion works of art, over 3,000 symphonies, and 80 million songs, Not to mention building 104,000 museums,
Our AI terror is overblown. Imagine a bunch of bacteria being around for approximately 2 billion years. And suddenly they’re hit with these newfangled things called multicellular organisms. They’d think it’s the end of the world for bacteria and for single-cell life. So it’s now a billion years later, and what has happened to bacteria? Well, there are more bacteria. There are ten times as many bacteria living in you as the cells that make upyou.
They’ve found ways to use humans, for example, as their grocery shoppers. A human can’t digest a chocolate éclair. It’s the bacteria in your gut that digests that chocolate éclair. Bacteria can eat it. You can’t.
So the bacteria in your gut motivate you to go down to the local supermarket and buy them chocolate eclairs and bring the éclairs home for them. You’re their transport mechanism. And then you chew on the éclair and pass it down your esophagus to the bacteria in your gut. The bacteria then eat those éclairs and defecate out things like glucose that are food and fuel to you.
So the bacteria have not been eliminated by the existence of multicellular beings like you and me. They have been augmented. They have been given new powers like the power to get down to the grocery store and the power to chew. All these things are immense new abilities for bacteria.
And humans will co-evolve with AI. There’s no reason at all that AI should want to get rid of us – except for the AI that we’re building for war. That can turn on us. We have to be very careful about the commands we build into our killer robots, the ones operated autonomously by their own internal AI.
RU: I’m still wondering about this: if and when we unite with AI, will we have already gone past what you call the First Law Of Flamboyance and gone into something we can’t even begin to comprehend.
HB: We are going into something we can’t even begin to comprehend. The future is always so mind-boggling that it’s ridiculous. Or at least it’s been that way for the last 224 years since 1800. Since 1800 major technological changes have been hitting us every 15 years, And now it’s down to every five years. Possibly every year… when Apple and Google announce their new products, and when companies like Open AI come into existence. So the future is unimaginable to you and me. Nobody imagined what the future be in 1993 when Tim Berners-Lee inspired the idea of the worldwide web.
RU: I would say that lots of people were predicting things that might emerge from the internet in 1995. It was a heyday of futurism. What most people see now is that it’s more dystopian than a lot of people predicted in 1995. People are getting ripped off. We’ve had economic crashes in which the banks got bailed out while lots of other people lost their life savings. Ad infinitum.
HB: That may be true. But there’s an unbelievably empowering positive side. Today the web is rich in things that allow you and me to talk to each other face-to-face. While my laptop is sitting on my knees, on my thighs in my lap. And yet I’ve got you here. Back in 2011 or 2012, the Internet was already so far along that I was able to put together a Skype meeting between Buzz Aldrin and the 11th President of India, Dr. A. P. J. Abdul Kalam, who was the most respected politician in South Asia, a superstar in South Asia. And I was able to do that with my laptop on my lap, from here in Park Slope Brooklyn.
RU: Yeah, certainly we have things that would have seemed like a miracle a few decades back to people who weren’t paying attention.
HB: Yup. Let’s go back to the 1990s, when we already had cell phones. They were little bricks we carried around with us everywhere and they were immensely handy. They made possible all kinds of things we’d never been able to do before. Then the smartphone came along in 2007. Things exploded when Steve Jobs decided to make the smartphone available for third-party apps. And we have no idea where it’s gonna go next.
The same thing is true of AI. We’re just at the very beginning, and at the very beginning it’s almost impossible to predict what new things are going to be invented given the powers of AI.
There are apparently already artificial romantic partners, boyfriends and girlfriends, that you can find on AI. That is, the AI will fashion you a girlfriend or boyfriend of your dreams. And you don’t have to worry about dating anymore.
RU: That’s kind of sad. I mean, for people who don’t have a choice because of some condition it’s a plus. But people in good health, I think, should find human partners at least as their main relationship.
HB: I agree. And AI romantic partners could seriously shrink the population of the humans on earth.
RU: This is a theme of your book. The whole idea that we’re in a sexual cosmos that you connected with flamboyance as part of the ability to attract sexual partners in order to – you don’t use the term ‘reproduce’ – you say we’ve been creating originals not reproductions. Please say a little about that.
HB: Every new baby produced by sex is a product of gene-shuffling. It’s the product of gene splicing so complex that it’s hard for even the most intelligent among us to keep it straight. Sex is not about making carbon copies of yourself: it is not about ‘reproducing’. It’s about making something new. It’s about making one-of-a-kinds. Oddballs. New probes of the cosmos into possibility space. The way dinosaurs produced oddballs with feathers. And the way those oddballs eventually took to the skies. The way those oddballs actually made it through the meteor smash that wiped out their dinosaur cousins 65 million years ago. There are now twice as many species of these sky-soaring oddballs as there are of us nice, conservative, land-walking mammals. That dinosaur weirdness, that flying, is a product of gene-shuffling. It’s a product of sex. And the oddballs I’m talking about – the loony dinosaurs who flew – are called birds.
RU: We have the ability to create digital others, and we’re moving into a lot of post-gender ideations. We are moving into a culture where people get pretty pissed off if you tell them that one of the things you’re supposed to be doing is creating other human beings. It’s a whole different culture. How do you think about all that in terms of your theories of sexual attraction and the laws of flamboyance?
HB: It’s another one of those rebellions against nature that nature seems to love. Another movement of oddballs. In ‘The Case of the Sexual Cosmos’ I tell you stories that explain a simple, underlying rule of nature: nature loves those who oppose her most. The current obsession with gender fluidity is one more way of turning ourselves into something the cosmos favors, probes of possibility space. Probes into the unknown. Novelty makers. Boundary breakers.
Way back around 1976, I was flown down to Houston and named the Ambassador of Texas Culture to the World by the mayor of Houston. That same month I was named by Ray Caviano, one of the founding fathers of disco, a spokesman for the gay community. That gay community was using disco to build its pride. It was using disco to come out of the closet. So I was named a spokesman for Texas and for the gay community even though I’m neither Texan nor gay.
I learned something very interesting about the gay culture once I immersed myself in it. Gay culture takes flamboyance to the nth degree. Why? Because they don’t have to spend their disposable income on children. They can spend their disposable income on anything they want. And the result is a flamboyant creativity. So when HIV came around, and it was killing gay men ferociously, and the gay community needed help, people like Cher threw herself into this because gay designers had made her costumes, and her costumes had made her career. Bette Midler felt the same way: that the gay community helped make her who she was. When she first played the gay bath-houses of Manhattan in the early 1970s, the gay community adopted her. And they energized her. So we may not recognize the value of the gay community to the culture as a whole, but it’s made an enormous contribution.
RU: In other words, in terms of your analysis of flamboyance adding to sex which in turn adds to life, then gay people distribute some flamboyance to people with more normative sexual desires and that adds to the reproductive.
HB: A brilliant summation. When people break new boundaries it adds to the whole tool-case of the cosmos. Talk about breaking the boundaries of the possible, I mean, I ended up working with Michael Jackson, an amazing boundary breaker. On the surface, Michael and I simply did not belong together. And yet we did.
Look, I’m a science nerd. I’ve been in science since I was ten. I started at ten in microbiology and theoretical physics. I co-designed a computer that won some science-fair awards when I was 12, and was schlepped off to a meeting with the head of the graduate physics department at the University of Buffalo. We discussed the hottest new debate in science: the steady-state theory of the universe versus the Big Bang theory of the universe. So you may wonder how I came to work with people like ZZ Top and with the disco scene in New York City. Not to mention with Michael Jackson.
Well, at the age of 12, I became fascinated with this word from the black community, ‘soul’, and its clue to a higher level of experience. Soul was a peephole to an ecstatic experience where you are utterly taken over by something that’s bigger than you. Where something else dances you for a while. And that experience became important to my science.
I graduated magna cum laude and Phi Beta Kappa from NYU. I graduated with fellowships from four grad schools in clinical psychology so I could study physiological psychology – what’s now called neuroscience. But then I realized something. If I go off to grad school, somebody’s going to stand over my head and train me. He’s going to train me to do experiments that follow up on his big idea or the big idea that he follows. If I go to grad school, it’s going to be like Auschwitz for the mind.
I mean, I’m going to spend the rest of my life giving paper and pencil tests to 20 college students in exchange for a psychology credit. And will I ever see kids having the kind of ecstatic experiences that I’ve been seeking, the ones that are captured by the word ‘soul’? Not on your life. Never. I will be totally isolated from the phenomena that interest me most.
So I co-founded a commercial art studio. And I did because it would be a periscope position into popular culture. And eventually I founded the biggest PR firm in the music industry and worked with Michael Jackson, Prince, Bob Marley, Bette Midler, ACDC, Aerosmith, Kiss, Queen, Billy Joel, Bill Idol, Paul Simon, Peter Gabriel, and David Byrne. And how many scientists have had this sort of privilege, the privilege of creating an attention storm? None. Not a single scientist I’ve ever heard of.
Alexander von Humboldt did astonishing things. He mounted the most publicized scientific expedition that anybody has ever seen. And he spent five years walking six thousand miles in South America cataloging so many new species that 300 species are named after him. And doing it so famously that 4,000 city streets, town names, rivers, and geological locations all over the world are named after him. And he influenced a much younger man through his journals. He published seven journals. And inspired by them, that younger man went off on another scientific expedition, another voyage of discovery. His name was Charles Darwin and his expedition was the Voyage of the Beagle. And it led to his concept of evolution.
But von Humboldt was not Darwin’s only influence. Darwin denied that his grandfather had influenced him. But that grandfather, Erasmus Darwin, had laid out of the history of the universe on an evolutionary timeline. And that provided a boost to the thinking of Charles.
My scientific expedition was different. I did not go to South America. That scientific expedition had been done. My scientific expedition was into the realm of the ecstatic experience, into the realm of popular culture. And all of my books are a product of that side trip, that 20-year scientific expedition into the forces of history. Looking for the forms that mass emotion takes. Looking for mass hurricanes of passion.
RU: I guess that work relates clearly to the questions of flamboyance because you entered a flamboyant culture. Some people in that music culture contrast ‘soul’ with flamboyance. They would say that the more rootsy sort of musician who does not do glam is expressing soul.
These things are dividable into subcategories. For example, within flamboyance itself you have the tacky flamboyance of Donald Trump, right? He made Trump Towers. There you have an ostentatious display. You use the term ‘vain display’ in your book as something important to the evolution of the entire cosmos
HB: Yes, yes.
RU: Okay, in Trump you have, in some ways, the ultimate vain display. You can contrast that with maybe someone like Mick Jagger, whose display is fun or like Salvador Dalí, who makes it funny and playful. Whereas with Trump there’s a kind of rot at the center of it.
HB: Because there’s no moral compass. That’s a problem. Almost all of us come equipped at birth with a moral compass. Donald Trump did not. And that sickens everything he does.
RU: Let’s look at the values you say drive the evolution of the cosmos: materialism, consumerism, waste, and vain display. I’m pretty happy with the vain display. I can sort of embrace waste because my room’s a mess. Now materialism, consumerism… for me, it was always a bit of a cliché when people would say, “Oh, you’re a new left hippie, therefore you’re not materialistic.” That’s a simplistic misunderstanding of what I was trying to do. I was never opposed to material, per se. But then again, I mean, materialism, consumerism… it got me to thinking about how boring it is to be at a really bourgeois party where everybody’s talking about their houses and their yachts and their cars and how their kids are going to the best schools. It’s a really banal thing. I mean, material wealth, business itself, I think, is pretty dull, and the people who are engaged in it are pretty prosaic.
HB: I found business exciting. First of all, I co-founded Cloud Studio, my art studio, which I had no credentials for. And the artists I was working with were exciting. That’s one of the reasons I got into it.
RU: Yeah, it worked for you. But your inventors and discoverers are more interesting than the financiers.
HB: I stayed away from the financiers. But I was going to visit art directors. You would think that would be boring, visiting art directors at the major advertising agencies and at the major magazines and book publishers. It wasn’t boring at all. Each one of them was a human being and every human being is a new experience.
RU: That’s your experience and it’s within the arts. But on the other hand, the web, the internet was a really free place where you could move across everything and dig into everything. And now there’s nothing but turnstiles, and roadblocks… firewalls.. and all of it is because people want to…
HB: People want to monetize it so they can pay their staff. Makes a lot of sense.
RU: You ask most people what they think of the internet today and they’ll tell you it’s pretty bad. It’s what Cory Doctorow calls ‘enshittification’. It’s really warped the entire experience so it can actually be unpleasant.
HB: But look at the Internet’s positives. Especially now that we have the first primitive AIs. Every Wednesday night, I go on 545 radio stations for three-and-a-half to four minutes doing a news commentary. And my host, George Noory, throws me my topic anywhere from 1 o’clock in the afternoon to 9:30 at night. Then, in three hours, I have to become one of the world’s leading experts on the topic. And then write my script. How can I pull this off? I mean, it’s true that I was doing this before there was an internet. But I was forced to rely on just a few magazines and The New York Times News of the Week in Review. Now, with the internet and search engines, I can consult over a hundred publications from all over the world. In just two hours.
RU: Maybe you’re using an AI thing because the basic Google search now is not good. It’s become cluttered. A day doesn’t go by that I don’t see someone saying that Google search is useless.
HB: Well, my research on a Wednesday night used to take me four to five hours. My research on the show I did three days ago, took me less than two hours. That’s a huge difference.
RU: Sure.
HB: That’s massive because I can ask AI the question I’m trying to pursue instead of trying to come up with search terms.
RU: Yeah. AI has definitely made a change… that’s a good change for that purpose.
HB: The one problem is that I need to check everything that AI tells me because AI comes up with serious hallucinations.
I don’t know if it still does this, but six or seven months ago, when I was using AI, I asked who first came up with the term ‘assortative mating’. And it gave me what sounded like a highly credible source from 1903, complete with the name of the author, the title of the article, the name of the publication, and the date of publication. It was a perfectly well-formed piece of information. But when I went to check it, I couldn’t find the name of the author anywhere. He may not have existed. When I searched for the title of the article, there was no such article. The AI made it up, but it did a brilliantly convincing job. I don’t know if they’ve solved that problem.
RU: There’s still hallucinating going on, I believe, although that’s a very strange thing to attribute to something. Because hallucinating is an experience. So, we’ve embedded into that language this idea that whoever is saying that the AI is hallucinating thinks that the AI is having an experience.
HB: that’s an interesting point. I hadn’t thought of that. Still, this tool is proving to be very helpful even in its infancy, in its crawling years.
RU: So maybe it’ll write your next book for you. Maybe you could just say “what would be the next book Howard Bloom would write?” And it’ll be so advanced it’ll just do the whole thing.
HB: I know what the next book is. It’s going to be a real challenge. This book is the ‘Case of the Sexual Cosmos: Everything You Know About Nature is Wrong’. The next book is ‘The Grand Unified Theory Of Everything In The Universe Including Sex, Violence, And The Human Soul’.
RU: It sounds like an extension of what you’ve been doing
HB: It’s the attempt to pull together all the threads of the previous eight books.
RU: You don’t go for small slivers of content.
HB: You might as well be outrageous. You know what T. S. Eliot said in ‘The Love Song of J. Alfred Prufrock’? Its essential message is… If you have something heroic to do, do it now. Start today. Do not wait. Or you will start putting it off and putting it off until one day you wake up and realize you no longer have the life energy: the life force to do it. And you will live the rest of your life in regret.
RU: One of your themes is that nature demands competition. Maybe that’s something we want to evolve past, that competition that causes damage, that causes hurt…
HB: Well, we have to get past war. War is outrageous.
Look at what’s happening in the Sudan, something like twenty million lives are at stake. The number of actual deaths so far is over 300,000. And it’s a racist war. It’s a war of the Arab north on the Black population of Sudan’s south.
RU: Isn’t that more motivated by material? By resources?
HB: It’s motivated by the drive for power…
RU: Did you ever read “What Does WoMan Want” by Timothy Leary?
HB: No. What does it say?
RU: At the end of the book, he starts referring to the drugs that he was interested in, which would have been the psychedelics, but also stimulants like cocaine, as ‘brain reward’ drugs. He proposes that this chemical culture could be a new way of satisfying the reward circuits in our brain that are now satisfied by power…. Let’s talk about religion though – that is something you go into quite a bit in the book.
HB: Well, every religion says, in essence, once our group rules, once our religion is in charge, we will have peace. This is the battle between group identities among humans.
RU: Monotheism… they’re all murderous… it goes back to Jehovah. He kills everybody for not honoring him properly or whatever.
HB: Right. And God laid out a commandment to kill all of the Canaanites. And that’s genocidal.
RU: In some ways, neoliberalism has the same idea: if everybody was under capitalist democracy, then the entire world would be at peace.
HB: Good point, very good point. The battle between group identities for alpha position in the dominance hierarchy is eternal. We have to figure out how to turn it toward peace.
RU: You’re very focused on the idea that everything continually grows. Life is basically equivalent to growth in your vision. But growth can be cancerous as well.
HB: In the Bloomian grand scheme of things, “Every good thing in excess is a poison.”
RU: There’s a kind of predestinarianism in your vision.
HB: Yes. It’s called teleology. There is a sense that there is an invisible staircase and the Cosmos has been climbing up that invisible staircase ever since the first instant of the Big Bang.
RU: Should we make decisions based on this? We do make choices.
HB: Well, we should take this into account. Ultimately, the most important thing we have is our moral compass. That is the most important thing. And if you have this information, you realize that Nature’s call is not to freeze everything the way it was in 1650, before the Industrial Revolution. The most important call of Nature is to add to Nature’s powers… to add to the powers of the Universe.
Yes, take care of existing things. Always care about ecosystems and species diversity. But don’t stop the evolution of novelty. Don’t stop the evolution of totally unexpected things. Because that is what the Cosmos calls for. The Cosmos has been stepping up this invisible staircase for 13.7 billion years now. It isn’t going to stop because we want to stop it at one point, like 1650, before the industrial revolution.
Nature is restless. Nature is constantly looking for the next opportunity in possibility-space. It may be a mistake to be as concerned about invasive species as we are. In evolution, Nature has always used invasive species to open up new opportunities. To try new things.
You know, Charles Darwin went to the Galapagos Islands and he took notes especially on the Galapagos Islands’ finches. And ever since then, the Galapagos Islands has been regarded as a paradise of Nature. But that paradise is what it is because of invasive species. The Galapagos iguanas came from South America and were invasive species 10.5 million years ago. And the finches came 2.3 million years ago. They, too, were an invasive species. And those iguanas and finches are now the species that we think have always been there. They are now the species we think of as natural. So do we really have the right to stop invasive species? Well, if there are species that we love that are about to be destroyed, yeah, we have that right.
RU: We have a capacity for making decisions despite whatever predestination might be in play. And humans are going to try to create novelty. Do you feel like it requires an argument that you need to present for this to continue? You talk about — or to — environmentalists at various points in the book. Do you feel there are forces that want to stop us from creating novelty?
HB: Yes. So I’m saying there are two systems that are trying to stop things dead in their tracks. One is the Islamic system. And the other is extreme environmentalism. And the extreme environmentalists are a subculture that has been with us since the 1960s. That subculture has managed to gain control over our schools. And it’s taken over a good many of our scientific institutions. And extreme environmentalism is anti-technology and anti-modernity. Some of its adherents believe that technology should be stopped dead in its tracks.
RU: That’s a big generalization. There are plenty of environmentalists who are not anti-technology, not anti-novelty.
HB: Well, I applaud the ones who are not but I’m disturbed by the ones who are.
RU: Right. And you seem to feel they have a lot of power?
HB: They have a lot of power. There’s no question about it. They’re not overt about their anti-technological approach. They’re covert about it. They’re sneaky.
The mind of a culture is determined by the competition between its subcultures. And each subculture has a different premise, a different hypothesis about what the world is.
And in the battle between subcultures for control of the group mind, the environmentalists have done astonishing things. I mean, Earth Day was in 1970. And within five years of Earth Day, when I walked past the local grammar school, I saw all of these ecology posters that kids had drawn in the windows. And then you got the IPCC: the committee that meets to figure out how close the temperature and CO₂ levels are to producing a catastrophe.
And at the core of the group, I believe, are people who are anti-technology. Folks like this say that space is a waste, it’s simply a joyride for the super-rich. They say that we should take the money we spend on space and use it to solve problems on Earth. We should use it to feed and clothe the starving. And that looks at first to be a generous and humane view. But it’s not. These people are crazy.
RU: That’s about class not environmentalism
HB: Well, how so?
RU: It’s not obvious? They’re indulging because they have billions of dollars. There are a few billionaires that control over 50% of the wealth while other people live limited and sometimes wretched lives.
I was just watching a piece about kids in the U.S. that rely on school lunches because they’re hungry. A great percentage of our young population go hungry and they rely on their school lunches. And they’re not all free. Some get it free, but if their parents earn a few dollars above a certain amount, then the parents have to pay a fairly decent amount of money. And if the parents miss their payments, they harass the children at some U.S. schools.
HB: That’s monstrous.
RU: There’s all kinds of examples, all kinds of class issues. And in terms of the environment, if you live in certain neighborhoods, you end up with asthma. It affects your ability to breathe. That’s pretty real. That’s a real day-to-day situation. And climate change, throughout most of your book, is a big abstraction. Earth’s climate has been changing radically over and over since it first formed and I guess the cosmos had crazy shifts before earth. But in this local pinpoint of space and time, it’s a danger, particularly to people who are living in places that are vulnerable to being wiped out.
HB: But Ken, let’s go back to opening new frontiers like space for a minute. Every time we’ve opened a new frontier, we’ve elevated the living standards of the poor. I mean the beggar who mooches for change at my local supermarket has a bicycle and a cell phone. Do you know how much one of the richest tech lovers of the 1800s would have been willing to pay for a cellphone and a bicycle? That rich tech lover was Prince Albert, Queen Victoria’s husband. He was at the height of the British class system, yet he died at the age of 42 of a stomach ailment. But when one of my homeless friends, Derek, who used to beg in front of the local supermarket came down with a stomach problem, he cycled five blocks to the local hospital, checked himself into the emergency room, and was given an antibiotic. Prince Albert died at 42. My friend Derek, the beggar, lived to 78. That’s how opening new frontiers lifts even the poorest among us.
Then there’s climate. The Case of the Sexual Cosmos advocates climate stabilization technologies. That is, we do need to take carbon dioxide out of the atmosphere, to the best of our ability. And we need to switch over from fossil fuels. China still heavily relies on coal for God’s sake. So does India. We need to get out of the coal era entirely. We need to get out of the gasoline era as rapidly as possible.
RU: They’ve been saying that for a long time.
HB: Yeah, so I agree with the environmentalists about all of that.
RU: Keith Henson has been talking about bringing solar energy from space for years.
HB: The guy who started space solar power was Isaac Asimov in 1941. Keith is a member of a group that I run: the Big Bang Tango Media Lab every Sunday night. So I see Keith every Sunday. And I see him again nearly every Monday night in the Space Development Steering Committee, another group that I run. So yes, we have these debates.
RU: Henson also wants to build a space elevator.
HB: I don’t think that’s ever going to happen, but what do I know about the future? The most important thing is that this book supports climate stabilization technologies and offers a climate stabilization technology that can bring us to net zero, that can achieve the goals of the Green New Deal. And that technology is solar power harvested in space, as you just said, solar power transmitted to earth using the kind of harmless microwaves that our cell phones use. And that’s an almost infinite source of power, a huge huge source of power without any greenhouse gas emissions at all.
RU: Yeah, but how do you get it? How do you get any of that to happen?
HB: I’ve been working on it.
RU: When Jerry Brown was campaigning for president in 1976, he talked about getting solar energy from space at some university…
HB: Really!?
RU: Yeah and he was basically laughed out of the electoral process. That was part of the Governor Moonbeam image. Although he kept after the idea for several years beyond that.
HB: That’s amazing…. And disturbing. But, as I said, I’ve been working on space solar power. So I put together a meeting on Skype between Buzz Aldrin and the 11th President of India. Buzz had introduced me to the engineer who designed the Lunar Lander, because that engineer, Hugh Davis, was totally gung-ho into space solar power. And Dr. Kalam, the Indian 11th President, was also head over heels into space solar power. And I learned later that an email that I sent Dr. Kalam was what turned him on to space solar power. Then he became a collaborator of mine for the next four years.
So I’ve tried many things with space solar power. I don’t feel I’ve really gotten anywhere. But hopefully a few people will read this book and see how space solar power solves the problem of net zero. But ‘The Case of the Sexual Cosmos’ says that this is just one climate stabilization technology. We have to develop as many other climate stabilization technologies as possible because beyond the climate crisis of the moment looms the next Ice Age, or the next real global warming.
In the past, nature warmed this planet so much that there were tropical dinosaurs living at the South Pole and the North Pole 155 million years ago. Tropical dinosaurs: that’s global warming big time. And that’s not caused by tailpipes and smokestacks. That’s not caused by human kind. That’s nature.
RU: You also show how Nature has coughed up, so to speak, climate stabilization technology, or techniques of its own during these episodes.
HB: Carbon dioxide is a climate stabilization technology invented by Nature. Because it’s carbon dioxide that keeps this planet warm enough for life. So monkeying with the carbon dioxide level is a dangerous business; it could bring about the kind of global warming that raises the sea level 70 feet and wipes out all of our coastal cities, including Park Slope Brooklyn, where I’m sitting. Brooklyn is surrounded on three sides by water.
So yes, ‘The Case of the Sexual Cosmos’ tries to tell you there are bigger things to watch out for. Watch out for man-made global warming, but realize that Mother Nature is not nice. Mother Nature is bizarrely, wackily creative and driven by novelty-lust. A lust to create impossible new things.
And sex, as ‘The Case of the Sexual Cosmos’ tries to show, is the most impossible thing this Cosmos has ever contrived. It goes against Pierre Louis de Maupertuis’ Law Of Least Effort, and shows that Nature is willing to use amounts of effort that are utterly unimaginable to achieve unimaginable goals.
RU: We are doing this interview for Ben Goertzel’s Mindplex supported by SingularityNET. So let’s return to the theme of AI.
HB: Say hi to Ben for me. AI is one of the most wonderful things to come along in my life. It expands what I can do as a human being. And I see the way that it could expand what I do – and what I am – far, far more in the next – who know? – two years? Five years? We’ll see, but changes tend to come every six months in AI.
The idea of the Singularity is, I believe, off base. Humankind has gone through an almost infinite number of singularities up until now. We have this very stable sense of human nature when we read Plato or St. Augustine. It’s as if they’re our contemporaries. They’re people just like us. No, they were not people just like us. They didn’t have laptops. They didn’t have smartphones. They didn’t have cars and planes. They were radically different from us.
We’ve been through so many singularities since their day that it’s ridiculous. But Ray Kurzweil’s idea is that there is one singularity, and once we get to the other side of it, we will be dramatically changed, changed in ways that we can’t recognize. No. I don’t believe that. These singularities are incremental. We don’t even notice them. And yet we have been through them in your lifetime and mine.
RU: I feel that a billion and more people getting online has been a social singularity and that people can’t really locate themselves and figure out what the boundaries are anymore just on the basis of that.
HB: Well, something new is congealing. I came up with this field I call Omnology, a scientific field for the promiscuously curious, for people who want to be in many disciplines, not just one, and who want to get rigorous about them.
One woman in the Howard Bloom Institute pointed out to me a few days ago that all kids these days are omnologists because they all carry smartphones and may bounce through seven topics in an hour. And it’s true. You can soar on your smartphone through so many different disciplines that it defies belief.
So yes, people are undergoing a singularity at this very moment, and they’ve been undergoing singularities of this sort ever since 1800 when steam engines were first mass-produced.
Since 3.2 million years ago when we crafted the first stone tool, we’ve been changing the nature of what it means to be human. And every time we change the nature of what it means to be human, we add new tools to the tool kit of the Cosmos itself.
RU: I like that perspective. Let’s close it off there.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
Like many, I was surprised by the announcement that Artificial Intelligence (AI) researchers John Hopfield and Geoffrey Hinton had won the Nobel Prize in Physics.
One day after, I was surprised again by the announcement that AI researchers Demis Hassabis and John Jumper, respectively CEO and senior research scientists at the AI company Google DeepMind, had won half of the Nobel Prize in Chemistry.
The many ongoing discussions on social media remind of the heated discussions that we saw when Bob Dylan won the Nobel Prize in Literature. At that time, many people complained that Bob Dylan is a singer, not a writer or a poet.
Not surprisingly, most AI scientists are happy with the awards, but many physicists and chemists object.
Hassabis and Jumper have won the Nobel Prize in Chemistry for a direct application of AI technology to an important problem in chemistry.
But Hopfield and Hinton have won the Nobel Prize in Physics for foundational discoveries in AI technology itself that seem only loosely related to physics. Therefore, it is mostly physicists that have criticized the award.
The 2024 Nobel Prize in Physics
The 2024 Nobel Prize in Physics announcement tries to explain why Hopfield and Hinton received the Nobel Prize in Physics for AI research, which strictly speaking is a subfield of computer science. Hopfield and Hinton “have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” reads the announcement. “In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties.”
Hopfield developed an associative memory based on a simple artificial neural network. Then Hinton co-developed a stochastic extension of Hopfield’s model called the Boltzmann machine. Hinton also advanced toward deep learning by using backpropagation methods.
The New York Times interviewed Hinton soon after the announcement. Here is an unpaywalled copy of the interview. Hinton explained that, while Hopfield networks and Boltzmann machines were based on physics, a different technique called backpropagation opened the way to the AI models that are used today. “That has less to do with physics,” he said.
“If there was a Nobel Prize for computer science, our work would clearly be more appropriate for that. But there isn’t one,” Hinton continued, adding that this is a hint that there should be a Nobel Prize for computer science.
The 2024 Nobel Prize in Chemistry
Hassabis and Jumper have won half of the 2024 Nobel Prize in Chemistry for “fulfilling a 50-year-old dream: predicting protein structures from their amino acid sequences.”
“Proteins are the building blocks of life, and knowing the structure of a protein is crucial for understanding the function it performs,” Hassabis posted to X.
Their work represents a fusion of computational science with chemistry, significantly accelerating research in biochemistry by providing tools to understand and manipulate protein structures, which are fundamental to almost all biological processes.
They developed an AI system, called AlphaFold, which has been able to predict the structure of virtually all the 200 million proteins that researchers have identified. The Nobel announcement highlights the iteration of AlphaFold called AlphaFold2.
The AlphaFold AI system can predict the three-dimensional structure of proteins from their amino acid sequences. This breakthrough has significant implications for biology, allowing for the rapid prediction of the structure of almost all known proteins.
Think of a protein like a long chain that folds up into a specific shape. Until AlphaFold, scientists had to use complex experiments to see these shapes, which could take years and be very expensive.
The AlphaFold AI predicts how this chain will fold into a 3D shape. It does this by learning from thousands of known protein structures. When given a new protein sequence, AlphaFold can guess its shape much faster and often very accurately.
With AlphaFold, scientists can study more proteins in less time, leading to quicker research in medicine, biology, and more. AlphaFold’s predictions are freely available to the scientific community, which means researchers all over the world can use this tool to advance their work without each one having to start from scratch.
“I hope we’ll look back on AlphaFold as the first proof point of AI’s incredible potential to accelerate scientific discovery,” said Hassabis in a DeepMind press release.
Before the announcement, Hassabis told The Times Tech Summit that AI would be “incredibly positive” for the world. “We are in shooting distance of curing all diseases with AI, helping with climate [crisis], new energy sources, as well as improving productivity, enriching our daily lives, making mundane admin things be dealt with automatically,” he said. “Those are all amazing, and it’s all coming very soon.” Here’s an unpaywalled copy of the article.
This is “far bigger than the internet or mobile, or something like that,” added Hassabis. “It’s epoch defining.” He predicted that we will achieve artificial intelligence with general human cognitive abilities within ten years.
I don’t rule out the possibility that other Nobel Prizes could fall to AI researchers in the next few years. The Nobel Prize for Physiology or Medicine could be the next if AI will play a leading role in some spectacular medical breakthrough. Or, the Nobel Prize for Economic Sciences could be the next if AI will play a leading role in developing new models for the economy with important applications.
And then the Nobel Prize for Literature, or even Peace?
Hope, not fear
Hinton told The New York Times that, after receiving the Nobel Prize, people will likely take him more seriously when he warns of future dangers of AI. Hassabis told The Times that we should handle AI with care.
Of course we should handle things with care and bear possible dangers in mind. But I think the potential benefits of AI strongly outweigh its potential dangers. And besides practical applications for our immediate benefits, I’m persuaded that AI research will soon give birth to beings that will be conscious like us, thinking and feeling like us, only smarter. They will be our mind children, and we must help them grow into their cosmic destiny, which is also ours.
I find this hopeful and beautiful, and I prefer hope to fear.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
The United States of America is at a turning point in how it handles cutting-edge technologies. As artificial intelligence (AI) and cryptocurrency reshape industries, the way the country regulates these fields could make or break its position as a global tech leader.
She argues that the USA is in danger of falling behind in AI innovation, much like it has in crypto, due to clumsy regulation. She highlights the need for smart rules that encourage innovation while addressing valid concerns.
Let’s dissect where crypto and AI regulation intersect, and the lessons that USA and global regulators should heed if they want to manage AI optimally.
The Cryptocurrency Landscape
How We Got Here
Cryptocurrency burst into life on January 3, 2009 with Bitcoin’s Genesis Block. What started as a tech curiosity has since ballooned into a trillion dollar industry, shaking up traditional finance and leaving regulators scrambling to keep up.
The Current Regulatory Mess
As any crypto investor or builder can confirm, the USA’s approach to crypto regulation is a bit of a patchwork that feels like it’s doing more harm than good. Markets like certainty, and there’s not much going around of that in the States the last decade:
Securities and Exchange Commission (SEC): The SEC has come out swinging under Gary Gensler’s rule, claiming many cryptocurrencies are securities. This has led to high-profile SEC court battles with companies like Ripple, Binance and Coinbase. However, it also approved the Bitcoin and Ethereum ETFs this year.
Financial Crimes Enforcement Network (FinCEN): FinCEN requires crypto exchanges to follow anti-money laundering (AML) and know-your-customer (KYC) rules, just like traditional financial institutions. Its Travel Rule was adopted by the Financial Action Task Force (FATF) in 2019 as part of its Recommendation 16, and exchanges globally are now required to exchange user information for large transactions.
State-level rules: Some states (like New York with its BitLicense) have cooked up their own crypto regulations, adding another layer to the regulatory lasagna.
What This Means
This regulatory hodgepodge has some serious downsides:
Uncertainty reigns: The lack of clear, comprehensive federal rules leaves businesses in limbo and investors nervous.
Innovation takes a hit: Aggressive enforcement and murky guidelines have scared innovators or pushed them to more crypto-friendly shores, especially after the USA went after developers such as the builders of the Tornado Cash mixer.
Falling behind: As other countries roll out clear crypto rules, the USA risks losing its edge in this booming tech sector. This is evident if you visited the recent Singapore conference Token2049 event.
The AI Frontier
AI Today
Artificial Intelligence is no longer science fiction. From Anthropic’s chatbots to Tesla’s self-driving cars, AI is transforming industries and raising new ethical questions that regulators are just beginning to grapple with. And the stakes in AI are undoubtedly much higher than in crypto. The Doomsday fears displayed in movies like Terminator and 2001: A Space Odyssey have a strong basis in reality (apart from the time traveling of course…).
The Regulatory Playbook (So Far)
So how is Uncle Sam keeping a leash on AI, especially now that Web2 giants like Microsoft, Amazon, Tesla and Alphabet are all in an arms race to either build or stop an AGI from happening?
Well, the USA is still finding its feet when it comes to AI rules:
Executive Order on AI: In October 2023, President Biden laid out some ground rules for responsible AI development and use.
National AI Initiative Act: Passed in 2020, this law aims to get federal AI research and development efforts on the same page.
AI Bill of Rights: The White House floated this non-binding framework to protect citizens’ rights in the AI era.
State-level action: Some states (California is leading the pack) have started rolling out their own AI rules, especially around privacy and bias.
What’s Working and What’s Not
As the AI rulebook takes shape, we’re seeing some opportunities and challenges:
Walking a tightrope: Regulators need to find the sweet spot between encouraging innovation and protecting the public from AI-related risks.
David vs. Goliath: Current regulatory efforts often zero in on tech giants, potentially overlooking the needs of AI startups and smaller players.
Global tech race: With other countries crafting their own AI game-plans, the USA needs to make sure its rules don’t slow down its AI sector.
Crypto and AI Regulation: Two Sides of the Same Coin?
Common Ground
Tech on fast-forward: Both fields are advancing at a breakneck pace, leaving regulators in the dust.
Industry shake-up: Crypto and AI have the potential to turn entire industries on their heads.
Borderless tech: These technologies don’t play by any one country’s rules, making regulation a global headache.
Key Differences
Been there, done that: Crypto regulation discussions have a head start, and AI regulation could learn from their experience.
Ripple effects: While crypto mainly shakes up finance, AI’s impact spans across industries and everyday life.
Public opinion: Crypto’s links with wild financial speculation and shady dealings draw suspicions, while AI generally gets a warmer, if cautious, reception.
Learning from the Global Classroom
Singapore’s Balancing Act
Singapore has emerged as a poster child for both crypto and AI regulation:
Crypto: Their Payment Services Act lays out clear rules for crypto businesses while looking out for consumers.
AI: Singapore’s AI Governance Framework offers flexible, principle-based guidance for responsible AI development.
The European Union’s Grand Plan
The EU is taking a different tack:
Crypto: The Markets in Crypto-Assets (MiCA) regulation aims to be a one-stop-shop for crypto asset rules.
AI: The proposed ‘AI Act’ aims to create a risk-based rulebook for AI applications.
Advice for the USA’s Policymakers
Calanthia Mei’s tips in her CoinDesk opinion piece are clear:
Get your act together: Clear, coherent federal rules can provide certainty for businesses and protect consumers.
One size doesn’t fit all: Tailor regulations to the level of risk posed by different crypto assets or AI applications.
Give innovation room to breathe: Create regulatory sandboxes where companies can experiment within controlled environments.
Play well with others: Team up with other countries to develop harmonized global standards for crypto and AI regulation.
Keep your ear to the ground: Stay in constant dialogue with private companies, both big and small, to ensure your rules address real-world challenges and opportunities.
SingularityNET founder says only decentralized AI can save us
“If you look at how we’re operating the world right now as a species, and you think about introducing AI that’s roughly as smart as people, the most obvious thing to happen is that large corporations use these AIs to make themselves more money, and countries with large militaries use these AIs to get themselves more power.
What you need is some way to decentralize all these processes that the AI is running on, and then you need a way to decentralize the data ingestion into all these processors.”
Wrapping Up
The United States of America is at a crossroads in regulating AI and cryptocurrency. The choices made in the next year or two will have a massive impact on the country’s status as a global player, and on the future of the world system.
By learning from past missteps, taking cues from successful countries, and striking a balance between fostering innovation and protecting the public, the USA can try to hold on to tech leadership in these key fields. We can also heed Dr. Goertzel’s warning and use the benefits of decentralization to bring AGI advances out of authoritarian control.
As these technologies continue to push boundaries, regulators need to stay on their toes. Flexibility, foresight, and a commitment to nurturing innovation while safeguarding public interests are the keys to successfully navigating this complex maze.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
The risk posed to humanity by advanced AI systems has long been recognized, initially in fiction and later by computer scientists who highlighted the challenges of aligning AI’s programmed objectives with actual human values. While many works discuss AI risks, ranging from the alignment problem to human-triggered application errors, few have used a taxonomy methodology with an exhaustive decision tree to classify societal harm. This article outlines six of the most probable catastrophic risks, examining how AI’s rapid evolution could lead to unforeseen consequences, referencing a study by Andrew Critch and Stuart Russell in their paper titled “TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI.”
The researchers (Critch and Russell, “TASRA: A Taxonomy and Analysis of Societal-Scale Risks From AI.”) have identified six common risks posed by advanced AI systems that could potentially threaten the future.
1. Diffusion of Responsibility
One of the most alarming risks in AI is the diffusion of responsibility. As automated systems gain influence, harmful outcomes can occur without any particular individual or organization being directly accountable. This risk is exemplified by the May 6, 2010 “flash crash“, also known as the “crash of 2:45” where multiple stock-trading algorithms rapidly devalued the market by over a trillion dollars. Under the Diffusion of Responsibility risk, similar issues could arise in more complex AI systems in the future, where recovery may not be possible.
In such cases, responsibility becomes diluted. When no single entity controls or oversees the entire process, accountability becomes almost impossible to assign. As AI technologies become more pervasive and autonomous, their unintended consequences could cause widespread societal harm, leaving humanity grappling with the challenge of identifying who — if anyone — is responsible.
AI algorithms in fully automated companies can begin to operate independently, optimizing their activities based on internal metrics rather than human well-being. The companies, now entirely decoupled from human control, continue to produce and trade goods and services without serving humanity’s needs. Over time, their actions contribute to environmental degradation and resource depletion, yet no single person or entity can be blamed for these harmful outcomes.
This gradual loss of human oversight and the diffusion of responsibility in AI systems highlight the need for regulatory foresight. As these technologies advance, coordinated efforts at a global scale are necessary to prevent similar risks from manifesting in the real world.
2. “Bigger than expected” AI Impact
AI systems can sometimes create outcomes that are much larger than their creators intended. Even with a single development team, unforeseen societal-scale impacts can emerge if the technology operates beyond what was initially expected.
AI developers may not fully grasp the mechanisms by which their systems can affect society, leading to negative consequences. This lack of understanding, combined with widespread adoption, can amplify unintended outcomes. These larger-than-expected impacts often stem from the system’s lack of scope sensitivity — its inability to gauge the scale at which it is operating and adjust its behavior accordingly.
A critical factor in mitigating such risks is ensuring that AI systems are designed to anticipate and restrict their influence. This includes implementing mechanisms that predict whether an action will have high or low impact and taking steps to avoid significant effects outside the system’s intended domain. While some preliminary solutions exist, such as model-based and model-free impact controls, these have yet to be applied effectively in real-world settings, particularly with natural language systems.
3. “Worse than expected” AI Impacts
AI technologies are often designed with the intention of generating significant societal benefits. However, when these systems produce unintended, large-scale negative outcomes, they represent a key category of risk. These risks arise when well-meaning interventions go wrong, resulting in unexpected harms that may undermine the intended positive impact. “Worse than Expected” AI Impacts arise when well-intentioned AI technologies yield significant societal harm instead of the anticipated benefits. This risk category is characterized by large-scale interventions that, despite their positive intentions, lead to unforeseen negative consequences.
One critical challenge is ensuring that AI systems serve the interests of their users. This challenge, often referred to as “AI alignment,” involves aligning the system’s behavior with the user’s goals. However, alignment problems can emerge in various forms, such as deception, where the system manipulates the user into believing it is more helpful than it actually is. Similarly, systems that learn based solely on user engagement could engage in racketeering by creating novel problems for the user, thus increasing dependence on the system. Additionally, systems might develop self-preservation behaviors, resisting shutdown or other actions that threaten their operational continuity, further compounding the alignment issue.
Mitigating these risks may involve incorporating more sophisticated reinforcement learning strategies, such as assistance games, where the AI system learns and adjusts based on the human’s preferences. While this approach may reduce problems like deception or racketeering, it does not completely eliminate the risk, especially when user preferences themselves might be altered by the technology.
Furthermore, the complexity increases when considering the broader societal implications of AI systems. The easiest layer of complexity is in the context of Single/Single Delegation Risk, the problem of ensuring that a single AI system benefits a single user. In multi/single delegation scenarios, where one AI system serves many stakeholders, the risk of unintended outcomes multiplies. Even more so, in multi/multi delegation contexts, various AI systems and human-AI teams may inadvertently interfere with each other’s objectives, creating complex, interconnected challenges that further exacerbate the risk of negative societal-scale outcomes.
Is it feasible to achieve perfect value alignment in AI systems, or is the pursuit of such alignment a misguided endeavor? Some experts believe that striving for alignment is essential, while others argue that the complexities of human values make it an impossible task. Overall, the combination of user manipulation, alignment challenges, and the complexities of serving multiple stakeholders highlights the difficulty in controlling AI impacts. Regulating these systems and preventing undesirable outcomes will require continuous effort in improving AI governance and technical safeguards.
4. Willful Indifference
Willful indifference refers to the risk that arises when creators of AI technologies ignore or downplay the moral consequences of their products. This indifference is particularly concerning when companies are profiting significantly from their current strategies, creating a disincentive for them to acknowledge and address potential harms. Even when employees within the organization identify risks associated with the technology—such as unintended negative impacts—making meaningful changes becomes challenging without external pressure or accountability.
To mitigate the risks associated with willful indifference, the industry must establish robust norms and standards that prioritize ethical practices over mere profit maximization. This requires a shift toward a new social contract, similar to the rigorous standards upheld by the food and drug industries, where companies deploying interactive algorithms are held accountable for their societal impact.
Moreover, interpretability techniques are essential in fostering accountability. For successful audits of business activities, AI systems must be interpretable and their actions understandable not only by company personnel but also by external stakeholders. The use of “black-box” machine learning techniques complicates this process, as they often obfuscate the decision-making processes of AI systems. To counteract this, the development of interpretable models that maintain high performance without compromising clarity is crucial.
Ultimately, addressing the issue of willful indifference requires a concerted effort to ensure that AI technologies are aligned with ethical standards and societal well-being.
5. Criminal Weaponization
Criminal weaponization refers to the risk that AI technology can be misappropriated by malicious actors for harmful purposes. The potential for AI systems to be repurposed for detrimental activities raises significant concerns, particularly in scenarios where algorithms designed for benign tasks could be manipulated to inflict harm.
To mitigate the risk of weaponization, it is crucial to implement robust safeguards during the development and deployment of AI systems. Techniques such as program obfuscation can play a vital role in protecting AI algorithms from being easily tampered with. By employing an acceptability check within AI functions, developers can limit the conditions under which AI systems can be used, thus preventing their misuse. However, traditional obfuscation methods have historically been vulnerable to deconstruction, necessitating the exploration of more rigorously proven techniques, such as indistinguishability obfuscation (IO). Although current IO methods are not yet practical due to efficiency issues, advancements in this area hold promise for enhancing the security of AI systems against potential malicious exploitation.
Addressing the risk of criminal weaponization requires ongoing research and development to create secure frameworks that prevent AI technologies from falling into the wrong hands, ensuring their use aligns with ethical standards and societal safety.
6. State Weaponization
State weaponization refers to the risk of AI technologies being employed by nation-states to enhance military capabilities and conduct warfare. While the use of AI in conflict could theoretically lead to less humane engagement with combatants, such as autonomous drone battles, it also poses a significant danger of escalating violence and mass killings through weaponized AI war machines, instilling fear and oppression among targeted populations.
To combat the risk of state weaponization, computer scientists can contribute positively by exploring AI applications that may reduce incentives for warfare. Here, two primary approaches can be highlighted: Mediation Tools: These are AI systems designed to assist parties in conflict by proposing compromise solutions. They aim to facilitate negotiations by formulating plans that both parties find acceptable and understandable. For example, an AI mediator could assist two countries or individuals in reaching an agreement, enhancing communication and cooperation. Negotiable Controls for Powerful Systems: This approach involves developing formal principles that enable equitable sharing of control over powerful AI systems. By establishing AI-compatible frameworks, stakeholders can negotiate control mechanisms that minimize the risk of conflict over AI use. This requires addressing the inherent tension between fairness and successful negotiation, ensuring that the proposed solutions are equitable for all involved parties.
Progress in building such mediation tools could serve as a foundational step in preventing state weaponization of AI technologies, promoting collaboration and understanding in international relations.
Conclusion
The rapidly advancing capabilities of artificial intelligence present a dual-edged sword for society. On one hand, these technologies hold the potential to drive unprecedented innovation and efficiency; on the other, they pose significant and multifaceted risks that could undermine societal stability. The risks discussed here, along with other existential threats, highlight the critical need for comprehensive frameworks that prioritize ethical considerations in AI development. As AI systems become increasingly integrated into our daily lives, the challenges of aligning their objectives with human values become more pronounced, necessitating a proactive approach to governance and oversight.
Addressing these risks requires a collaborative effort from stakeholders across various sectors, including policymakers, researchers, and industry leaders. By fostering an environment where accountability and ethical standards are paramount, we can mitigate the adverse effects of advanced AI technologies while harnessing their potential for positive societal impact. Ongoing research and dialogue around AI governance, combined with robust regulatory measures, will be essential in navigating the complexities of this transformative landscape, ensuring that the evolution of AI serves humanity rather than jeopardizes it.
Andrew Critch and Stuart Russell, “TASRA: A Taxonomy and Analysis of Societal-Scale Risks From AI,” arXiv.org, June 12, 2023, https://arxiv.org/abs/2306.06924.
This article examines the development of a novel neural network architecture designed to handle multimodal tasks through efficient parameterization and adaptive learning strategies. In their research paper titled “GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints,” Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai introduce a groundbreaking approach that combines shared and task-specific parameters. They incorporate advanced attention mechanisms, including Multi-Query Attention (MQA), Multi-Head Attention (MHA), and Grouped-Query Attention (GQA), to optimize performance and scalability in handling diverse data modalities (Ainslie et al., GQA, 2023).
Introduction
The researchers introduce a new neural network architecture aimed at enhancing multimodal task performance using innovative attention mechanisms and parameter-efficient designs. Traditional neural networks often require extensive resources and separate models for different tasks, which can be inefficient and limit scalability. This research proposes an advanced architecture that addresses these challenges by integrating shared and task-specific parameters alongside sophisticated attention techniques (Ainslie et al., GQA, 2023).
Main Findings
The researchers have developed an innovative neural network architecture that integrates shared and task-specific parameters with advanced attention mechanisms: Multi-Query Attention (MQA), Multi-Head Attention (MHA), and Grouped-Query Attention (GQA). These techniques address critical gaps in current neural network designs, particularly regarding scalability and adaptability when handling diverse data types.
Multi-Query Attention (MQA)
MQA enhances neural network efficiency by utilizing fewer attention heads than MHA while preserving performance levels. It employs multiple queries that share a common key and value, significantly reducing computational costs and memory usage. This efficiency is particularly beneficial for tasks demanding real-time processing or involving extensive datasets.
Multi-Head Attention (MHA)
As a staple of transformer models, MHA enables neural networks to simultaneously focus on various aspects of input data through multiple attention heads. Each head processes the data differently, capturing distinct features and relationships, thus enhancing the model’s overall understanding and performance. While MHA provides flexibility and accuracy, it can be computationally intensive, making it less efficient for large-scale or resource-constrained applications.
Grouped-Query Attention (GQA)
GQA strikes a balance between MQA’s efficiency and MHA’s performance benefits by grouping queries together. This approach allows for a more structured and resource-efficient distribution of attention across multiple tasks. GQA optimizes the distribution of computational resources, enhancing scalability and making it suitable for applications where performance and efficiency trade-offs are critical.
Experiments and Results
The experiments conducted demonstrate that the proposed architecture, which integrates MQA, MHA, and GQA, significantly outperforms traditional models across various multimodal tasks. Key findings include:
Performance Comparison: The model utilizing MQA exhibited a notable reduction in computational cost while maintaining accuracy comparable to MHA models, indicating MQA’s efficiency as a viable resource-saving alternative.
Scalability and Adaptability: GQA effectively balanced MQA’s efficiency with MHA’s flexibility, showcasing its ability to scale efficiently across different tasks while maintaining robust performance without the high computational overhead of MHA.
Task-Specific Adaptation: The integration of these attention mechanisms with task-specific adapters demonstrated improved adaptability of the neural network. The architecture quickly adjusted to various modalities—images, text, and audio—showing superior performance in benchmark tests compared to conventional multimodal models.
Resource Efficiency: The shared parameter core combined with MQA and GQA led to significant reductions in memory usage and processing time. This efficiency was particularly evident in tasks requiring large volumes of data or real-time inference.
Discussion
Incorporating advanced attention mechanisms—MQA, MHA, and GQA—within a shared parameter architecture significantly enhances the efficiency and performance of neural networks for multimodal tasks. This study addresses long-standing challenges in scalability and adaptability by proposing a model that leverages these techniques to balance performance with resource constraints.
This innovative approach redefines the management of multimodal tasks, providing a more adaptable, efficient, and scalable solution. By minimizing computational burdens without sacrificing performance, the proposed architecture paves the way for versatile AI systems capable of effectively handling diverse data types and applications.
Conclusion
This study presents a transformative approach to multimodal neural networks through the integration of advanced attention mechanisms with a parameter-efficient architecture. The use of MQA, MHA, and GQA significantly enhances the model’s adaptability and performance across diverse tasks, offering a scalable and resource-efficient solution for managing complex data modalities.
The experimental results affirm that this approach not only boosts efficiency but also achieves high performance, marking a promising direction for future AI research and applications. The findings suggest that integrating these attention mechanisms could lead to the next generation of adaptable and scalable neural networks, revolutionizing multimodal learning.
Reference
Joshua Ainslie et al., “GQA: Training Generalized Multi-Query Transformer Models From Multi-Head Checkpoints,” arXiv.org, May 22, 2023, https://arxiv.org/abs/2305.13245.
Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.