
TESCREALism: Has The Silicon Valley Ruling Class Gone To Crazy Town? Émile Torres in conversation with R.U. Sirius

Oct. 06, 2023. 23 min. read.

Émile Torres and R.U. Sirius scrutinize the eccentricities and extremities that have become associated with Rationalism. They delve into the meanings, practices, ethics, and applications within Transhumanism and Singularitarianism.

Credit: Tesfu Assefa

TESCREALism
T = Transhumanism
E = Extropianism
S = Singularitarianism
C = Cosmism
R = Rationalism
EA = Effective Altruism
L = Longtermism

Émile Torres, a philosopher and historian who has focused recently on existential threats, developed what they refer to as a ‘bundle’ (we might call it a memeplex) that, they claim, links the above series of -isms into a sort-of singular force, one embraced by many of the super-wealthy and influential in the tech world. It is the influence of these tropes on the super-rich and influential that, in Torres’ view, makes these ideas very dangerous.

In an article for Truthdig, Torres writes, “At the heart of TESCREALism is a ‘techno-utopian’ vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe, and creating a sprawling ‘post-human’ civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI.”

In the same piece, Torres gets into the wilder projections that I suspect even many techno-enthusiastic, transhumanism-oriented Mindplex readers would find fantastic (rooted in brilliant minds taking their fantasies for reality). Torres’ argument leans heavily on the views of Oxford professor Nick Bostrom, who, they write, “argues that if there’s a mere 1% chance of 10^52 digital lifetimes existing in the future, then ‘the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.’ In other words, if you mitigate existential risk by this minuscule amount, then you’ve done the moral equivalent of saving billions and billions of existing human lives.”
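To make the scale of that quoted claim concrete, here is the expected-value arithmetic as I reconstruct it, in LaTeX (a sketch of the Bostrom-style reasoning above, not his own derivation; the labels N, p, and \Delta r are mine):

% A reconstruction of the quoted expected-value argument; the labels are mine.
% N  = 10^{52} hypothesized future digital lifetimes
% p  = the "mere 1%" credence that the 10^{52} estimate is correct
% \Delta r = "one billionth of one billionth of one percentage point"
%          = 10^{-9} \cdot 10^{-9} \cdot 10^{-2} = 10^{-20}
\begin{align*}
\mathbb{E}[\text{lifetimes gained}] &= \Delta r \cdot p \cdot N \\
                                    &= 10^{-20} \cdot 10^{-2} \cdot 10^{52} \\
                                    &= 10^{30}.
\end{align*}
% Roughly 8 \times 10^{9} people are alive today, so on this arithmetic the
% expected payoff of even a vanishingly small risk reduction exceeds the
% present population by about 20 orders of magnitude.

Whatever one makes of the premises, the mechanism is plain: choose a large enough N and virtually any nonzero reduction in risk will outweigh everyone now living.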

As they explained in their conversation with Douglas Rushkoff, Torres identifies TESCREALism as a philosophical ‘bundle’ that, in a sense, trivializes the lives and sufferings of currently existing humans by finding a greater importance in the possibly trillions of posthumans that could exist in physical and/or virtual space in the future — ‘people’ having experiences that can be valued beyond our imagining. Some of those quoted tend to use statistics to value experience, which is about as alienated from experience as you can get.

I can assume you all know about transhumanism and the singularity. If you’re here, you probably know about Ben Goertzel’s project to build AGI. But are most of you familiar with the eccentricities and extremities that have attached themselves to Rationalism (as defined by LessWrong), Effective Altruism and Longtermism?

In the interview below, I mainly ask Torres to thrash out how real all this is. Do a lot of people buy into the whole philosophical bundle? My own attitude, even as a longtime associate of transhumanism, has always been kind of “are you for real?” when it comes to people taking their shit too seriously, particularly when they’ve deluded themselves into thinking they’re rational. 

In a follow-up poll, I will ask Mindplex readers and veterans of the transhumanist culture to weigh in on the TESCREAL bundle.

RU Sirius:  In your book Human Extinction: A History of the Science and Ethics of Annihilation, you went from writing about existential threats as a historical phenomenon to various transhumanist tropes. As I was reading it, it was like suddenly we had gone from science and geology into science fiction. Then I was wondering if there was science fiction in older times. (I suppose there was the Bible and other myths.) How did you get into this? 

Émile Torres: Back in the mid-2000s, I encountered transhumanism for the first time. And I was initially really critical of it. The second paper I ever published was a critique of transhumanism. But then, certain considerations led me to believe that transhumanism is a defensible position, and I became a sort of transhumanist.

And one of the main considerations was that the development of these technologies is inevitable. So if you’re going to be against transhumanism, then maybe you need to be against the development of certain person-engineering technologies. But since they’re inevitable, there’s no point in opposing them just to hold back the tide. So the best thing to do is to join the transhumanists and do what you can to ensure that the project is realized in the optimal way.

The notion of existential risk was tightly bound up with transhumanism from the start: existential risk was initially defined as ‘anything that might prevent us from creating a posthuman civilization’.

RUS:  I’m sure there must have been mention of existential risk before that in various intellectual circles… like related to nuclear war and so forth?

ÉT:  There was definitely talk of extinction and global catastrophe. But what’s new about this idea of existential risk — right there in the definition — is the idea of desirable future development. 

There were people, particularly in the second half of the 20th century, arguing that one reason human extinction would be bad is that it would foreclose the realization of all future progress, future happiness, and so on. But that lost-potential argument was never formalized. The focus was really on going extinct. Everybody on earth is going to die. You and I are going to die. Our families will die. That was the foreground. Lost potential was not prominent.

The notion of existential risk, I think, flipped that around and foregrounded the lost potential: the argument became that the worst aspect of human extinction is the lost potential. It’s not the 8 billion people who are going to die. That’s very bad, but the badness of all the lost potential is orders of magnitude larger.

RUS:  I may be a bit out of touch with the transhumanist culture… to me this is a bizarre iteration of transhumanism. It’s not something I bumped into much when I was interacting with that world in 2007-2010 as editor of h+ magazine. At that time, you’d mostly hear about life extension and other enhancements. Or immortality, if you wanted to get really far out. The notion of uploaded mind children was around, but as something very speculative. But the idea of sacrificing just about everybody to imaginary future people as you’ve discussed in your writings about TESCREAL did not seem to be much in circulation back then.

ÉT: That sounds right to me. I think this notion of potential is really central to longtermism. The initial definition comes from 2002, with Bostrom discussing the transition dynamics from our human civilization to a posthuman civilization, foregrounding the potential of becoming posthuman. This was also bound up with the notion that the creation of posthumanity isn’t just valuable because it’s a good way for cosmic history to unfold — you and I might benefit too, right?

So why is creating a posthuman civilization important (according to Bostrom and people like him)? Well, because if it happens within my lifetime, maybe I get to live forever. Or even if it happens within maybe a thousand years, I still get to live forever because I’ll sign up with Alcor and get resurrected. So I really see this moment as a sort of pivot towards thinking about the far future. I think initially, for the transhumanists, it was bound up with their own fate as individuals.

RUS: I was thinking that maybe – for example – Eliezer Yudkowsky is being selfless when he talks about risking nuclear war and sacrificing most life on the planet to make sure AI doesn’t happen before he thinks we’re ready. Because it seems to me he could have at least a 50:50 chance of being a victim of the nuclear war that he is willing to risk to prevent the development of AI too soon. So I’m thinking he’s being selfless but he loves the idea of the blissful future humans so much that he’s willing to sacrifice himself.

ÉT: My understanding of the history is that it was really in the 2000s that people in this community became increasingly aware of just how huge the future could be. With that awareness came a corresponding shift in the moral emphasis.

Yudkowsky wants to live forever. On Lex Fridman’s podcast, he said that he grew up believing that he would live forever. And so part of the trauma for him, as he mentioned on that podcast, is being in this situation where AGI is so close, and he’s having to face his own mortality, maybe for the first time. It seems like his thinking exemplifies this pivot throughout the 2000s.

RUS: To me it sounds like it’s all fantasy. Some of this stuff that you’ve mentioned being part of this bundle – like the theoretical trillions of people, including digital people, having quantifiably great experience — it sounds like dorm-room stoned-nerd brainstorms that just never ended. They keep elaborating from the original premise, getting more and more isolated from real-world experiences turn by turn. Ideas used to mature – now they just seem to get crankier. I can’t prove it, but it could be the result of the attention economy. To misquote Niels Bohr, “Your idea is crazy, but it’s not crazy enough to get a following on social media.”

ÉT: With respect to the attention economy, my sense is that longtermists recognize that this vision of the future is kind of nuts. I mean, some of them have used the term ‘crazy town’. Consequently, I think they do their best to avoid mentioning what their actual goals are publicly. Crazy ideas do grab the public’s attention, but in this case, I think they feel that some of these ideas are not good PR. 

What About Useful AI?

Credit: Tesfu Assefa

RUS: Regarding your assertion that AI activity can only be explained by this ideological configuration: I don’t know whether you’re talking about practical AI for, say, tracking and responding to weather conditions, developing vaccines and other responses to pandemics, developing medicines, etc. Or if you’re referring only to AI that is performing what we consider intellectual or creative things.

ÉT: I don’t think AI in general is motivated by this ideology. The race to AGI is. And I think there are two factors. One that’s obvious is the profit motive. Microsoft and Google expect to make billions of dollars off of these large language models. But I think the other crucial component of the explanatory picture is TESCREALism.

It’s like… why did DeepMind form in the first place? Why did Demis Hassabis – who was at a lot of these transhumanist conferences – found it? And why did Shane Legg co-found it with him – Legg, who received $100,000 from the Canadian Singularity Institute for Artificial Intelligence after completing his PhD thesis, and who gave talks at the Singularity Summit conferences?

RUS: If I went to all the boardrooms in Silicon Valley and talked to the owners and the programmers, do you think most of them would embrace this entire TESCREAL idea? My guess is they would tend to be transhumanists, and quite a few might be singularitarians, but they are probably not into the ‘trillions of future people’ aspect of that project. I mean, how ubiquitous are these ideas really?

ÉT: In terms of the ubiquity of TESCREAL beliefs, I think you’re right. A lot of them wouldn’t even use the word transhumanism. You could ask, “Would you support re-engineering the human organism?” Or ask, “Are you funding projects to try to re-engineer the human organism so we can merge AI into our brains?” I think a lot of them would say yes. And they’d be for aspects of the longtermist worldview like the imperative to colonize space and plunder the cosmos. My strong suspicion is that’s the water that these people swim in.

An article I want to write would be about the terms various authors use to describe the culture of Silicon Valley – different terms, but ultimately describing the same phenomenon. So what I mean by TESCREALism is the same thing that far-right guy Richard Hanania calls the “tech right.”

There was a Huffington Post article about how he holds white supremacist views. And he said, “I hate the word TESCREALism.” So he called it the ‘tech right’. Douglas Rushkoff calls this ‘the mindset’ – he says it is everywhere in Silicon Valley among tech billionaires and so on; in talking to them about their views, he found that they all thought: “the future is digital. We’re going to upload our minds. We’re going to spread throughout space” and so on. What Rushkoff means by ‘the mindset’ is basically what I mean by TESCREALism. Would these people who embody ‘the mindset’ say, “yeah, I’m a longtermist, and I believe that the most morally important thing to do is to conquer space and create all these digital people in the future?” I don’t know. But their worldview looks a lot like longtermism.

RUS: Do you think a lack of concern for currently living people – a sort of political manifestation of the longtermist view – is driving some of the people of Silicon Valley towards right-wing extremism?

ÉT: I think that’s largely correct. I think some people, like Will MacAskill [a figure in ‘effective altruism’], really accept this very abstract philosophical position that what matters is that there are huge numbers of people in the future. And a lot of tech billionaires see this vision as bound up with their fate as individuals. So the thinking is like… “I want to build a bunker to survive the apocalypse so I can get to space, have my mind digitized,” and so on. And that definitely can lead to this disregard for most human beings. A wild example of this is the news that broke that Sam Bankman-Fried’s brother and somebody else at FTX had discussed the possibility of buying the island nation of Nauru explicitly so that members of the ‘effective altruism’ movement could survive an apocalyptic event that kills up to – as they wrote in the document – 99.9% of human beings.

The Singularity is Probably Not Near

Credit: Tesfu Assefa

RUS: Ben Goertzel said that I should ask you if you think the Singularity will happen. And if it will happen, will it happen in 2035 or 2050?

ÉT: I guess it depends on what one means by the Singularity. There’s the intelligence explosion interpretation… there’s the Kurzweilian idea that just has to do with the rate of change.

RUS: I think of the Singularity as the point where the AIs get smarter than us, and beyond that, you can’t predict anything. You can’t predict who we’ll be, or if we’ll be around, or what the world will be like. The science fiction writer Vernor Vinge was the first person to suggest that idea of a Singularity. We would make intelligences that would become as incomprehensible to us as we are to worms.

ÉT: I’m sympathetic with that view of the Singularity. There’s just not much we can say beyond it. I’m very skeptical of the intelligence explosion idea. And the rate of change idea from Kurzweil seems to be in direct and significant tension with the fact that a climate catastrophe is almost inevitable unless there’s some new technology that, at scale, removes carbon dioxide from the atmosphere.

RUS: Kurzweil shows that the rising curve of human technological development survived two World Wars (actually, world wars boosted technology development) and Mao and Pol Pot and all kinds of terrible events.

ÉT: I think climate change is different than that.

RUS: Yeah, I think so too.

ÉT: We’re talking about global civilization. Is it gonna survive? I don’t know. I mean, there are legit climatologists out there who don’t think it will unless there’s immediate action to avert catastrophic climate change.

I remember arguing, many years ago, with colleagues in the existential risk field, where I was claiming that climate change is a much bigger deal than they were suspecting. They thought: “We’ll invent AGI. And once we have AGI, it’ll…”

RUS: It’ll figure out what to do, yeah.

ÉT: Figure out what to do. But how are we gonna create AGI in a world that’s just crumbling and falling apart? How are we gonna host conferences on AI when the weather is so hostile that you can’t get there?

RUS: I guess the question becomes how radical the intervention of weather catastrophes is in the immediate future. People are thinking they might be able to accomplish AGI in the next 10-20 years or so. And we’re already dealing with all kinds of crappy weather and deaths and destruction. But to the visible eye, western civilization seems to roll on. People get in their cars and go to the store. Food is still being distributed.

So we do seem to be carrying on, and maybe we will do that for 10 or 20 years. If the people making the AGI and related robotics and so forth are able to manage to get to the lab and do their work, get in their cars and get enough food etc., then maybe they can accomplish what they hope to. I guess that’s the idea.

ÉT: It’s just not my area of expertise. But my sense is that, in terms of the LLMs that we have, there’s no obvious path from those systems like ChatGPT to genuine AGI or superintelligence.

RUS: A lot of people are saying that ChatGPT and the like are not much to brag about. Michio Kaku, who generally tends to be a hyper-optimistic tech promoter, called it a glorified tape recorder.

ÉT: I think it was Gary Marcus who was laughing about the rise and fall in prestige, if you will, of ChatGPT. It became a joke line during a Republican debate.

RUS: It happens so fast these days.

ÉT: Yeah. So I don’t think the Singularity is going to happen, probably. And I would put money on it not happening soon – not happening in 2045, like Kurzweil predicts.

What About the Humane Transhumanists, Singularitarians and AI Enthusiasts?

RUS: Let me ask you about the varying ideologies and ideals within transhumanism and its spin-offs. You’ve mentioned Ben Goertzel — the captain of the ship here at Mindplex — in various places as having a central role in the ‘bundle’ because of his longtime pursuit of AGI. And I know Ben to be a humanist, and more or less a liberal or even better. I know he doesn’t want to exploit or enslave or kill off the current people on earth but wants to try to lift everybody. So I know from experience that there’s a lot of philosophical variation within transhumanism.

I can remember when they asked me to create the magazine for Humanity+, I had my own assumptions based on earlier experiences with the Extropians. So I confessed to these guys at a meeting. I said, “I’ve got to tell you right up front that I’m not a libertarian. I’m a leftist with a libertarian streak.” And one of the guys said, “Yeah, me too.” And the other guy said, “I’m not even sure about the libertarian streak.”

Generally, around that time – around 2007 – I learned that a lot of the people engaged with that official transhumanist organization thought of themselves as liberals, sort of conventional mainstream liberals. And there’s a lot of variation within that world.  

ÉT: I recognize and affirm that. The notion of TESCREALism is supposed to capture the techno-utopian vision that came out of some of these movements, and to gesture at the worst aspects of that. I think they’re the ones that have become most influential now. So, take the democratic socialist James Hughes — he was somewhat influential. But, compared to someone like Bostrom, his influence is minor. And I absolutely recognize that there are transhumanists who are anti-libertarian. Some of them are Mormons.

RUS: Yeah… the Mormon transhumanists! They’re adorable. I think when you had people like Peter Thiel and Elon Musk attaching themselves to these ideas, they probably became the main attractors to transhumanism, or to the ideas of human enhancement associated with it. More people who would be attracted to their ideologies have been pulled in, particularly in the Silicon Valley culture. These ruling class scary monsters and super creeps became sort of the main widely-available public voice for those kinds of views. Then you had the neoreactionary movement and the dark enlightenment. Most of the people talking about those tended to embrace transhumanist tropes. That became the alt-right; it fed into the spread of right-wing extremism.

You can see how the idea of the glorious future – stand up straight and tall and shoot yourself up into the glorious future – could attract a certain type of fascist sensibility.

ÉT: That’s my impression also. Obviously there’s a fascinating history involving futurism and fascism. Maybe it does tend to attract a certain type of person or lends itself to being interpreted or exploited by fascists. TESCREALism captures that aspect.

Is LessWrong a Cult?

RUS: I remember being at a Singularity conference and being approached by someone involved in LessWrong. And it felt something like being approached by a cult. I wonder if you’ve run into any actual cult-like behavior in your studies – people gathering in communities and getting guns to defend themselves, or worshiping the leader, that sort of thing.

ÉT: There’s definitely that charismatic-leader aspect to rationalism. There are these LessWrong posts that are just lavishing praise on Yudkowsky. I remember seeing a list of one- or two-sentence statements about Yudkowsky. One of them was something about how “inside Eliezer Yudkowsky’s pineal gland is not an immortal soul, but another brain.” Another: “In the history of Western thinkers, there was Plato, Immanuel Kant, Eliezer Yudkowsky.”
(Laughter)
Someone who I won’t name told me that the Bay Area rationalist scene is a full-grown apocalypse cult.

I think EA (Effective Altruism) is sort of a cult. There was an article published by Carla Cremer recently. She talked about a secret competitive ranking system in which participants get points subtracted if they have IQs of less than 120.

RUS: Oh! I was thinking I might ask people engaged in transhumanism if they even believe in IQ as a legitimate measurement of intelligence. 

ÉT: I’d be really curious to know. Because I do think that IQ realism is pretty widespread within this community. Bostrom has written that IQ is good but imperfect. So they sort of lean towards IQ realism.

Does Anyone Call Themselves a TESCREAList?

RUS: You noted that Marc Andreessen has identified himself with this bundle that you co-created. Have others directly embraced the term as a positive identity, that you’re aware of?

ÉT: No, not really. Hanania acknowledges it in arguing that the ‘tech right’ is a better term. He said we were basically right about what the streams are, what the bundle is, but that ‘tech right’ is a better term. I’m not surprised that there aren’t more people coming out and saying they identify as TESCREAL.

RUS: Maybe after this runs on Mindplex there’ll be a bunch of people deciding that is who they are. Oh dear. Whatever have we wrought?

Eugenics

Credit: Tesfu Assefa

RUS: Let me try a thorny issue: eugenics. What about intervening at the germline to prevent horrific diseases and stuff like that? Do you think there can be a legitimate use for that sort of thing?

ÉT: Yes. I do think that could be permissible under certain circumstances. I mean, I have worries about the limits of what that technology will be used for. Will it be used just for what we would intuitively call therapeutic purposes? My main concern is that it could easily open the door to an ‘enhancement’ approach. As soon as you’re talking about enhancements, there are questions like, “What criteria are you using to judge whether some modification is better?” That’s where you get into the issue of ‘super-classes’, which Bostrom has written about.

A lot of that is probably ableist. What ‘enhancing’ means for somebody like Bostrom might be completely different than what I might mean. Right?

RUS:  I must confess I had a knee-jerk reaction the first time I heard the term ableism. People should be able. Generally, we should be in favor of abilities and not get into a place where people are worshiping their broken parts, so to speak. At the same time, people should have the right to choose how they want to be. But I’m uncomfortable with the idea that people would want to maintain what most people would consider a kind of brokenness. And I wonder: where’s the line for that?

ÉT: My sense is that words like ‘broken’ are normative terms. I think disability rights activists or disability scholars have a point when they say, “there’s an infinite number of things that I can’t do.” I can’t fly. The notion of disability, some would argue, is just a social construct. We live in a society that is unaccommodating for somebody who can’t do something that a statistical majority of other people can do. That’s what a ‘disability’ is. So maybe, if you fix society, the disability goes away even if the inability remains.

RUS: How would you think about physical problems that make life more difficult for an individual, or for the people around them? 

ÉT: There are two aspects to that. One is the inability, and the other is the society one lives in. So you can fix or eliminate a disability by improving society. And then there’s a good argument that a lot of the inabilities we classify as disabilities would not be seen as bad; they’re just different. There are people of different heights. There are people who can’t walk. My heart is filled with a lot of empathy for the disability scholars — some of whom are disabled themselves — who argue that they wouldn’t want to change, and that we shouldn’t aim for a world in which people like them no longer exist.

Techno Gloom

RUS: Do we need to worry about extreme forms of tech negativism? For example, the person who can’t walk on their own will rely on good technology to get around, and probably hope for even better technology. And there’s a real move towards extreme tech negativism now, clearly provoked partly by the sort of TESCREAList people that you’re bringing to the surface. I wonder if you’re a little worried that there might be an overreaction, a tech-reactionary kind of move that is actually harmful?

ÉT: Not so much, because I haven’t seen a lot of evidence – at least in my social media feed.

RUS: You don’t live near San Francisco…

ÉT: To put it simplistically, a lot of the people in my circle are for good technology, not bad technology. Maybe small-scale technologies, which doesn’t mean low-tech. So you can have advanced technologies that would enable somebody who can’t walk to get around. But without the megacorps, and all the control and all the risk that comes with that.

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter

About the Writer

RU Sirius


R.U. Sirius is the former copublisher and editor-in-chief of the 1990s cyberpunk magazine MONDO 2000 and author and coauthor of 11 books including Counterculture Through The Ages. Currently involved in a project building an immersive virtual environment in collaboration with PlayLa.bz around the theme of his song with MONDO Vanilli and Blag Dahlia titled "I'm Against NFTs".

14 Comments

  1. Beautiful!

  2. TESCREAL is a bad label for what Torres is talking about, but Torres is attacking the right people, and it covers the sources of the ideas they are using for ideological cover. Here is exhibit A: https://a16z.com/the-techno-optimist-manifesto/


    And here is a direct rebuttal without the distracting acronym: https://atomless.substack.com/p/adreessens-basilisk

  3. Émile Torres, a philosopher and historian who has focused recently on existential threats, developed what they refer to as a ‘bundle’ (we might call it a memeplex) that, they claim, links the above series of -isms into a sort-of singular force, one embraced by many of the super-wealthy and influential in the tech world.


    Torres explores speculative transhumanist ideas. TESCREALism prioritizes future posthumans. Statistics quantify experiences, possibly creating detachment.
  4. As an African, I find TESCREALism a mix of shock, excitement, sadness, a little bit of hope, and then confusion. What mostly gets on my nerves is that we Africans find ourselves in the backseat once again. 🙄

  5. Oddly...it seems like some folks feel like the future can be avoided, instead of embraced.


    It's so exciting to see how humanity is starting to embrace the chaos of the exponentially unfolding future!

    1. Respectfully, I must contend that your comment is shallow, and it requires a vigorous rebuttal. Surrendering to the future without critical examination is a naive aspiration.

      In an age marked by rapid technological advancements, it's paramount for humanity to remain vigilant about the unfolding future. Rather than blindly embracing the notion of progress, we must actively shape it to ensure fairness and safety for all. The future isn't an inevitability that we should merely accept like passive cattle; it's a collective responsibility to steer towards a horizon that benefits the many, not just the privileged few.

      A sobering example of the need for vigilance lies in the historical context of eugenics. In the early 20th century, eugenics movements emerged, with some advocating for the promotion of certain races while suppressing others. If we permit a select group with a hidden racist agenda to control the course of genetic advancements, they could easily manipulate these technologies into tools of oppression and control. Genetic engineering could be used to reinforce harmful stereotypes, create divisions in society, or even lead to the development of weapons designed to target specific genetic traits. It is our collective duty to ensure that the future of genetic science upholds principles of justice, equality, and respect for all, preventing the misuse of these powerful tools for destructive or discriminatory purposes.

      There's nothing 'odd' about challenging wrong ideas in a significant and extreme movement like Transhumanism. However, it's genuinely concerning to see some folks promoting a harmful future as something everyone should welcome!



  6. The creation and sustenance of a civilization require a collective effort from all its members. However, when a certain group of people or a class within this civilization becomes "modern," deluded with fame, power, and money, and ultimately dissatisfied with what they have, they undoubtedly lose the essence of humanity. This class poses a significant challenge to the progress and betterment of our human civilization, whether it's pre- or post-Singularity; it will only hinder it. Regrettably, the majority of individuals associated with TESCREALism appear to be heading down this perilous path, driven by ego and avarice.

    I have witnessed a horrifying transformation within TESCREALists: they are quickly losing their empathy. They can no longer understand or sense the perspectives, feelings, needs, or intentions of the 'common people'. Their loss of empathy is perilous because they no longer realize that their dreams are not shared by everyone, and they cannot even accept the fact that they are by no means our appointed leaders who determine the future of our race! They don't feel remorseful about any shit, and they now act in selfish and even vindictive ways, detached from the harm it inflicts on the rest of us. They have dehumanized us simply because we don't believe in their expedient ivory towers. If left unchecked, our world will collapse into a terrifying abyss where violence and dehumanization prevail.

    All I see in this group, especially in the last 5 years, is an excessive emphasis on technology. Placing so much importance on technology as the solution to all problems is truly absurd and borderline insane. They deny the present and, as if that's not enough, they urge others to also disregard pressing current issues and give them all the money so they can solve future problems. I am very sure, quite sure, we are at risk of elitism, and it feels like we are losing control as the majority, with decisions that shape society being made by a small group of TESCREALists. How can this be sane when it encourages suffering in the present for the sake of future benefits?

  7. Blimey, this is a proper mind-boggler if you're a newbie to it. As for meself, I've 'ad it up to 'ere with this balderdash. They claim they are on the move, so they ask for dosh to take flight, and once we fork it over, they just vanish with it!

    Well, let me tell you, the Silicon Valley culture, mate, it's gone a bit pear-shaped, innit? It's all flashy gadgets and empty promises, like a dodgy bloke selling you a busted mobile phone down the local market. These tech bigwigs are all show, no substance, I reckon. They talk about ethics and transparency, but when it comes to it, they're as trustworthy as a snake in a tracksuit.

    And don't get me started on their so-called innovations, mate. They make a big fuss about the next big thing, but half the time, it's just smoke and mirrors. Take that Elizabeth Holmes business, for instance – promising the moon with her blood-testing gizmo, turns out it's all a load of codswallop. It's high time they put their money where their mouth is and started making tech that actually benefits the average Joe and the economy. Right now, it's all hat and no cattle, if you ask me.

  8. This is a well-researched but at the same time disturbing article.

  9. Oh, I also forgot to spill the beans: that motherf***er penned an apology letter when Émile Torres exposed his dirty laundry. I'm gonna drop the link to that letter here too. But man, the following statement from his letter has really pissed me off. Right now, I'm kickin' back, crankin' up some Lightnin' Hopkins - Bring Me My Shotgun.

    For crying out loud, take a look for yourself and tell me what you make of it. Is it a straight-up apology or just a bunch of trash talk? Am I losing it 'cause I'm a Black American? How in the world ain't this the hottest topic of discussion among the Ethics of Tech crew?


    The best thing would probably have been for all of those words to have been buried and forgotten. Good ideas have been taken up in other contexts, further developed, and written up in more readable formats; and the no-good stuff deserves to rest in the great ash heap of history. The mailinglist served a useful function at the time, as a place where people from around the world could have freewheeling conversations about wild ideas. Nowadays the Internet offers many other forums that better serve those functions.


    After this, he straight-up said he ain't a racist, not then, not now. He also made it clear he ain't down with no eugenics.

    Anyway, here's the link to that letter, folks. https://nickbostrom.com/oldemail.pdf

  10. Techno Gloom

    I think I am going to be negative now.

    Hmmm, after reading the article and then some of the comments here, I thought this is just another example of the moral decadence among the Silicon Valley rich guys. Then I started to read it again, this time clicking the external links. I am so infuriated and shocked to discover that the leading TESCREAList, the man who is praised as a rational and visionary thinker, is a rotten racist motherf***er! Sorry, Mom, you didn't raise me to use such foul language, but this dude deserves it!

    Here is a word-for-word quote from Émile's other article mentioned here, and I have also added the link to the original article.


    "Sometime last year, I happened to come across an email from 1996, written by a 23-year-old graduate student at the London School of Economics named “Niklas Bostrom.” Upon reading it, my jaw dropped to the floor, where it stayed for the rest of the day.

    Here’s part of what Bostrom, now known as “Nick Bostrom,” an Oxford University philosopher who’s been profiled by The New Yorker and become highly influential in Silicon Valley, sent to the listserv of “Extropians”:


    Blacks are more stupid than whites.

    I like that sentence and think it is true. But recently I have begun to believe that I won’t have much success with most people if I speak like that. They would think that I were [sic] a “racist”: that I disliked black people and thought that it is fair if blacks are treated badly. I don’t. It’s just that based on what I have read, I think it is probable that black people have a lower average IQ than mankind in general, and I think that IQ is highly correlated with what we normally mean by “smart” and stupid” [sic]. I may be wrong about the facts, but that is what the sentence means for me. For most people, however, the sentence seems to be synonymous with:

    I hate those bloody [the N-word, included in Bostrom’s original email, has been redacted]!!!!


    Here is the link to the original article referred to: https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/


  11. The notion of existential risk, I think, flipped that around and foregrounded the lost potential: the argument became that the worst aspect of human extinction is the lost potential. It’s not the 8 billion people who are going to die. That’s very bad, but the badness of all the lost potential is orders of magnitude larger.


    On December 9, 2013, I published an article on H+ Media titled 'An Appeal to Transhumanism on the Question of Technological Inequality in Africa'. Back then, I was only in my mid-20s, and naively, I assumed there was someone to appeal to. Again, back then, the definition of 'existential threat' was not revised, or if it was, I wasn't aware of it. 

    Anyways, the central point of that article is that radical technology and the people involved in it are ignoring the vast majority of humans and our day-to-day problems. I tried to 'appeal' to these people that when we solve tomorrow's problems, it shouldn't be at the expense of today's potential. Enhancing humanity can start with practical tech that can be applied today to help all of us improve our lives as we know it on earth as she exists today. Alas, fast forward a decade, and I am now part of the very same existential threat I fought against. Was it Batman or Harvey Dent that said, "you either die a hero or live long enough to see yourself become the villain"? 

    In this interview, what Émile Torres asserts is really disturbing. If the TESCREALism camp has redefined the meaning of existential risk as "the loss of potential" and not as the loss of the lives of the 8 billion humans, then we are heading toward our extinction. A definite and most certain tech-caused extinction.

    I was one of those people who always had a relaxed view about man-made existential threats; this is just another exaggerated doom-and-gloom cry, I used to tell myself. Luckily, I was right hitherto. And my approach worked so far because the majority of the agents labeled as 'those most likely to cause destruction' don't necessarily do what they do with deliberate intention, and the ones that do are fanatics, a minority, and unappealing to the mainstream. Today, this is at risk because the agents, the players, that are feared to cause global annihilation via 'technological progress,' have changed the definition of annihilation. This is disturbing and so 1984.

    If the TESCREALism camp starts to define existential threat as the inability to secure the coming Tech Singularity and a future for 'intelligent sentients' and their civilization, then my worst fear is coming true. The probability of a near future where we, the common humans, retaliate with a preemptive strike – starting the 'war' often discussed by the likes of Hugo de Garis in his "Artilect War" and in Zoltan Istvan's infamous book 'The Transhumanist Wager' – is gaining momentum too quickly. Even the likes of Eliezer Yudkowsky, with their banzai calls to bomb data centers and AI research centers to stop the rise of AGI, will appear alluring. When I discussed this stuff a decade ago, someone would often say, "Well, the pro-tech side will most likely win because they can apply tech." Now, I am afraid that someone with the means is going to say, 'Well, a preemptive strike will level the field, and the anti-tech side will have the advantage if they act before it is too late'.

    In the good old days of the 2010s, Zoltan admitted that his tech champion, his protagonist character, Jethro Knights, had gone too far! Now, it is disturbing to see that Jethro has changed his definition of 'existential threat' so no one will say he has gone too far.

    I am not going to dive into the folly of this new definition from a logical perspective because this is equivalent to saying that in the event of an alien invasion, we should not fight back even if the aliens declare total annihilation of the human race as long as the aliens added a statement that they have accepted to integrate our culture into theirs and also promise to apply some human DNA in their future where it is applicable.

    Rather, I want to focus on the dangerous modern deconstructionism here, that damn Derrida! From ancient Egypt to present-day USA, when powerful groups started to change the meaning of words, it only means one thing: distortion of reality. An 'existential risk' means a threat to a species' survival in its biological form. Here, I am not even using the narrow, human-centric definition, which could simply mean a threat to the survival of human beings. In its old and correct meaning, existential risk means a threat or a risk that can potentially annihilate an existing species. Follow me a little bit longer and see how this 'modern deconstructionism' is the cause of our soon-to-come demise if we continue on this messed-up path.

    For Y to be an existential risk to X, X has to EXIST! In the case of TESCREALism, the first folly is that Y (whatever Y they claim) is considered an existential risk for an X that doesn't exist today. The second folly, and the most dangerous part, is that while Y could be an actual existential risk to X (Y as the current radical tech and the direction it is heading, and X as the future of the actual humans existing today), they are undermining the existential risk by cutting X out of the equation and introducing a new factor, Z: the potential for Technological Singularity, which will guarantee the survival of sentient beings, which may or may not be humans.

    This then means: 1) they can do Y because it no longer counts as 'an existential risk' (radical tech as Y, and present-day and future biological humans as X). In other words, the moral obligation is buried six feet under as long as the meaning of Existential Threat is no longer related to X. 2) They can even endanger X, because preserving X can also be a Y for Z (delaying or restricting radical tech to save biological humans can be an existential threat for the post-Singularity Super AI, Posthumans, Cyborgs, Uploaded Minds, or whatever fancy intelligent sentient forms are anointed to inherit the planet post-Singularity). This means that doing Y will, at some point, be the moral obligation, as long as it guarantees the safety of Z!

    I am not bringing a slippery slope argument here: with the new definition (I hope Émile is wrong), building a tech that can potentially wipe out every living thing just to generate enough energy for a superintelligent computer which, in return, can upload intelligent beings' memories on a server capable of running forever is a benevolent and visionary good, while trying to stop such tech to save humans and the other living things is an utter theocratic evil! Yes, for me, this is not a trivial "hey dude, thoughts and understandings evolve along with civilizations, so words and their meanings also change" scenario. For me, this is a dangerous deception!

    Existential threat is not a risk for some potential; it is a risk for something that actually exists.




  12. First and foremost, it is quite challenging to adequately cover all of the points made in this article in my comment, which admittedly is lousy and a bit brief compared to the size of Sir R.U. Sirius and his guest's discussion. Nonetheless, I will attempt to convey the two core ideas that resonated with me most from this thought-provoking interview.


    TESCREALism, as discussed in the interview, is filled with various "isms" that aim to bridge the gap between technology and our human experience. However, unfortunately, it does not include or closely resemble humanism. Rather, it feels like a business venture. A venture which will only benefit the leading figures, and sadly, it is not yet a fully developed industry, as there are no practical and concrete applications with tangible results. In other words, it remains primarily hype, and hype is not a unique attribute of TESCREALism; it is also prevalent in the AI/Blockchain/Web3 community.


    Humanity, it seems, is accustomed to a deus ex machina approach – we are conditioned to expect someone or something to swoop in and save the day. For proponents of AI, it is the hope of the emergence of "superintelligent AI"; for the transhumanist crowd, it is the promise of the "post-human". For the blockchain dudes, it is that "single supercoin that can govern all transactions fairly, equally, and transparently". In the Web3 camp, it is the tool that can "harness the internet for decentralization, privacy, and the empowerment of the public". At least the latter two seek to give more power to those who have been "historically marginalized", in this case, the common people. The first two, which in my opinion fit the TESCREALism camp well, are all about the elite individual: we do because we can!


    The central problem with this approach is that it is not grounded in the present; the proponents of these ideas cannot test their proposed solutions, and therefore, their arguments often stem from their own hubris. They abhor the concept of God, yet they worship their own creations: what is an all-knowing AGI or a powerful cryogenic chamber? Aren't these their gods? They claim humanity is not good enough to inherit tomorrow (the revised definition of existential risk), yet they strive to secure their own position in the future in one form or another. They are in denial of today, but they dream of immortality. They are sick of the present financial system, yet they milk it to its last dime like leeches, born of the Hindu goddess Durga, with 80 restless hands, so they can fund their own objectives.


    The prominent idea of TESCREALism and TESCREAList technology, which has become super popular in the last ten years, is often associated with the concept of "transcending" human limitations, including intellectual, physical, mental, and emotional abilities. However, it is very funny when you see that there is no attempt (unbiased and human-centric) to expand our understanding of what it means to be human. How can you transcend something when you do not know it? It is just like the current dilemma in the transgender camp: they do not define what a woman is, or they condemn those who define it, and yet they tell us transwomen exist! 🤪🤪


    For me, the Achilles' heel of TESCREALism is its focus, which is the elite individual. It has little or zero regard for the larger social and environmental contexts in which we live. Furthermore, it has little regard for reality and practicality too; it is plagued with overoptimism bias. The future is nothing if the present is not addressed! AIs, post-humans, cryogenics, ethical eugenics, or any shit from this camp will only create a "two-tier" society in which some people have access to technological enhancements while others do not. That is at best, and the worst shit is it might be clearing the path to a nasty dystopian future, if not extinction.


    The whole thing reminds me of Father Zossima (if my memory serves me well, Father Zossima is also quoting an unnamed doctor), a beloved character from Fyodor Dostoevsky's "The Brothers Karamazov," arguably the number one novel from the West. Oh, sorry, I often forget that for you guys (white people in the West), Russia is not part of Western civilization or even white. Okay, so I will say it's the best work of literature in the world where people have white skin.


    "The more I love humanity in general, the less I love man in particular."


    Now, this is the mother of all HUBRIS! It is the archetypal hubris. Thank God we have Dostoevsky! TESCREALism is tirelessly trying to prove itself as the savior of humanity (mostly of the future, but sometimes even that of today). The leaders in TESCREALism profess an elevated love for all humankind, and other life forms on this planet, but on the ground and in their day-to-day actions, they have unequivocally failed to demonstrate (or even genuinely grasp) goodwill toward their neighbor, the particular man! But this is just the sad part; the dangerous part is that when they love this neighbor, it is the best man, the elite!

  13. Interesting one...


    It is unclear to me how many SV tech bro transhumanists are serious about longtermism, vs. just leveraging it mostly as an excuse to justify simple greed and selfishness and sweep aside annoying current moral complexities and tradeoffs...

    I think Bostrom is intellectually serious about it, and Yudkowsky too, but the SBFs and Thiels etc. ... hmm.... Perhaps they mostly find it a convenient ideological cover for what they feel like doing anyway...

    If you think about the difference btw kindness to hypothetical future beings vs. kindness to current beings, it's partly that the latter is an I-Thou relationship and has potential to change the self via engagement with the recipient of your kindness. Whereas kindness to potential future beings does not connect you with them concretely and thus brings much less potential for self-transformation and psychological development...

    An inclination toward kindness to far-future hypothetical beings vs. currently present beings would seem related to a diminished capacity for deep 2-way I-Thou relationships... which indeed seems characteristic of a certain percentage of SV tech bros...

    I would ask them: can they estimate the odds that having AGI emerge from an "uncompassionate to present companions" predecessor system/situation will poison the mind of the AGI and cause it to grow up as an ASI-hole unable to have I-Thou relationships? I mean, it's a reasonable hypothesis that an AGI/ASI seeded with concrete I-Thou 2-way kindness will lead to a growing system continuing that sort of kindness... and an AGI system seeded from a situation of "uncompassionate callousness toward current suffering" will lead to a growing system incapable of real sharing and I-Thou compassion...

    Certainly this hypothesis cannot be proven definitively, but nor really can it be proved that sacrificing current lives has any particular nontrivial odds of saving future trillions...

    When dealing with massive uncertainties like those associated with post-Singularity potentials, it's very easy, obviously, to place more weight on those futuristic hypotheses that suit one's current beliefs or convenience...

    Anyway, Émile seems to sort of blame longtermism for tech bro callousness, whereas my view is more that tech bro callousness is there for plain old commonplace humans-suck reasons and has grabbed onto longtermism as a way to justify itself and make itself sound fancy.... Can we find an example of someone who started out doing good works in practice, then adopted longtermism and started wreaking current evil? I don't know of such an example... but it would be interesting if so...

