In the year 2000, Sun Microsystems cofounder Bill Joy wrote an essay for Wired provocatively titled “Why the Future Doesn’t Need Us.” Despite being published April 1, it was no joke; and despite its 11,000-word length, it would go viral. The reason was encapsulated in its subhead. To wit:
“Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species.”
Now, almost a quarter of the way through the 21st century, warnings that such technology, artificial intelligence (AI) to be precise, could spell man’s doom are everywhere. A prime example is a May 30 statement from the Center for AI Safety, a research organization, which reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
But here’s the kicker:
According to the website Salon, some signatories to the above statement actually welcome Homo sapiens’ extinction.
Oh, it’s not that they’re like serial killer Carl Panzram, who once reportedly said, “I wish all mankind had one neck so I could choke it!” (though one or two might be). Rather, some tech figures define “extinction” differently than we do, and they fear only the “wrong” kind.
To grasp this, understand that at issue is a movement (“religion” may be a better description) that has been called TESCREALism and which involves a group of ideologies termed the “TESCREAL bundle.” As Salon explains:
The term is admittedly clunky, but the concept couldn’t be more important, because this bundle of overlapping movements and ideologies has become hugely influential among the tech elite. And since society is being shaped in profound ways by the unilateral decisions of these unelected oligarchs, the bundle is thus having a huge impact on the world more generally.
The acronym stands for “transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism.” That’s a mouthful, but the essence of TESCREALism — meaning the worldview that arises from this bundle — is simple enough: at its heart is a techno-utopian vision of the future in which we re-engineer humanity, colonize space, plunder the cosmos, and establish a sprawling intergalactic civilization full of trillions and trillions of “happy” people, nearly all of them “living” inside enormous computer simulations. In the process, all our problems will be solved, and eternal life will become a real possibility.
Of course, the promise of “eternal life” sounds quintessentially religious, and it’s clear why secular people, who don’t believe in an afterlife, would find it appealing. The TESCREALists are also “just” the latest in a long line of utopians. They’re not like Robert Owen, however, whose socialist-like endeavors failed in just one small early-19th-century commune; or atheist “reverend” Jim Jones, who orchestrated one late-’70s mass suicide; or even the Soviets, who directly controlled only one country. These are, as Salon stated, tech oligarchs who are shaping what will likely control our future, in whole or in part: AI. In fact, Salon claims that the aforementioned Center for AI Safety gets 90 percent of its funding from the TESCREAList community.
The point here, naysayers take note, is not whether you think these techno-utopians could actually pull off their grand, fantastic vision; it’s that with the technological power they’ll birth, their failure could be as destructive as their success.
This is all the more staggering when considering what they deem “success”: creating a superior “posthuman” race (a new species) that supplants us on Earth and beyond. And they have expressed this aim openly. For example, relates Salon, “As the TESCREAList Toby Ord writes in his 2020 book ‘The Precipice,’ ‘forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential,’ adding that ‘rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today.’”
Likewise, Swedish philosopher Nick Bostrom “asserts that ‘the permanent foreclosure of any possibility of … transformative change of human biological nature may itself constitute an existential catastrophe,’” Salon also reports.
This is the uber-high-tech version of early 20th-century eugenics, out of which “transhumanism” (the desire to transcend humanity), not surprisingly, grew. And this brings us to the TESCREALists’ conception of “good” and “bad” human extinction. Salon lists three kinds relevant here:
- Terminal extinction: This is simply your grandfather’s extinction, where all humans disappear (whether or not we leave successors).
- Final extinction: All humans disappear without leaving a successor.
- Normative extinction: Humans disappear while leaving successors that lack an “important” quality, such as consciousness.
It is only the last two that the TESCREALists fear. And people who believe machines could never become conscious should note that, in theory, a competing intelligent entity need not possess consciousness in order to supplant us; our replacement by such an entity would simply be an instance of normative extinction.
So what form could such a successor take? Salon presents the example of using genetic engineering to fundamentally alter humans, who then become posthuman and integrate technology into their bodies (e.g., connecting their brains to the Internet via interfaces); they might also embrace “life-extension” technologies that could yield immortality.
Whatever form the “new man (machine?)” could take, TESCREALists believe “they have a responsibility to usher this new form of intelligence into the world,” as journalist Ezra Klein recently put it. In other words, they play God with the approval of their own consciences.
And that they pursue their aims without believing in God may be what’s most scary. In fact, they use terms such as “good” and “better” — as in the “common good” and a “better world” — without first precisely defining what “good” is or recognizing that there can be no such thing as objective good without the divine. It’s reminiscent of when G.K. Chesterton, critiquing this relativist confusion in his 1905 book Heretics, wrote of how moderns would emphasize the “need” for education, progress, and liberty without first “settling what is good.” Likewise, the type of people who today would respond to unwelcome moral correction with “Who’s to say what the ‘truth’ is?” or “Don’t impose your values on me!” are now claiming they can create AI that will have just the “right” values.
This is truly frightening because, after all, how can any type of advanced artificial intelligence ever be controlled by natural stupidity?