Artificial Intelligence To Spark Apocalypse Now? Only In Our Fevered Dreams
31.12.2023 - 01:52
/ tech.hindustantimes.com
“I mean, sometimes you get these like, late civilization vibes,” said Elon Musk, Tesla's chief executive officer, at a recent event for the Cybertruck, his piece of absurdist automotive art. “The apocalypse could come along at any moment. And here at Tesla, we have the finest in apocalypse technology.” There's a lot of this end-of-days talk around right now. Even before the Covid-19 pandemic, there were stories about Silicon Valley billionaires prepping for Armageddon by purchasing bunkers in New Zealand. But this year I've been hearing and reading more and more of it, especially linked to artificial intelligence.
I find it more fascinating than troubling, because I see eschatological obsessions as social phenomena, not rational analyses of where we're headed. Still, such thinking can be dangerous when exploited by political opportunists. At the very least, it's a wasteful distraction from addressable problems right in front of us. The real question we should be concerned with is why cataclysmic prophets sometimes attract big followings. Understanding this can help us avoid the paths they may lead us down.
The big Doomsday theme this year has been the existential risk from rapidly evolving AI technology. In 2023, everyone seemed to be experimenting with ChatGPT and other sophisticated large language models, feeding anxiety not only about how these tools might destroy jobs, but also about how AI was inching toward sentience and might some day kill us all.
Many of the venture capitalists and engineers behind this technology are adherents of the effective altruism movement and overlapping philosophies concerned about the future of humanity. They're not all apocalyptically inclined, but many are: One famous effective altruist, the FTX founder Sam Bankman-Fried, had been hatching a plan to buy the Micronesian island of Nauru, where he would build a bunker large enough to ensure the survival of most of the group. That was before the collapse of the cryptocurrency trading company and his fraud conviction.
The debate over AI safety was behind the chaotic removal and reinstatement of Sam Altman as chief executive officer of OpenAI, the company behind ChatGPT, in November. Dig deeper and you'll enter an amazing rabbit hole of schisms and rivalries, which in OpenAI's case boils down to differences between two groups, the “accelerationists” and “longtermists,” according to Emile P. Torres, a former EA member and chronicler of the factions.
The accelerationists don't buy the existential threat. Marc Andreessen, co-founder of VC firm Andreessen Horowitz and self-professed accelerationist, published a long techno-optimist manifesto in October saying he didn't believe in utopia or apocalypse. Instead, accelerationists