Existential Threats

The most frequent question I have been asked since What Comes After Homo Sapiens? was published is not about our successor species but rather about our current one, Homo sapiens. How long will we survive, and what threatens us?

Some of this curiosity (or anxiety) has been fanned by the open controversy between Elon Musk and Mark Zuckerberg about the existential threat of artificial intelligence (AI). Here are some of the past headlines:

“Elon Musk and Mark Zuckerberg clash over risks of artificial intelligence” (CBS News, July 26 2017)

“Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'” (NPR, July 18 2017)

“Elon Musk thinks artificial intelligence could cause World War III” (Fox News, Sept 5 2017)

“Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'” (CNBC Make It, July 24 2017)

“Musk: Artificial intelligence safety carries 'more risk' than North Korea” (USA Today, Aug 14 2017)

“Elon Musk and AI experts urge U.N. to ban artificial intelligence in weapons” (LA Times, Aug 21 2017)

“Mark Zuckerberg Argues Against Elon Musk’s View of Artificial Intelligence… Again” (Fortune, July 26 2017)

“Elon Musk says Mark Zuckerberg's understanding of AI is 'limited'” (CNN Tech, July 25 2017)

“Musk and Zuckerberg bicker over the future of AI” (Engadget, July 25 2017)

So who is right? Only time will tell, of course, but since my books are dedicated to science-based speculative prediction, I am not one to shy away from judgment. This quote from What Comes After Homo Sapiens? suggests that I favored Musk:

      I defer to greater brains than my own who are telling us that artificial intelligence will be the end of Homo sapiens or any other Homo that follows: Bill Joy, Stephen Hawking, Vernor Vinge, Shane Legg, Stuart Russell, Max Tegmark, Nick Bostrom, James Barrat, Michael Anissimov, Elon Musk, and Irving Good. Brilliant minds, Nobel Prize winners, renowned inventors, and IT pioneers – all giving us warnings!

Since writing that book, however, my views have shifted toward Zuckerberg's. I don't think artificial superintelligence (ASI) will be achieved, and even if some aspects of AI get out of control, I believe they will be huge nuisances rather than existential threats.

Of course, there are other existential threats to Homo sapiens, as discussed in chapter 10 of What Comes After Homo Sapiens?, “Catastrophe.” Another bolide impact like the one that doomed the dinosaurs 66 million years ago could do it, or a supervolcano, or even nuclear holocaust.

What about genetic engineering, discussed in both books? Homo nouveau, as I describe it, is certainly the result of genetic engineering (aided by AI), and there are many scenarios other than the one I describe that could lead to future speciation of Homo sapiens through genetic engineering. Are these existential threats?

Lee Silver, in his book Remaking Eden, suggests a possible path to an existential threat through genetic engineering. Again to quote from What Comes After Homo Sapiens?:

            He envisions a future society practicing an extreme form of behavioral isolation based on genetic engineering. In this society, only a small portion of the population, which he calls the GenRich, have the financial means to genetically enhance their children. Using a process he calls reprogenetics, the GenRich have used genetic engineering techniques developed over decades that allow them to optimize a wide variety of human traits (including intelligence, athletic skill, physical appearance, creativity and many others) to put the GenRich in a controlling position throughout society.

            Over time, the wealth and cultural disparity between this GenRich minority population and the remaining “naturals” have become so great that there is little financial mobility or voluntary interbreeding between the two groups. Such a scenario could lead to cladogenesis by some chance genetic development of a postzygotic reproductive barrier.

Once speciation occurs, the long-term results are unpredictable. Homo nouveau, like the GenRich, is probably not an existential threat, at least in the early centuries or millennia. I say “probably not” only because there is a lot of uncertainty about what will happen when two human species coexist. Certainly things didn’t work out very well for the Neanderthals after Homo sapiens arrived. In fact, the same is true for Homo heidelbergensis, Homo erectus, Homo denisova, and every other Homo species that may have coexisted with Homo sapiens.
