Will Humanity Face Extinction?

[Image: Earth (NASA)]

Nick Bostrom has one of the more interesting — and depressing — jobs that I’ve read about recently. As director of Oxford’s Future of Humanity Institute, Bostrom spends his days thinking about humanity and its fate in the coming centuries, millennia, and beyond. In other words, he tries to think of ways that human beings could become extinct in the near and distant future.

The mass extinction of species is nothing new in Earth's history. The vast majority of species that have ever lived on our planet have gone the way of the dodo thanks to environmental changes, tectonic activity within the Earth itself, the impacts of massive asteroids, and the like. In the 21st century, we're no more protected from such things than the dinosaurs were. Earlier this year, scientists told the U.S. Senate that, without a few years of advance warning, there is no way that humanity could stop an asteroid on a collision course with Earth — and if that asteroid were a kilometer or more in diameter, it would likely mean the end of human civilization.

However, Bostrom doesn't focus on "traditional" extinction scenarios like asteroid impacts or climate change. Rather, his job is to search out scenarios for which there is no precedent, scenarios that, more often than not, arise out of our own ingenuity and technological development. You might take that to mean nuclear war or a biological disaster like an engineered plague. However, we have already experienced those disasters as a species in some capacity. At the top of Bostrom's watch-list is something for which we have no precedent whatsoever, something that puts us in completely unknown existential territory: artificial intelligence.

In Ross Andersen's recent Aeon Magazine profile of Bostrom, the possible risks of artificial intelligence are laid out as follows, first in Andersen's own words and then in those of Daniel Dewey, a research fellow at the Institute:

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

“The basic problem is that the strong realisation of most motivations is incompatible with human existence,” Dewey told me. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”

Such a scenario may seem outlandish, more the domain of sci-fi à la the Terminator franchise or Daniel H. Wilson's Robopocalypse than a plausible existential threat. But that's sort of the point: we humans are terribly bad at prediction because we don't take the long-term view. We're more of an immediate gratification kind of species. As Andersen puts it:

The idea that we might have moral obligations to the humans of the far future is a difficult one to process. After all, we humans are seasonal creatures, not stewards of deep time. The brevity of our lives colours our intuitions about value, and limits our moral vision. We can imagine futures for our children and grandchildren. We participate in their joys and weep for their hardships. We see that some glimmer of our fleeting lives survives on in them. But our distant descendants are opaque to us. We strain to see them, but they look alien across the abyss of time, transformed by the passage of so many millennia.

However, I must confess that when I read about a scenario like mankind’s subjugation by artificial intelligence — or any other distant existential threat — there’s another reason for my ambivalence that has nothing to do with how farfetched or far-off it may seem. I don’t get too worked up because I simply assume Christ is going to return before anything like that could happen.

My intent here is not to get into an eschatological debate about raptures, dispensations, and whatnot. However, I would hazard a guess that most Christians, regardless of their chosen eschatological framework, would agree with the basic notion that Christ will return before things get really bad (i.e., humanity is wiped out). This belief can provide us with a measure of comfort during times of fear and anxiety, be they war, economic depression, or disaster. As bad as things are, we'll be rescued before things get worse. We simply need to persevere. Such an attitude can be summed up by the words of that old-timey spiritual:

This world is not my home, I’m just passing through.
My treasures are laid up somewhere beyond the blue.
The angels beckon me from Heaven’s open door
And I can’t feel at home in this world anymore.

And yet, how does our living out of such a belief appear to our skeptical, unbelieving friends and neighbors? The 18th-century philosopher Jean-Jacques Rousseau described Christianity thus:

Christianity is an entirely spiritual religion, concerned solely with heavenly things; the Christian’s country is not of this world. He does his duty, it is true; but he does it with a profound indifference as to the good or ill success of his endeavors. Provided that he has nothing to reproach himself with, it matters little to him whether all goes well or ill here below.

Is this an accurate description of how we Christians live out our lives in the here and now? Are we, as the adage goes, so heavenly minded that we're of no earthly good? Life is fraught with uncertainty and anxiety. In our present age alone — never mind some possible AI-dominated future in the distant centuries-to-come — we face global economic instability, unending war and violence, political gridlock, and gross social inequality. These are not abstract concepts; they are forces and factors that can have huge impacts on our lives and the lives of our neighbors.

Does our confidence in the Lord's promises have the unintended side-effect of communicating callousness, disaffection, and ambivalence to those around us who don't believe? Do they see us as Rousseau did, as people who are "concerned solely with heavenly things"? Or does our confidence in the Lord's promises encourage us to engage positively and proactively with this world (2 Corinthians 1:3–4), precisely because we believe that the Lord is in control of history, that nothing happens outside of His purview? Not even the possible rise of our AI overlords?

This entry was originally published on Christ and Pop Culture.
