
Good Shepherds

The intelligence of machines has exceeded our own to the extent that programmers accept their decision-making on blind faith. Does that make AI our new god?
DISCUSSED

Deep Learning, Dataism, John Calvin, The Book of Job, God’s Unfathomable Will, Ivan Karamazov, Erich Fromm, Non-Euclidean Geometry, Predictive Policing, The Trial, Ted Kaczynski, Ray Kurzweil, Life as an Irreducible Mystery

Meghan O’Gieblyn

Science was supposed to have banished God, but he keeps turning up in our latest technologies. He is the ghost lurking in our data sets, the cockroach hiding beneath the particle accelerator. He briefly appeared three years ago in Seoul, on the sixth floor of the Four Seasons Hotel, where hundreds of people had gathered to watch Lee Sedol, one of the world’s leading go champions, play against AlphaGo, an algorithm created by Google’s DeepMind. Go is an ancient Chinese board game that is exponentially more complex than chess; the number of possible board positions exceeds the number of atoms in the observable universe. Midway through the match, AlphaGo made a move so bizarre that everyone in the room concluded it was a mistake. “It’s not a human move,” said one former champion. “I’ve never seen a human play this move.” Even AlphaGo’s creators could not explain the algorithm’s choice. But it proved decisive. The computer won that game, then the next, claiming victory over Lee in the best-of-five match.

The deep-learning system behind AlphaGo represents something entirely new in the history of computing. Machines have long outsmarted us—thinking faster than us, performing functions we cannot—but not until now have they surpassed our understanding. Unlike Deep Blue, the computer that beat Garry Kasparov at chess in 1997, AlphaGo was not programmed with rules. It learned how to play go by studying hundreds of thousands of real-life matches, then evolved its own strategic models. Essentially, it programmed itself. Over the past few years, deep learning has become the most effective way to process raw data for predictive outcomes. Facebook uses it to recognize faces in photos; the CIA uses it to anticipate social unrest. These algorithms can now predict the onset of cancer better than human doctors can, and can detect financial fraud more accurately than professional auditors. When self-driving cars take over the streets, these algorithms will decide whose life to privilege in an accident. But such precision comes at the price of transparency: the algorithms are black boxes. They process data on a scale so vast, and evolve models of the world so complex, that no one, including their creators, can decipher how they reach their conclusions.

Deep-learning systems comprise neural networks, circuits of artificial nodes that are loosely modeled on the human brain, and so one might expect that their reasoning would mimic our own. But AI minds differ radically from those evolved by nature. When the algorithms are taught to play video games, they invent ways to cheat that don’t occur to humans: exploiting bugs in the code that allow them to rack up points, for example, or goading their opponents into committing suicide. When Facebook...
