Are We Endangered Yet? Artificial Intelligence and the Human Species
Five years ago, Google cofounder Sergey Brin said, “You should presume that someday we will be able to make machines that can reason, think, and do things better than we can.” That someday had already arrived in 2014, when this statement was made, and it is not standing pat. Artificial intelligence, or AI, had beaten the best chess players in the world and went on to top the grandmaster of Go, a uniquely complicated and intuitive game, playing a move so unorthodox that its opponent left the table, flustered. Fifty moves later the computer won. It had created its own strategy.
More recently, two Facebook chatbots were programmed to conduct negotiations over small consumer items (hats, baseball bats), each seeking the best bargain, and soon began using a language of their own. Facebook shut them down. About the same time, Google developed a translation tool between English and other languages and found, likewise, that its program had devised an approach that bypassed English altogether. Google liked the result, and kept it going. Both the Facebook and the Google dialects had become incomprehensible to humans. What could possibly go wrong here?
Perhaps the most user-friendly take on this question is a TED Talk by Nick Bostrom, a philosopher and technologist, titled “What Happens When Our Computers Get Smarter Than We Are?” You can find it easily on YouTube, and it is worth watching. Bostrom posits that machines can preserve our humanity, or they may just leave us behind.
He begins with the speed of change. If, as he imagines, our species appeared on Earth only one year ago, then the industrial era began within the last two seconds and the first computers appeared in the latest blink of an eye. This progress was driven largely by human intelligence. The human driver is now challenged by computer “superintelligence” that learns, and then decides, on its own. When Bostrom asked two groups of AI experts at what point there would be a 50 percent probability that computers could perform “almost any job as well or better than humans can do,” the median answer was between 2040 and 2050. We do not yet know, of course, but surprises come daily.
The reasons are simple physics. While our biological neurons fire about 200 times a second, computer signals can travel at nearly the speed of light. Our brains, furthermore, are confined to a single small cranium, while computers can be housed in warehouses, in city blocks of buildings. All of this potential lies dormant, waiting to be tapped. When it is, we will see a superintelligence explosion, and our own fate may depend on what it does. “Think about it,” Bostrom proposes, “machines will be better at inventing than we are, and doing it on digital time scales.” At which point the possibilities are unfathomably large.
For Bostrom, this raises two concerns. One is that if we created a “really powerful optimization process” for this superintelligence in order to obtain X, “you better be sure that X includes all we care about.” King Midas asked for the golden touch, and got it. He put his hand on his daughter and she turned to gold, and then his food became cold metal under his touch. (Query: The point is well taken, but is programming “all we care about” even possible?)
A second concern is that, once engaged, there may be no off-ramp. We could of course program one, but a yet more intelligent machine could find a work-around by hook or crook. Hackers and entire countries will try. Sooner or later, he says, “The genie will get out of the bottle.” Then what? We are not at the point of knowing this either. Meanwhile AI is going gangbusters, on the lips and in the labs of institutions far and wide, inventing more boldly at every step. CRISPR, the Internet of Things, blockchain, and the rest, none with brakes and no stop signs in sight. Asked recently whether limits of some sort might be necessary, one enthusiast replied, “I certainly hope not!”
Bostrom is not alone. His book on the subject, Superintelligence: Paths, Dangers, Strategies, was followed by, inter alia, Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence, and now Brett Frischmann and Evan Selinger’s Re-Engineering Humanity. All three, and a growing body of articles as well, describe the same phenomenon, which, as with humans, is a bag of mixed blessings that we started opening years ago.
One is the simple fact of work. Technology began changing humanity from the days of the wheel and the plow, with largely beneficial outcomes. Industrial technology brought more benefits, including steady employment, which held true until the machines began erasing the workers, but there remained a demand for minds to plan, evaluate, and make decisions. With advances in AI even these functions yield, shrinking the work force further and dividing its rewards yet more starkly between haves and have-nots. In the taxi industry, GPS has removed the need to know a service area, followed by Uber, Lyft, and virtually anyone with a driver’s license, to be followed next by autonomous vehicles that remove the driver altogether. Who wins in this scenario?
At the same time, and paradoxically, the nature of work has been dumbed down as well. The repetitive labor of the assembly line (called Taylorism after its founding father), where not entirely displaced by machines, is now reinforced by AI calculations of worker efficiency (called Digital Taylorism) through every step in the supply chain. Amazon employees report “feeling like robots,” their time so scrutinized that they urinate in bottles rather than walk to the bathroom. A related article begins, “Instead of asking ‘Are robots becoming more human?’ we need to ask ‘Are humans becoming more robotic?’” It is not really a question. A recent cartoon features a robotic-looking recruiter telling several prospects, “You’ll love it here, it’s a totally dehumanizing environment.”
Another impact is also a commonplace. We are guided through the day by disembodied voices (“turn right at next stoplight”), and return home to tell the equally disembodied Alexa what to do. Digital media and a suite of evolving platforms attract us like moths to the blue screen: students with smartphones walk across campus like zombies, families at dinner stare at their mobile devices (not even the same TV show), a new way for humans to be. While these devices are indisputably handy (and at times can save lives), they enable the aggressive among us to prey at will, and the more retiring of us to retreat to electrons for the experiences of life itself, severed from the kind of physical contact that humans have relied on for tens of thousands of years.
Artificial intelligence leaves us as well at the mercy of those who, armed with a computer-load of data about everything we have ever purchased, photographed, emailed, “liked,” or done, will sell us yet one more thing. Digital face recognition now tells department store sales staff about your buying history and financial profile before you even get to the counter. With AI diagnostics the largest corporations in the world have the ability to know us better than we know ourselves — and to act on it.
More problematic still, and reaching root principles of democracy, AI allows any entity and any country to target us, individually, with rumors and falsehoods to which it deduces we are susceptible, and that we will then carry forward like articles of faith. To wit: the late presidential election. As NPR’s The Hidden Brain reveals weekly, humans are driven by habits and emotions, and whoever can target them most effectively will win the day. AI will run impeccable political campaigns.
At this point, perhaps only as a salve, most treatments of AI turn hopeful: in return for a cornucopia of benefits humans will find a way to keep control, to form a cooperative relationship with this new (and vastly more capable) partner. A wall-size ad seen recently in the Houston airport reads “Let’s write the future with Robots that have what it takes to Collaborate!” Harvard University scholars have taken the same approach, infusing ethics into AI-related courses toward “a future that combines the speed and statistical prowess of intelligent computers” with “innate human talents . . . the frontier of AI systems.”
I am not so sanguine. We have already embraced the gods of easy information, and in turn yielded the field of thinking about it. My classes produce papers steeped in web-drawn data but short on analysis. My nephew boasts that he doesn’t have to know a thing, he can pull it up on Google — which he can, including square roots and the Fall of Rome. Neither am I sure that we humans care about yielding to AI any more than we do about yielding personal data to Google, which makes a fortune selling it to others, who make fortunes using it to target us. We have become a commodity, and we seem content with the bargain.
At road’s end, what is it that makes us human? For several centuries we thought it was the ability to reason but we are now creating systems that out-reason us hands down. (Apparently they also create excellent memes.) Which leaves the human heart. It still exists, but can it marry the machine? Inevitably, it seems, we will find out.
In the meantime, a young scientist in China, for the best of reasons, recently edited the genes of two human embryos to make them resistant to HIV — and met a storm of criticism. Senior peers convoked a meeting and pronounced against the practice, stepping away from the fire. But only for the moment. It was too soon, they said, the practice was premature; it was not the wrong thing, just the wrong time. And when the right time comes? This, inevitably, we are going to find out too.
To be sure, none of this has fully happened yet, but it is difficult to imagine, given human ingenuity and the stakes involved, what limits may even be possible. It is also difficult to imagine, for the first time in the human experience, just how we ourselves will look, think, and act (and reproduce) one century from now. Is there a point down the road when, like water coming to a boil, we stop being Homo sapiens and start being Homo something-else?
And will we care when it arrives?
Oliver Houck on artificial intelligence and our species.