One of the most widely known practitioners of artificial intelligence never used a computer or built what we’d think of as a robot. Mary Shelley’s Victor Frankenstein, the “modern Prometheus” of the novel’s subtitle, created a being capable of thinking and acting on his own, and his story captivated readers from the moment Frankenstein first appeared on shelves. But that success belies the fact that Shelley was ahead of her time. What once seemed like a bizarre fantasy—the notion that man could create a being who could think as we do—is, today, a fascination. It helps that we’ve grown closer, in our world, to making Frankenstein’s Promethean dream a reality. But the stories have also built upon themselves intriguingly, adding wrinkles to Shelley’s crisply told moral parable about how stealing the fire of the gods was a bad idea. Today’s A.I. stories thrive on ambiguity. They test an audience’s sympathies, challenge our preconceptions, and require very real intelligence.
Frankenstein relies on the notion that humans will inherently reject artificial intelligence as unnatural and bizarre. A great deal of that is owed to the particularly odd appearance of Frankenstein’s monster (memorably played by Boris Karloff as a square-headed ghoul in the 1931 film adaptation), as well as Dr. Frankenstein’s mission to create life simply to prove he can. But what about when A.I. comes in a more attractive package, one that has real utility? The 1920 play R.U.R. became a sensation for depicting a world in which “robots”—a term the Czech playwright Karel Čapek introduced to the world, crediting his brother Josef with coining it—have come to be an inexpensive source of labor. Inevitably, they rise up to slay their masters.
It was a chilling vision, and one that’s often been revisited. Sci-fi author Isaac Asimov came up with the “Laws of Robotics,” an influential concept, to help clarify how humans might constrain their creations. These laws dictate, in order of importance, that a robot may not harm a human (or, through inaction, allow one to come to harm), must obey human orders, and must protect its own existence, with each law yielding to the ones before it.
For Asimov, robot intelligence is categorically different from humans’: we’re governed by ethics we can change in the moment, whereas for robots, self-preservation comes only after protecting and serving man. In later stories, Asimov’s robots come to feel they can only serve humanity properly by ruling over it. That paradox will sound familiar to anyone who’s seen 2001: A Space Odyssey, the 1968 film that made HAL 9000 a household name. A spaceship operating system, HAL 9000 decides to kill two astronauts because he is unable to reconcile his servility with orders to conceal the true nature of his mission. The easiest solution is just to be done with the humans.
HAL is best remembered as a quiet menace with an unblinking red eye, yet one who gives way to a strange sort of humanity. When he’s being deprogrammed, his meltdown into incoherence, ending with a rendition of the song “Daisy Bell,” has a strange pathos, even as we know he’s simply running through programming. It’s uncanny: he’s like us, even as we know he utterly lacks feeling. Operating systems in subsequent films have tended to ditch the appearance of humanity.
The Matrix (1999) depicted a burned-out world destroyed by conflict between man and machine but characterized the machines that governed it (and thrived off energy produced by the bodies of imprisoned humans) mainly as skittering, spider-like entities. The difference between man and machine couldn’t be more stark. Even when the machines take on human form, as with Hugo Weaving’s sentient program Agent Smith, there’s something less real about him than about HAL. (Part of it may be Agent Smith’s rubbery face and ability to morph—HAL is easy to get accustomed to visually, while Agent Smith makes the uncanniness of A.I. literal. He is like us but not.)
HAL wants to kill a couple of humans for what he understands to be the greater good of humanity; Agent Smith is motivated by hate. One of the most recognizable cyborgs in all of pop culture is a bit more complicated. In Arnold Schwarzenegger’s debut appearance as the Terminator (in 1984), he’s deployed as an assassin to kill Sarah Connor, the woman whose unborn son will grow up to destroy the all-powerful A.I. network Skynet. In Terminator 2 (1991), the Terminator serves as a protector, thanks to reprogramming he’s undergone in the future.
Skynet is the darkest vision of A.I.—a program created to help humanity but determined to supersede its creators—but the Terminator provides a slightly more hopeful vision. We may not escape artificial intelligence, but perhaps we can code it to help us. There have been many purely helpful androids in fiction, figures such as Rosie the Robot in The Jetsons and R2-D2 and C-3PO in Star Wars. Those cheerful helpmeets bore human society no more ill will than an occasional wisecrack (or, in R2’s case, a slightly annoyed beep and whistle). More adult conceptions of the helper bot came courtesy of the starship Enterprise. On Star Trek: The Next Generation, Brent Spiner’s android, Data, falls short of full humanity, unable to grasp our emotions and our wit. Yet his striving to become more human makes him seem, paradoxically, movingly like the similarly ambitious and curious men and women he’s meant to help.
Similar fantasies of ambiguity, about androids who crave not domination but the same things humans do, have been explored intriguingly, as in Ex Machina (2015), about a robotic love object who, surprisingly, turns out to have a mind of her own and a taste for freedom. Westworld, the HBO series based on a Michael Crichton film, plays with similar themes—its robot “hosts” are there to show humans a good time in a futuristic theme park, but the robots crave freedom. The body count they accumulate along the way is unfortunate but only incidental.
This represents a way forward for the A.I. story. Between the Terminator and Matrix franchises, it’s hard to imagine art coming up with a better way to represent technology malevolently hunting down humans in order to control the world. But simple attempts at coexistence strike the same strange and melancholy notes Frankenstein did. Consider Westworld’s twist-reveals of characters presumed human as actually being androids—suggesting that the difference, though fundamental, is smaller than we might think. In the virtuosic film Her (2013), Scarlett Johansson’s Siri-like character understands everything except why her owner sometimes fails to see her as effectively human—and she’s not totally wrong to be confused. In A.I. (2001), a young robot boy, created to love humans and to be loved, can’t understand why his odd, not-quite-right reactions engender fear and hatred in the humans he cares for. And then there’s 1982’s Blade Runner, which may be the single most influential film ever made about artificial intelligence. In it Harrison Ford plays a cop who, tasked with weeding out humanoid robots, comes to wonder (possibly correctly) if he’s one too. Blade Runner’s sweeping imagination about the ways in which artificial intelligence could merge with society—as malefactor, as protector, as some combination of the two, or perhaps just as a rogue force trying to survive—gave the genre new heights toward which to aspire.
This is where fiction about artificial intelligence takes root in our minds: when it makes clear that the gap between us and the Frankenstein creations we make is getting narrower and narrower, and that they may not so much destroy us as come up with their own ethics and ways of living. One of the last substantial differences may be that we can create and consume art about the conundrum of A.I. overtaking us. When a robot directs a movie like Blade Runner, then we’ll be in trouble.