Can Autonomous Cars Learn to be Moral?
How should a self-driving car behave? It’s not as simple as you might think.
It’s a choice no one should ever have to make.
Driving down a road bordered by a rock wall on one side and a deep crevasse on the other, I suddenly have to make a terrible split-second decision: Do I hit the pair of lost hikers who just stepped out from a break in the rock face or swerve off the road, sacrificing myself to the 50-foot cliff drop?
Thankfully, this didn’t happen in real life—it was just a simulation where people decide how a self-driving car should react in similar situations. The game tests myriad other conditions, too, including a futuristic twist on a classic thought experiment in ethics known as the trolley problem: a self-driving car must decide whether to stay its course and hit the five people standing on the road directly ahead, or swerve out of the way, killing a single pedestrian on the sidewalk.
When I tell Iyad Rahwan, one of the game’s creators and an associate professor at the Media Lab at MIT, that the game made me squirm with discomfort, he chuckles. “I’m glad that happened,” he says. “That was exactly the intended purpose. To see that, yes, it’s difficult.”
These exact situations are unlikely to occur—especially for cautious and law-abiding autonomous vehicles—but statistics suggest that with enough self-driving cars on the road, sooner or later a computer will be forced to choose who will be hit and who will be saved. And there’s often no time for human intervention.
Machines That Learn
Autonomous machines—those that can gather information about their surroundings and decide what to do without human input—are slowly emerging from their quiet applications in factories and outer space and entering our immediate environment. You might already own a robotic vacuum cleaner or a self-adjusting thermostat, and soon, when you order a replacement on Amazon, it might be delivered by drone.
But for most people, cars may be the most noticeable vanguard of robotics, and one of the most consequential. “An autonomous car is like a robot that you’re sitting inside of. It exemplifies a lot of the big issues when you’re dealing with machines,” says Wendy Ju, the executive director for interaction design research at Stanford’s Center for Design Research.
One of the biggest issues will be trust. Autonomous cars are a showcase for machine learning, a relatively new way to program computers. Most of today’s programs that run on our phones and in our cars are algorithms consisting of a fixed list of commands that respond in prescribed ways. While these command-response pairs can be extremely complicated, they are all initially programmed into the computer. But machine learning algorithms are different—they adapt their behavior when presented with new information. In short, they learn, and not always in ways we expect.
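The distinction can be sketched in a few lines of Python. The following is a toy illustration, not anything a real vehicle uses: every name and threshold here is hypothetical. The first function is a fixed command-response rule; the class below it adjusts its behavior when given new information.

```python
# A toy contrast between a fixed command-response rule and a rule that
# adapts with new information. Names and thresholds are hypothetical
# illustrations, not anything a real vehicle uses.

def fixed_rule_brake(distance_m: float) -> bool:
    """A prescribed response: brake whenever an obstacle is under 10 m, always."""
    return distance_m < 10.0

class LearnedBrake:
    """Adjusts its braking threshold from experience rather than a fixed rule."""

    def __init__(self, threshold_m: float = 10.0, rate: float = 1.0):
        self.threshold_m = threshold_m
        self.rate = rate

    def decide(self, distance_m: float) -> bool:
        return distance_m < self.threshold_m

    def update(self, distance_m: float, was_too_late: bool) -> None:
        # Widen the braking margin when an incident shows the current
        # rule would have waited too long at this distance.
        if was_too_late and not self.decide(distance_m):
            self.threshold_m += self.rate * (distance_m - self.threshold_m + 1.0)

brake = LearnedBrake()
brake.decide(12.0)                     # False: 12 m initially looks safe
brake.update(12.0, was_too_late=True)  # new information widens the margin
brake.decide(12.0)                     # now True: the rule has changed
```

The fixed rule will behave identically forever; the learner’s behavior after deployment depends on what it has seen, which is exactly why its actions can surprise its own programmers.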
A recent example of machine learning’s potential is Google DeepMind’s computer program AlphaGo, which used artificial neural networks, a type of machine learning, to beat the world champion of Go, a classic board game. AlphaGo’s career began by observing millions of positions in actual Go matches to learn how humans play the game. Then it played against itself over and over again, continuously reevaluating its strategy.
Go is such a complex game that it practically demanded a machine learning approach. There are more possible positions in a game of Go than there are atoms in the universe, making it impossible to write a simple algorithm with a list of game positions and responses. Similarly, an autonomous vehicle will undoubtedly encounter new and unexpected situations, regardless of how many simulations engineers run. The generality and nuance of machine learning is necessary to help self-driving cars make quick decisions.
Most of the time, autonomous vehicles will have to make mundane decisions: when to merge, how quickly to merge, and whether to swerve to avoid a pothole. Not all decisions will be as consequential as the trolley problem, but many will involve engaging in social subtleties, something that’s simple for a person but nuanced and complex enough to stymie robots. For example, if a self-driving car is waiting at a stop light and the light turns green but the car in front doesn’t move, how long should it wait before honking? Will it be able to intuit why the car is stopped at a green light?
“The thing we are discovering when we’re making autonomous cars is how amazing people are. The way we drive and negotiate the road and figure out what’s going on—most of the time in one piece—is pretty amazing,” Ju says.
Automatic Pilots
Already we’re beginning to see computers forced to reckon with thorny circumstances. On May 7, 2016, Joshua Brown, a 40-year-old technology consultant from Canton, Ohio, was killed when his Tesla Model S slammed into the side of a turning tractor-trailer. He was using the car’s Autopilot feature, as Tesla confirmed, and it had failed to spot the white trailer against the bright Florida sky.
Partially autonomous cars like the Tesla Model S control functions such as speed and steering but still require an attentive driver. The mix of human and machine controls blends seamlessly most of the time, but can be problematic in some cases.
There is some precedent for humans handing over the reins of a powerful and potentially deadly machine: the airplane. For over a century, planes have been aided by some form of autopilot. Initially, planes used gyroscopes to hold a steady course and altitude. Today, nearly all of the regular mid-air controls are handled by a computer. “Autopilots for planes have been very effective and quite successful,” says Aaron Steinfeld, an associate research professor at the Robotics Institute at Carnegie Mellon University. Steinfeld points out that, since the introduction of modern autopilots, flying has grown markedly safer.
But aircraft autopilot differs from autonomous vehicles in a key way: it can hand the controls back to a person when it encounters a sticky situation. Many people don’t expect autonomous vehicles to be able to do that. Besides pilots being “highly trained and regularly evaluated,” Steinfeld says, “you have a lot more time to react in a plane. You’re up in the air, there’s time to figure out what’s going on, time to gain situational awareness, and for the most part it’s possible to do that. In a car, you may have a second or two seconds to gain situational awareness and make the appropriate action.” As the recent Tesla crash demonstrated, that’s not always possible.
Shared Control
Still, some degree of autonomy is beginning to seem inevitable in cars. “Most people think that the way we are going to get to a future where we have autonomous cars is going to be really piecemeal. There are a lot of cars that are increasingly automated,” Ju says.
The National Highway Traffic Safety Administration defines five categories of vehicular autonomy. At the lowest, Level 0, cars have no automation; at the highest, Level 4, a car is considered fully self-driving and can safely operate in all situations with no driver input, or even while unoccupied.
Tesla Motors has been marketing cars with Level 2 or 3 features for over two years. Autopilot features like steering assistance and speed control are designed to improve driver safety, and they often do. But they cannot—and are not intended to—replace an engaged human driver.
The May crash is the first known fatality involving Tesla’s Autopilot, but it is certainly not the first documented misuse of the feature. Despite the company’s firm warnings that drivers need to remain alert and ready to take the wheel at a moment’s notice, YouTube is full of videos of Tesla drivers eagerly showing off their hands-free antics, from “driving” from the back seat to napping in a traffic jam.
Tesla is road-testing autonomous features, designed to ease the driving experience and take partial control, while still asking drivers to maintain full control—a seeming paradox to some. Echoing many other experts, Madeleine Elish, a research analyst at the Data and Society Research Institute, says she is concerned that Tesla is “engaging in an unethical research practice by shifting the risk from developer to the user.”
Despite concerns about the way Tesla released Autopilot, the feature might already be saving lives. On its blog, Tesla points out that the fatal crash occurred after 130 million miles of Autopilot use, which compares favorably to the average of 94 million miles driven per fatal car crash in the United States. Waiting for foolproof full automation might cause more people to die in traffic accidents in the meantime. “If we wait for perfect, we’ll be waiting for a very, very long time,” Mark Rosekind, head of the National Highway Traffic Safety Administration, said at a recent conference. “How many lives might we be losing while we wait?”
There is still a lot more to learn about self-driving cars, and there’s a limit to how much researchers can learn through simulations. “Tesla’s doing a good job testing, putting a lot of information out there about what’s going on on the road all the time,” Ju says. Tesla may be taking the necessary—if precarious—first steps toward full automation.
Staying Awake at the Wheel
Tesla states on its blog that “every time that Autopilot is engaged, the car reminds the driver ‘Always keep your hands on the wheel. Be prepared to take over at any time.’ ” To reinforce this warning, the car periodically checks if the driver’s hands are on the wheel, gradually slowing down if they are not.
The problem is, even when a driver’s hands are at the wheel, the system is competent enough to make most of the driving experience mind-numbingly banal.
A research team led by Ju is investigating less prescriptive ways for an autonomous car to keep a largely idle driver engaged. Study participants who were asked to simply supervise a car became sleepy, but people watching videos or reading on a tablet were less drowsy. “Driving is a really boring activity, and they are trying to keep themselves mentally occupied,” Ju says. “So the car should be doing things to keep the driver engaged, just in case it finds itself in a situation that it can’t manage.”
Many companies are hoping to skip past this awkward phase of shared human and machine responsibilities. To avoid the problems of partial autonomy, Google has committed to waiting until they can release a Level 4 fully self-driving car. Their website states that “the full potential of self-driving technology will only be delivered when a vehicle can drive itself from place to place at the push of a button, without any human intervention.”
The Trolley Problem
The promise of self-driving cars is undeniable. They have the potential to eliminate nearly 90% of accidents—those caused by human error—and expand driving privileges to a wider range of people, including the elderly and people with certain disabilities. But driving is a complex activity. How do you teach a car to navigate the many complicated ethical, social, and strategic situations that a person encounters while driving?
A first step might be to teach a car the basics: all the laws and guidelines you need to know to get a driver’s license. But just following the law won’t be enough. Autonomous vehicles will also need to selectively break the rules, just like humans. For example, a self-driving car might blow through a stop sign to avoid being rear-ended by a car whose brakes have failed.
Regardless of how many individual situations a car is taught to navigate, there will always be unforeseen circumstances where the car is forced to make a decision. For example, there will certainly be new variations of the trolley problem, where a car is forced to pick between two evils. Should it run over three pedestrians or drive off a cliff, sacrificing the driver? Should it prioritize the lives of children over adults?
In a recent coauthored paper, Rahwan, the MIT Media Lab professor, presented the results of a survey where participants were asked what the car should do in these dilemmas. Generally, people’s preferences were guided by a utilitarian philosophy, where vehicles should always make the choice that minimizes the death toll. Participants were in favor of the utilitarian option—the one that benefitted the largest number of people—even when it required that the driver sacrifice themselves and their child sitting in the front seat.
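Stated as a rule, the utilitarian preference the survey describes is simple: among the available maneuvers, pick whichever minimizes the expected death toll. The sketch below is a deliberately crude illustration of that rule; the scenario names and casualty counts are hypothetical, not drawn from the study.

```python
# A crude sketch of the utilitarian preference the survey describes:
# among available maneuvers, choose the one with the fewest expected deaths.
# Scenario names and casualty counts are hypothetical illustrations.

def utilitarian_choice(options):
    """options maps each maneuver name to its expected death toll."""
    return min(options, key=options.get)

# The trolley-style dilemma from the simulation: stay the course and hit
# five people, or swerve and kill one pedestrian on the sidewalk.
trolley = {"stay_course": 5, "swerve_to_sidewalk": 1}
utilitarian_choice(trolley)  # "swerve_to_sidewalk"

# The same rule sacrifices the car's occupant whenever that minimizes the
# total toll -- the outcome respondents endorsed in the abstract.
self_sacrifice = {"hit_pedestrians": 3, "drive_off_cliff_with_driver": 1}
utilitarian_choice(self_sacrifice)  # "drive_off_cliff_with_driver"
```

The rule’s brevity is the point: a single `min` over death tolls reproduces both the answer people applaud in surveys and the self-sacrificing choice they refuse to buy.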
But how people react in these situations may not be an ideal guide. Autonomous vehicles might be expected to make decisions that a human driver never would. Suppose, for example, a truck is barreling down the road, headed straight at a group of five pedestrians, Rahwan says. Your self-driving car could swerve to intercept the truck and save the five pedestrians—but kill you in the process.
We certainly don’t expect humans to make this calculation, let alone act on it. But a genuinely utilitarian vehicle—one that placed the highest value on the total number of lives saved—might easily make a strikingly unhuman, self-sacrificing decision.
In Rahwan’s research, people thought that autonomous vehicles in general should be manufactured with utilitarian preferences, but they said they would be unwilling to buy a car knowing it might sacrifice their lives. By producing self-driving cars with strictly utilitarian algorithms, “you actually end up with a less utilitarian outcome, where people don’t buy self-driving cars and they keep killing each other by texting on the road,” Rahwan says.
The moral that Rahwan draws for fully autonomous cars echoes the way that Tesla has defended its Autopilot feature after the fatal crash: focusing too much on the rare, tragic moments will prevent the adoption of technology that would, overall, save lives.
Such cases emphasize the difficulties in anticipating and programming every eventuality and subtlety. Autonomous vehicles will almost certainly need to learn, continuously updating their internal models of the roadway and their standards of acceptable driving. “Maybe this is how we need to think about machines,” Rahwan says. “They are kind of like children, so we have to be careful how we teach them and we have to make sure they are taught in a good manner and by the right people, because otherwise they would be far less predictable.”
Trusting Machines
As artificial intelligence develops increasingly subtle and complex decision-making processes, it will become harder to determine who’s accountable for a machine’s actions: the engineer who designed it, the consumer who purchased it, or the machine itself. Because machine learning adapts itself in ways we can’t predict, we won’t necessarily be able to study the source code of an autonomous machine to understand why it acted the way it did.
Instead, it might be more of a process of reconstruction. “If you are storing large amounts of data about what the system is seeing, where its decision making process was, where decision points occurred, you can go back through and see why it did something and what happened when it did that. It’s like a very detailed black box on an airplane,” says Steinfeld, the Carnegie Mellon researcher.
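A minimal sketch of that kind of black-box logging, with entirely hypothetical field names: record each decision point together with the inputs the system saw, so its behavior can be reconstructed after the fact.

```python
import time

# Minimal "black box" log: each entry records when a decision was made,
# what the system saw, and what it did. Field names are hypothetical.
log = []

def record_decision(sensor_snapshot, action):
    log.append({
        "t": time.time(),
        "inputs": sensor_snapshot,
        "action": action,
    })

# During operation, every decision point is appended to the log.
record_decision({"obstacle_m": 8.2, "speed_kph": 40}, "brake")
record_decision({"obstacle_m": 25.0, "speed_kph": 40}, "maintain")

# Reconstruction after the fact: why did the car brake?
braking = [entry for entry in log if entry["action"] == "brake"]
# braking[0]["inputs"] shows an obstacle 8.2 m ahead at 40 km/h
```

Real systems would log far richer state (raw sensor frames, model outputs, confidence scores), but the principle is the same: you explain the decision by replaying what the machine saw, not by reading its code.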
Without fully understanding the motivation for a self-driving car’s decisions, we might be slow to trust them, particularly since autonomous vehicles might not drive in a familiar way. Roads used exclusively by self-driving cars, for example, may not need traffic lights at all, since the cars could safely negotiate intersections at uncomfortably high speeds.
“We need to get over the transparency of the algorithm itself and look more at the transparency of the behavior,” Rahwan says. This, he points out, is basically what we do for humans. “When we hire somebody as a driver, we don’t open their brains and look into them to make sure they are ethical.”
In general, we make judgments about people’s character and form expectations about their future actions based on their past behavior, despite how little understanding we have of the complex biological and sociological factors that comprise a human. Perhaps we should take the same approach with artificial intelligence.
There are simple ways that researchers can foster that acceptance. Steinfeld and his team explored conditions under which people are more inclined to use the autonomous features of a hypothetical search-and-rescue robot. “What we found was that people generally do a lot better in appropriate use of autonomy in terms of letting the robot take control when the robot provides some feedback to the user about its current capabilities,” he says.
Robots that usually act like humans and provide human-like feedback can give the false impression that the robot will always act like a person. Speaking of her own experience in a self-driving car, Ju says that “when you’re sitting in a car and the car is driving down the road and it’s not doing anything weird, it’s actually super boring, and you get to a point where it’s easy to over-trust the car.” When the self-driving car she was in began to slowly veer out of its lane, Ju assumed it would realize the mistake and correct it, like a sleepy driver, but it didn’t. Unlike a person, the car lacked a way to understand the mistake it was making.
Some researchers like Ju have suggested that self-driving cars take their own version of a driving test. Naturally, it would have to be very different from the one humans take. Machines are very good at memorizing rigid rules—which is most of what humans have to learn—and very bad at making judgment calls and reconfiguring general knowledge to specific situations—things we take for granted when a human takes a driving test.
Self-driving cars are pressing together two extremes—the complexity and fuzziness of ethics and social etiquette with the precise and logical world of computing—forcing both sides to show their hand. When building autonomous machines, Rahwan says, “we can no longer afford to hide behind uncertainty or vagueness.”
Illustrations credit: Alicia DeWitt