“We need to get over the transparency of the algorithm itself and look more at the transparency of the behavior,” Rahwan says. This, he points out, is basically what we do for humans. “When we hire somebody as a driver, we don’t open their brains and look into them to make sure they are ethical.”

In general, we judge people's character and form expectations about their future actions based on their past behavior, despite our limited understanding of the complex biological and sociological factors that make up a person. Perhaps we should take the same approach with artificial intelligence.

There are simple ways that researchers can foster that acceptance. Steinfeld and his team explored conditions under which people are more inclined to use the autonomous features of a hypothetical search-and-rescue robot. “What we found was that people generally do a lot better in appropriate use of autonomy in terms of letting the robot take control when the robot provides some feedback to the user about its current capabilities,” he says.
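The kind of capability feedback Steinfeld describes can be pictured as a status report the robot surfaces before the operator hands over control. The sketch below is a hypothetical illustration of that idea, not the team's actual system; the class, the confidence values, and the 0.7 threshold are all invented for the example.

```python
# Hypothetical sketch: a robot reports its current capabilities so the
# operator can decide whether handing over control is appropriate.
from dataclasses import dataclass

@dataclass
class CapabilityReport:
    task: str          # what the robot is offering to do autonomously
    confidence: float  # robot's self-assessed reliability, 0.0 to 1.0
    caveat: str        # plain-language limitation shown to the operator

def request_autonomy(report: CapabilityReport, threshold: float = 0.7) -> bool:
    """Surface the robot's self-assessment, then recommend a choice."""
    print(f"Robot offers to take over: {report.task}")
    print(f"  self-assessed reliability: {report.confidence:.0%}")
    print(f"  known limitation: {report.caveat}")
    if report.confidence < threshold:
        print("  recommendation: keep manual control")
        return False
    print("  recommendation: autonomy is appropriate here")
    return True

# Example: a search-and-rescue robot navigating terrain it has mapped well.
request_autonomy(CapabilityReport(
    task="navigate mapped corridor",
    confidence=0.85,
    caveat="mapping degrades in heavy dust",
))
```

The design choice matches Steinfeld's finding: the robot doesn't simply take control, it tells the user what it believes it can do and how reliably, and the decision stays with the person.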

Robots that usually act like humans and provide human-like feedback can give the false impression that they will always act like a person. Speaking of her own experience in a self-driving car, Ju says that "when you're sitting in a car and the car is driving down the road and it's not doing anything weird, it's actually super boring, and you get to a point where it's easy to over-trust the car." When the self-driving car she was in began to slowly veer out of its lane, Ju assumed it would notice the mistake and correct it, as a sleepy driver would, but it didn't. Unlike a person, the car lacked a way to understand the mistake it was making.
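The failure Ju describes, a car that drifts without recognizing its own drift, is the absence of a self-monitoring loop. Here is a minimal sketch of what such a loop might look like; the lane width, the alert threshold, and the escalation behavior are assumptions for illustration, not how any production system works.

```python
# Hypothetical sketch: a lane-keeping monitor that flags its own drift
# to the driver instead of silently continuing, unlike the car Ju rode in.

LANE_HALF_WIDTH_M = 1.8   # assumed half-width of the lane, in meters
DRIFT_ALERT_M = 0.5       # assumed lateral offset at which to warn the driver

def monitor_lane_offset(offsets_m):
    """Watch the car's lateral offset from lane center over time and
    escalate from a warning to handing control back to the driver."""
    for t, offset in enumerate(offsets_m):
        if abs(offset) >= LANE_HALF_WIDTH_M:
            return f"t={t}: lane departure, handing control back to driver"
        if abs(offset) >= DRIFT_ALERT_M:
            print(f"t={t}: drifting {offset:+.2f} m, alerting driver")
        else:
            print(f"t={t}: centered ({offset:+.2f} m)")
    return "run complete"

# A slow, steady veer like the one Ju experienced:
print(monitor_lane_offset([0.1, 0.3, 0.6, 1.0, 1.5, 1.9]))
```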

Some researchers, like Ju, have suggested that self-driving cars take their own version of a driving test. Naturally, it would have to be very different from the one humans take. Machines are very good at memorizing rigid rules, which cover most of what humans have to learn, and very bad at making judgment calls and applying general knowledge to specific situations, the things we take for granted when a human takes a driving test.
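One way to picture that asymmetry is as two kinds of test cases: rigid rules a machine can check mechanically, and open-ended judgment scenarios that resist a simple pass/fail check. The split below is an invented illustration, not an actual test proposed by Ju or anyone else.

```python
# Hypothetical sketch: two kinds of checks a machine "driving test" might
# contain. Rule checks are mechanical; judgment scenarios are the hard part.

def check_rigid_rule(speed_mph: float, limit_mph: float) -> bool:
    """Rigid rule: trivially checkable, and trivially easy for a machine."""
    return speed_mph <= limit_mph

def check_judgment_scenario(scenario: dict) -> str:
    """Judgment call: there is no crisp pass/fail condition to encode.
    A real test would need humans to grade the car's response."""
    return (f"'{scenario['event']}': requires graded human evaluation, "
            "not a boolean check")

print(check_rigid_rule(speed_mph=28, limit_mph=30))  # True: the easy part
print(check_judgment_scenario({"event": "ball rolls into the street"}))
```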

Self-driving cars press two extremes together: the fuzzy, complex world of ethics and social etiquette and the precise, logical world of computing, forcing both sides to show their hand. When building autonomous machines, Rahwan says, "we can no longer afford to hide behind uncertainty or vagueness."

Illustrations credit: Alicia DeWitt

