To your garden variety recommendation algorithm, your first dive into a website might present something of a tabula rasa. With nothing to guide its suggestions, a platform will spit out a little of everything: sports, politics, science, kittens. Whatever you do with that content will inform the algorithm’s next move. Over time, as it stockpiles behaviors ranging from the types of articles you share to the amount of time you spend watching a video, the algorithm will start to refine its recommendations.

The goal, of course, is to keep you coming back—to keep turning profits. That means filtering out anything you might deem a waste of time until the algorithm has effectively personalized your experience of a platform, showing you, in theory, only the things you’d want to see. (It’s worth mentioning that these systems don’t have the capacity to craft a distinct model for each and every user; people are probably being binned into pools based on shared preferences and demographics.)
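To make that feedback loop concrete, here is a deliberately oversimplified sketch in Python. The category list, the cohort definitions, and the engagement formula are all invented for illustration; real recommendation systems are far more elaborate, and proprietary.

```python
import random
from collections import defaultdict

# A toy sketch of the feedback loop described above, with hypothetical names
# and cohorts; it is not any platform's real code. Users are binned into
# coarse pools, and each pool's category weights drift toward whatever its
# members engage with.

CATEGORIES = ["sports", "politics", "science", "kittens"]


class ToyRecommender:
    def __init__(self):
        # cohort -> category -> running engagement score; starts uniform,
        # which is the "little of everything" phase.
        self.scores = defaultdict(lambda: {c: 1.0 for c in CATEGORIES})

    def cohort_for(self, user):
        # Real systems use far richer signals; here users are binned only by
        # an age band and a region (both invented for the example).
        return ("under 35" if user["age"] < 35 else "35 and over", user["region"])

    def recommend(self, user, k=3):
        scores = self.scores[self.cohort_for(user)]
        total = sum(scores.values())
        weights = [scores[c] / total for c in CATEGORIES]
        # Sample categories in proportion to past engagement: the more a pool
        # has engaged with a category, the more of it gets served next time.
        return random.choices(CATEGORIES, weights=weights, k=k)

    def record_engagement(self, user, category, watch_seconds, shared):
        # The "behaviors" the article mentions, folded into a single signal:
        # time spent watching plus a bonus for sharing.
        signal = watch_seconds / 60 + (2.0 if shared else 0.0)
        self.scores[self.cohort_for(user)][category] += signal


recommender = ToyRecommender()
user = {"age": 29, "region": "US"}
print(recommender.recommend(user))            # near-uniform mix at first
recommender.record_engagement(user, "kittens", watch_seconds=300, shared=True)
print(recommender.recommend(user))            # kittens now more likely
```

The first call returns a near-uniform mix; every minute watched or article shared tilts the weights, which is all “refining its recommendations” amounts to in this toy version.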

At first pass, this sounds like a great system. It’s certainly a successful one, as evidenced by the billions of people around the world currently enjoying content online. The problem is, it doesn’t take much for personalization to shapeshift into polarization.

This can be a tough distinction to make. Polarization is arguably an extreme form of personalization on topics we’re nervous about, Celis says. Obsessive dives into birdwatching and presidential primaries might plumb the same depths; the potential consequences just don’t carry the same weight.

But these algorithms aren’t drawing those kinds of distinctions. They certainly don’t have a conscience that tells them when they’ve gone too far. Their top priority is that of their parent companies: to showcase the most engaging content—even if that content happens to be disturbing, wrathful, or factually incorrect.

In other words, what’s good for a social media giant isn’t always in line with what’s good for an individual, says Hadi Elzayn, a computer scientist studying algorithmic learning theory at the University of Pennsylvania.

Elzayn compares the process to craving junk food. Asked point blank, most people would say they want to eat a balanced diet. But broccoli doesn’t give us the same high that sugar, fat, and salt do; that’s why most of us will still reach for cookies after waxing eloquent about cutting calories. Every time the broccoli goes untouched, the platform learns that it’s a safe bet to just fill your plate with cookies instead, rather than wasting time and space with more diverse options—your future stomachache notwithstanding. These algorithms aren’t tabulating your long-term health goals. They’re catering to your immediate gratification.

“This principle is often called ‘revealed preference’ in economics,” Elzayn says. “If you want to maximize engagement, or make money, you should follow what someone does, not what they say they’ll do.”
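As a toy illustration of that principle (the topics, numbers, and weights below are invented, not drawn from any real platform), a ranker built to maximize engagement gives a user’s stated preferences zero weight and sorts purely on observed behavior:

```python
# A toy illustration of "revealed preference"; the field names and numbers
# are invented for this example. Stated interest gets no weight at all; only
# observed behavior moves the ranking.

stated = {"balanced_news": 0.9, "celebrity_gossip": 0.2}       # what the user says
observed = {                                                    # what the user does
    "balanced_news":    {"clicks": 2,  "minutes_watched": 3},
    "celebrity_gossip": {"clicks": 14, "minutes_watched": 41},
}

def engagement_score(behavior):
    # An engagement-maximizing ranker weights clicks and watch time; the
    # coefficients here are arbitrary placeholders.
    return 1.0 * behavior["clicks"] + 0.5 * behavior["minutes_watched"]

ranking = sorted(observed, key=lambda topic: engagement_score(observed[topic]), reverse=True)
print(ranking)  # ['celebrity_gossip', 'balanced_news'] -- the cookies win
```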

From here, it’s not a far leap to swaying someone’s opinion, says Nisheeth Vishnoi, a computer scientist specializing in the intersection of artificial intelligence, ethics, and society at Yale University. Over time, the stuff that initially sated inherent cravings might no longer do the trick. Algorithms learn to lure us in with sweeter, fattier, saltier foods—or more radical content—whatever continues to elicit that primal response.

Preferences also become reinforced by familiarity, Vishnoi says. As a person gathers more information on a topic, that knowledge can start to bolster a sense of camaraderie or empathy, he says. Viewpoints get closer, and the sense of connection grows.

“Hate is a relative thing,” he says. “If you see it from a hundred kilometers, it’s different than when you’re right in the middle of it…this may be one reason people are becoming more extreme without even realizing it.”

Behind the curtain

All this paints a pretty grim picture. But the algorithms themselves aren’t the problem—not exactly.

“People tend to anthropomorphize these algorithms, but they’re not something sentient,” says Meredith Broussard, a data journalist who specializes in the ethics of artificial intelligence. “These are all processes happening inside a dumb box.”

After all, these algorithms can only steer users toward content that’s already hosted on their sites—sites that, for the most part, simply aren’t set up to reliably present unbiased, factual news. “There’s a huge disconnect between what people think these platforms are and what they actually are,” says Safiya Noble, a communications expert who studies how digital media platforms affect society at the University of Southern California. “People are accustomed to thinking these platforms are reliable, trustworthy news sources. What they really are is large-scale advertising platforms.”

Image: Facebook has recently come under fire over privacy concerns and its mishandling of fake news and extremist content.

Virality doesn’t really come about organically. Paid-for advertisements, boosting options, and more have rejiggered the landscape of content visibility on social media websites. “This flies in the face of what many people think—that these are free speech zones, that everything has an equal chance of being seen and known,” Noble says. “That’s just not true.”

Unfortunately, visibility and truth don’t always go hand in hand.

Amid growing concerns about misinformation, both Facebook and YouTube have vowed to curtail the spread of false and provocative messages on their platforms, The Guardian reported last month. But with content streaming onto platforms at unprecedented rates, there’s simply no real capacity to monitor every individual post, never mind how it’s weighted or ranked in the feeds of billions of users. Additionally, these algorithms are far from foolproof: It’s become pretty trivial to engineer fraudulent clicks and views to inflate the amount of screen time an idea receives.

And the barriers to entry are far lower than those to removal. Policing, flagging, and removing the overwhelming deluge of extremist and radical content is about as fun as it sounds. The people who hold these jobs are “looking at the worst kinds of filth every day,” Broussard says.

With so many loopholes, it’s no wonder “garbage floats to the top,” Broussard says.

Taking back the reins

There’s no silver bullet for the issues at hand. But that doesn’t mean we’re out of options.

One potential avenue forward might involve placing limits on algorithms, Celis says. Current iterations don’t have a clear cap on the amount of personalization—or polarization—that feeds can offer.

If a user’s political leanings tend leftward, for instance, that person’s feed might eventually be dominated entirely by liberal content. Even if that’s in line with the individual’s interests, this presents a pretty skewed version of reality. Algorithms of the future, however, might be able to maintain more diversity in what users see by occasionally slipping a few more moderate articles into the queue.
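One way to picture such a cap, offered here purely as a sketch rather than a description of the Yale group’s actual method, is a feed that reserves a minimum share of every batch for content from outside the user’s dominant viewpoint:

```python
# A hedged sketch of a "diversity floor," not Celis and Vishnoi's actual
# algorithm: before serving a batch, reserve a minimum fraction of slots for
# items from outside the user's dominant viewpoint.

def diversify(ranked_items, dominant_label, min_other_fraction=0.2, k=10):
    """ranked_items: list of (item, label) pairs, best-first by engagement.
    Returns k items, guaranteeing some come from labels other than
    dominant_label whenever such items exist."""
    reserved = max(1, int(min_other_fraction * k))
    others = [pair for pair in ranked_items if pair[1] != dominant_label][:reserved]
    rest = [pair for pair in ranked_items if pair not in others][: k - len(others)]
    batch = rest + others
    # Re-sort by original rank so the "alternative" items are interleaved
    # into the feed rather than buried at the bottom.
    batch.sort(key=ranked_items.index)
    return batch[:k]


feed = [("op-ed A", "left"), ("op-ed B", "left"), ("explainer C", "center"),
        ("op-ed D", "left"), ("report E", "right"), ("op-ed F", "left")]
print(diversify(feed, dominant_label="left", k=4))
```

The open questions raised below, how many alternatives to show and how to present them, map onto the min_other_fraction knob and the interleaving step; neither has an obviously correct setting.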

Celis, Vishnoi, and their colleagues at Yale are currently putting these ideas to work. Admittedly, it’s not a perfect system: There’s no guarantee that users will take the bait. But it all boils down to perspective. The simple act of showing that the other viewpoint still exists—and might just have some valid points to make—could go far.

Before such algorithms hit the market, though, society will need to weigh in on how they can be best put into action (if at all). The most effective way to present “alternatives” is also still up for debate, Vishnoi says. Presented in the wrong way, or in too high a proportion, they could easily backfire and end up reinforcing already extreme views.

All this means that addressing our algorithm woes will require more than tweaking a few lines of code. Social inequality isn’t a problem that will be solved with an app—and algorithms don’t operate in a vacuum. The best solutions will also require rethinking the larger environment in which these technologies operate. As such, curbing the promotion of extremist ideologies will require pairing computational perspectives with diverse views from the fields of sociology, psychology, public policy, and more.

Many of the most important discussions are yet to come, but Broussard, Noble, and Vishnoi are among those who agree that the companies that host these algorithms must continue to be held accountable. Drugs and other new technologies that carry potential risks must undergo substantial vetting and safety testing before they’re allowed to hit the market, Noble says, but so far, the same hasn’t applied to algorithms—even though they affect billions of people around the world. “We expect that companies shouldn’t be allowed to pollute the air and water in ways that might hurt us,” she says. “We should also expect a high-quality media environment not polluted with disinformation, lies, propaganda. We need for democracy to work. Those are fair things for people to expect and require policymakers to start talking about.”

It’s clear that the spread of misinformation was an unintended consequence of the deployment of algorithms to maximize engagement. Social media platforms—very understandably—followed the money trail. But a lack of foresight doesn’t absolve these companies of culpability, Noble says.

Policies won’t change overnight. In the meantime, though, it’s worth remembering that we’re not powerless against these algorithms—nor are we beholden to ingest only the content they serve up. We all have agency over what we look at, or don’t look at, online. Being more deliberate about these behaviors and expanding the breadth of content we engage with could even send a message to an algorithm, Celis says. Or, at the very least, offer a fresh perspective on a familiar topic. (And for those who’d rather not feed the beast, there are ways to adjust your settings on social media platforms like Facebook for maximum privacy.)

Artificial intelligence has come a long way, but there’s still a lot it can’t do. Search and suggestion algorithms can certainly come in handy when you’re looking up a state capital or converting pounds to kilograms. But relying on these platforms for more complex queries—those that engage empathy, open-mindedness, and instinct—is an exercise in futility, Noble says.

Learning and the acquisition of knowledge inevitably go beyond the limits of social media and search engines. People should acknowledge these limitations and accept that technology is not a panacea for ignorance, she says. Only then can humanity be emboldened to write itself back into the conversation.

After all, these algorithms didn’t just invent themselves. At the end of the day, the next move is ours.

