How the Algorithm Rewards Extremism

    Clive Thompson on Big Tech, the Internet, and the Mess We're In

    While still at Google, Williams began doctoral research into attention and how modern tech was affecting it. “Nobody goes into tech thinking, I want to spy on people and make the world a worse place,” he said. “They’re well intentioned.” But the business models have a propulsive force of their own.


    Eventually, after ten years at Google, Williams left; he wound up at the University of Oxford, where he wrote Stand Out of Our Light, a penetrating meditation on the civic and existential dangers of big tech. “I’ve gone from one of the newest institutions on the planet to one of the oldest,” he says wryly.

    *

    Scale also makes algorithms reign supreme.

    Why? Because once a big-tech firm has millions of users—posting billions of comments a day, or listing endless goods for sale—there’s no easy way for humans to manage that volume. No human can sort through them, rank them, make sense of them. Only computers and algorithms can. When scale comes in, human judgment gets pushed out.


    This is precisely what confronted Ruchi Sanghvi and the Facebook team that crafted the News Feed. They couldn’t show users every post from every friend, because that would drown them in trivia. They needed automation, an algorithm that would pick only the posts you’d most likely find interesting.

    How does Facebook figure that out? It’s hard to know for sure. Social networks do not discuss their ranking systems in much detail, to prevent people from gaming their algorithms; spammers constantly try to suss out how recommendation systems work so they can produce spammy material that will get up-ranked. So few outside the firms truly know. But generally, the algorithms up-rank the type of content you’d expect: posts and photos and videos that have amassed tons of likes or “faves” or attracted many comments, reposts, and retweets, with a particular bias toward recent activity.

    Signals like these help fuel the “recommended” videos on YouTube, the “trending” topics on Twitter or Reddit, and the posts that materialize in your News Feed. When algorithmic ranking works, it’s enormously useful. It separates the wheat from the chaff. But it has biases of its own. Any ranking system based partly on tallying up the reactions to posts will wind up favoring intense material, because that’s the stuff that gets the most reactions. As scholars have found, social algorithms around the internet all seem to reward material that triggers strong emotions. Hot takes, heartstring-tugging pictures, and enraging headlines are all liable to be very engaging. One study found that the top-performing headlines on Facebook in 2017 used phrases that all suggested deeply emotional, OMG curiosity—phrases like “will make you” or “are freaking out” or “talking about it.” Of course, this is perfectly harmless when we’re talking about heartwarming kitten videos or side-eye GIFs from last night’s episode of Claws.
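    As a rough sketch of the ranking logic described above (tally up likes, comments, and reshares, and give recent activity extra weight), the short Python snippet below shows how an engagement-weighted score might work. The field names, weights, and half-life are assumptions made up for illustration, not Facebook’s or YouTube’s actual formula, which uses far more signals than this.

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class Post:
            text: str
            likes: int
            comments: int
            reshares: int
            posted_at: datetime

        def engagement_score(post: Post, now: datetime, half_life_hours: float = 6.0) -> float:
            """Toy ranking score: weighted reaction counts, decayed by age.
            The weights and the half-life are illustrative guesses, not any
            platform's real parameters."""
            reactions = post.likes + 2.0 * post.comments + 3.0 * post.reshares
            age_hours = (now - post.posted_at).total_seconds() / 3600.0
            recency = 0.5 ** (age_hours / half_life_hours)  # newer posts decay less
            return reactions * recency

        def rank_feed(posts: list[Post], now: datetime) -> list[Post]:
            """Order a feed purely by engagement, highest score first."""
            return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)

    Fed two hypothetical posts, a sober policy explainer with a handful of likes and an outrage-bait post with hundreds of angry comments, a ranker like this surfaces the second one every time; nothing in the score distinguishes approval from fury, which is exactly the bias described above.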

    But when it comes to the public sphere, these algorithms can wind up favoring hysterical, divisive, and bug-eyed material. This is not necessarily a new problem, of course. In America, for example, the national conversation has struggled with people’s propensity to focus on fripperies and abject nonsense ever since the early years of the republic, when newspapers were filled with lurid, made-up scandals. But algorithmicized rankings have pushed this long-standing problem into metabolic overdrive. On YouTube, to take one example, video celebrities have raced to trump each other with ever crazier, more dangerous stunts; one father became so obsessed with retaining his audience of 2 million viewers that he began posting videos of his children in active distress (in his “TRAUMATIC FLU SHOTS!!!” video, as BuzzFeed described it, “a young girl’s hands and arms are held above her head as she screams with her stomach exposed”).


    My friend Zeynep Tufekci, an associate professor at the University of North Carolina who has long studied tech’s effect on society, argued in early 2018 that YouTube’s recommendations tend to over-distill the preferences of users—pushing them toward the extreme edges of virtually any subject. After watching jogging videos, she found the recommendation algorithm suggested increasingly intense workouts, such as ultra-marathons. Vegetarian videos led to ones on hard-core veganism.


    And in politics, the extremification was unsettling. When Tufekci watched Donald Trump campaign videos, YouTube began to suggest “white supremacist rants” and Holocaust-denial videos; viewing Bernie Sanders and Hillary Clinton speeches led to left-wing conspiracy theories and 9/11 “truthers.”

    At Columbia University, the researcher Jonathan Albright experimentally searched YouTube for the phrase “crisis actors” in the wake of a major school shooting, then followed the system’s “next up” recommendations. He quickly amassed 9,000 videos, a large percentage of which seemed custom-designed to shock, inflame, or mislead, including “rape game jokes, shock reality social experiments, celebrity pedophilia, ‘false flag’ rants, and terror-related conspiracy theories,” as he wrote. Some of it, he figured, was driven by sheer profit motive: post outrageous nonsense, get into the recommendation system, and reap the profit from the clicks. Recommender systems, in other words, may have a bias toward “inflammatory content,” as Tufekci notes.

    Another academic, Renée DiResta, found the same problem with Facebook’s recommendation system for its “Groups.” People who read posts about vaccines were urged to join anti-vaccination groups, and thence to groups devoted to even more unhinged conspiracies like “chemtrails.” The recommendations, DiResta concluded, were “essentially creating this vortex in which conspiratorial ideas can just breed and multiply.”

    Certainly, big-tech firms keep quiet about how their systems work, for fear of being gamed. But since those systems so self-evidently favor high emotionality, they’re pretty easy to manipulate, as Siva Vaidhyanathan, a media scholar and the author of Antisocial Media, notes.

    “If you’re favoring material that generates attention, the wackier the post, the more it’ll get attention,” he says. “If I were to construct a well-thought-out piece about monetary policy, I might get one or two Likes, from people who are into that. But if I were to post some crack-pot theory about how vaccines cause autism? I’m going to get a tremendous amount of attention—because maybe one or two of my friends are going to say you’re right, and a tremendous number are going to say no, you’re wrong, and here’s the latest study from the CDC proving you wrong. That attention to disprove me only amplifies my message. So that means anything you do to argue against the crazy is counterproductive.” As he concludes: “If you’re an authoritarian or nationalist or a bigot, this is perfect for you.”


    Indeed, this is precisely the problem that recommendation algorithms have visited on countries around the world. In the 2016 US presidential election, far-right forces—including the Russian government, via troll farms intent on sowing division in the US and supporting Donald Trump—found algorithmically sorted, highly emotional social media an enormously useful lever. Everywhere from Facebook to YouTube to Reddit and Twitter, hoaxes and conspiracies thrived.

    There was the infamous “Pizzagate” conspiracy theory that Hillary Clinton ran a child-sex ring out of a Washington restaurant; there were memes claiming Clinton had a Democratic staffer murdered. Meanwhile, white-nationalist memes, crafted on relatively lesser-known right-wing sites, used Facebook, Twitter, YouTube, and other social networks to make the jump into the mainstream.

    It didn’t help that social media had made it easier for people to build ideological echo chambers by following and friending primarily those they already agreed with. That made it even less likely that they’d encounter any debunking of a piece of dis-info or a racist meme.

    And it also didn’t help that it was extremely easy for electoral muck stirrers to use “bots”—fake, automated accounts on Twitter or Facebook—to up-vote conspiracy posts, making them seem artificially popular. Far-right operators and Russian troll farms became expert at wielding bots to sucker recommendation algorithms into picking up their posts, bringing them to the attention of an audience much larger than these marginal trolls could manage on their own—and often thence into even larger mainstream-media coverage, via journalists boggling at all these up-voted online memes.
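    To see why bots are such an effective lever, consider a toy version of the same popularity ranking: a score that only counts reactions has no way to tell a thousand real readers from a thousand scripted accounts. The post titles, numbers, and scoring rule below are hypothetical, made up purely for illustration.

        # Hypothetical illustration: a ranker that only tallies votes cannot
        # distinguish organic support from coordinated bot up-votes.
        def popularity(upvotes: int, comments: int) -> float:
            """Toy score: raw engagement counts, as in the earlier sketch."""
            return upvotes + 2.0 * comments

        explainer = {"title": "School-board budget explainer", "upvotes": 40, "comments": 12}
        conspiracy = {"title": "Shocking 'false flag' claim", "upvotes": 25, "comments": 8}

        # A troll farm adds 500 scripted up-votes; once they land in the tally,
        # they are indistinguishable from real ones.
        conspiracy["upvotes"] += 500

        ranked = sorted(
            [explainer, conspiracy],
            key=lambda p: popularity(p["upvotes"], p["comments"]),
            reverse=True,
        )
        print([p["title"] for p in ranked])  # the botted post now leads the feed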


    In the years before the election, the social networks were, it appears, only dimly aware that these coordinated political campaigns were growing. To be sure, Facebook knew that people spread dumb hoaxes on their service. They’d long fielded complaints about that stuff. In January 2015, they released a new spam-reporting option that let users report a News Feed post as being “false news.”


    But before the media coverage of electoral interference hit, the idea that far-right or foreign groups might be actively collaborating to game their systems was not, as former employees told me, widely on the radar. “I don’t think there was a good awareness of it,” Dipayan Ghosh, who worked for Facebook from 2015 to 2017 on privacy and public policy, tells me. As BuzzFeed found, one Facebook engineer had discovered that hyper-partisan right-wing content mills were getting some of the highest referral traffic from Facebook. But when he posted it to internal employee forums, “There was this general sense of, ‘Yeah, this is pretty crazy, but what do you want us to do about it?’”

    Systems that rewarded extreme expression were troubling in the US, to be sure. They’ve arguably been an even bigger nightmare in parts of the world like India—which has more Facebook users than the US, and where the ruling party began hiring armies of people to write harassing, hate-filled messages about opponents and journalists.

    A virulently anti-Muslim movement has used Facebook to issue theocratic calls to slaughter Muslims. In the Philippines, Rodrigo Duterte has used 500 volunteers and bots to generate false stories (“even the pope admires Duterte”) and harass journalists. Even the ad networks of social media were used by foreign actors looking to monkey-wrench American politics.

    In the spring of 2018, US special counsel Robert Mueller revealed that “Russian entities with various Russian government contracts” had bought social-network ads for months, attacking Hillary Clinton and supporting her rivals Donald Trump and Bernie Sanders. But it wasn’t hard to understand why they’d find this route useful. Google’s, Facebook’s, and Twitter’s ad tech is designed specifically to help advertisers microtarget very narrow niches, making it the perfect way to reach the American citizens they wanted to hype up with conspiracies and dis-info: disaffected, angry, and racist white ones, as well as left-wing activists enraged at neoliberalism. Microtargeting is a superb tool for sowing division, because it means each gnarled, pissed-off group can get its own customized message affirming its anger.

    The spectacle appalls Ghosh. After he left Facebook, he wrote a report for New America arguing that “the form of the advertising technology market perfectly suits the function of disinformation operations.” Political misinformation “draws and holds consumer attention, which in turn generates revenue for internet-based content. A successful disinformation campaign delivers a highly responsive audience.”

    Adtech, the engine of rapidly scaling web business, is “the core business model that is causing all the negative externalities that we’ve seen,” he tells me. “The core business model was to make a tremendously compelling and borderline addictive experience, like the Twitter feed or Facebook Messenger or the News Feed.”

    All of these former employees told me the same thing: Nobody who built these systems intended for bad things to happen. No one woke up thinking, I’d like to spend today creating a system that erodes civil society and trust between fellow citizens. But the drivers of big tech—the rush for scale, the “free” world of ads, the compulsive engagement—brought them there anyway.

    “Facebook does not favor hatred,” Vaidhyanathan concludes. “But hatred favors Facebook.”

    __________________________________

    Adapted from Coders by Clive Thompson, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Clive Thompson.

    Clive Thompson
    Clive Thompson is a longtime contributing writer for the New York Times Magazine and a columnist for Wired. He is the author of Smarter Than You Think: How Technology Is Changing Our Minds for the Better.




