How the Algorithm Rewards Extremism

    Clive Thompson on Big Tech, the Internet, and the Mess We're In

    I could say, again, that software is eating the world, though it might be more accurate at this point to say it’s “digesting” it. But what’s also noticeable is that size matters. These days, some of the biggest civic impacts come from the truly titanic, globe-spanning tech companies that sit in the midst of our social and economic life. “Big Tech,” as the journalist Franklin Foer dubs it.

    Indeed, there are now a surprisingly small handful of firms that dominate the public sphere. There are the ones that govern how we communicate (like Facebook, Twitter, YouTube, Apple, and Netflix), ones that touch commerce (Amazon, Uber, Airbnb), and the information brokers and toolmakers of our work lives (Google, Microsoft). “Big Tech” is a useful way to think about the particular challenges of software that dominates its area, because it highlights the near monopolies many of these firms enjoy. And they’re mostly strikingly young companies. Many rose to dominance in barely more than a decade. Their histories are marked by frantic, metastatic growth.

    This is not surprising, because it’s in the nature of software itself. A software firm ships code, and code is a historically weird type of product. It’s a machine that does things but can be replicated globally for little to no marginal cost. It’s as if Chevrolet could design a single Camaro and then instantaneously teleport 200 million copies to the driveways of every household in America. This is a fact that strikes, and occasionally even stuns, the engineers at big firms.

    At one point while writing this book, I visited with Ryan Olson, a lead engineer for Instagram, right after his team had pushed out a massive update (introducing the wildly popular video Stories, cribbed from their rival Snapchat). Olson told me how, a mere hour or two after the update, he’d been traveling around San Francisco—in bleary, post-crunch exhaustion—and noticing everyday people using his fresh, new code.

    “It’s a pretty cool experience,” he said, “to be riding on a train, or last night I was at the climbing gym, and I looked over and someone is using the product. I don’t know if there’s ever been historically any other way where you could reach so many people”—or where “so few people define the experience of so many.” The thrill of overnight growth is vertiginous, powerful, and addictive. It’s why so many coders—particularly those making consumer products—have a holy reverence for scale. They love the idea of creating something that grows at an exponential pace: It’s used by two people, then four, then eight, and soon the entire damn planet. Why, if you can spread your creation around the world so easily, would you ever want to do something small? Isn’t there something kind of sad about a piece of code that doesn’t grow at a frantic, kudzu-like pace?

    Indeed, among the reigning kingpins of Silicon Valley there’s a sort of contempt for things that fail to become massive. Smallness seems like weakness. You may recall the story of Jason Ho, the hacker who created a thriving small business by making time-clock code used by companies around the world. It made so much money that he was able to spend much of his twenties with the freedom to travel and invest. If I’d done that, I’d certainly have considered it a success.

    But when I mentioned Ho’s company to the thirty-something founder of a very large tech firm, he scoffed. To him, it was a “lifestyle business”—Silicon Valley–speak for an idea that will never scale into the stratosphere. That sort of product is fine, sure, he told me, but Google could do the same thing and put him out of business in a second. If you weren’t aiming to be giant, he asked with a shrug, why bother doing it?

    This sentiment is arguably even more pronounced in other software markets like China, which has a famously competitive, winner-take-all tech market. When in 2015 I toured the offices of the e-commerce firm Meituan in Beijing, the company was only five years old but in a frenzy of expansion, hiring young engineers as fast as they emerged from computer science programs. The CEO Wang Xing and I peered out over the sprawling floor of coders, festooned with hundreds of plants to make the scene feel less sterile.

    “In China, you either have to become massive or you will get crushed,” Wang told me soberly. (Meituan alone had outlasted perhaps a few thousand competitors, the tech investor Kai-Fu Lee estimated when I spoke to him.) In the world of high-tech firms, the race to scale is propelled by a carrot (the magical ease of duplicating and running code worldwide) and a stick (the shark-like competition).

    The lust for scale is also fueled by the dictates of venture capitalists. They place their bets on dozens or hundreds of companies, encouraging them all to grow ferociously. The vast majority won’t, but with luck, one or two will break out—making so much money, so quickly, that it makes up for all the other losses. Venture capital is thus perfectly content to accept an ambitious flameout. It adores a sudden, exploding success. But the one thing it finds useless and annoying is a company that’s merely stable, maybe growing a little. Even if that firm is making a modest profit, who cares? The investor isn’t looking for stability: they want rapid growth that leads to a bigger return on their investment.

    The Y Combinator accelerator—which takes in several dozen tech firms each year, to try and help them into the big leagues—ends each cohort’s program with a Demo Day, where the young companies show off their products for a room of handpicked venture capitalists. The start-ups are inevitably desperate to include in their presentation a hockey-stick chart—the one that shows their user base suddenly blasting off into the sky.

    One evening, I visited the hacker-house of People.ai, a company that had done its Y Combinator demo just days earlier. Its founders pecked at keyboards and exhaustedly described how they’d spent the three months in Y Combinator frantically registering new clients for their service, in an attempt to produce that hockey stick. “You think about it, the three months it’s all about building the numbers—but you’re going to show them off for only 10 seconds, on your ‘growth’ slide,” Oleg Rogynskyy, the cofounder, said.

    Kevin Yang, the lead programmer and cofounder, laughed while remembering the investors sitting there, arms crossed, awaiting the growth figures. “Is that hockey stick not hockey stick enough?” he joked.

    “The X axis has to be half the page,” Rogynskyy said.

    *

    Scale, of course, brings enormous benefits. It’s certainly financially valuable for the big tech firms! If they grow fast enough, they scare off competitors and develop the lock-in of “network effects.” When a social network like Facebook or WeChat gets big enough, users can’t easily stop using it, because all their friends are there. And certainly, when a tech firm grows rapidly it can be enormously beneficial for users, too. Because of Facebook’s global ubiquity, it’s now the easiest way for people to organize virtually anything, large or small, from family meetups to political fund-raising campaigns to search-and-rescue efforts. The new attention to police abuse of power in recent years? It’s been fueled partly by the commanding size of Facebook and Twitter—which lets users rapidly spread video of horrifying and incontrovertible examples, including livestreamed ones. It is these firms’ huge footprint that permits everyday people to wield them as a broadcasting network.

    But the frantic drive for scale also changes software firms. It inexorably pushes them toward tactics that range from dodgy to exploitative. After all, to scale at such a ferocious clip, you can’t charge your users any money up front. The service needs to be “free.” This is particularly true for social networks: They can’t get a million users overnight if every user has to shell out, say, $10 to join. So the only other way to make money is to get as huge as possible, then sell advertising to your audience. Facebook and Twitter and Google have all adopted this free-to-use model—indeed, Facebook boasts on its sign-up page that “It’s free, and always will be.” And the ad market has been deeply lucrative for them: In 2017, Twitter’s revenues were $2.4 billion, Facebook’s were $40.65 billion, and Google dwarfed them both with over $100 billion.

    Yet advertising changes the nature of how software firms treat their users—something that many coders and designers, deep inside the bowels of the companies, began to uneasily apprehend.

    One such techie was James Williams. A thoughtful, philosophical guy who’d studied English in college before earning a master’s degree in product-design engineering, he’d joined Google in the mid-00s to work as a strategist on the firm’s search advertising systems. He was drawn in by the mission of improving people’s access to information. Googlers talked about that mission all the time in soft-glow terms, and he loved it. “The default view was that ‘more tech is better,’ ‘more information is better,’” he notes.

    But Williams eventually began to notice the same side effects that had perturbed Leah Pearlman and Justin Rosenstein, the pair who helped invent Facebook’s Like button. Like them, Williams saw that any tech firm selling ads inevitably becomes motivated to keep its users staring endlessly at the app. After all, you can only deliver ads to someone while they’re staring at your service. So you quickly begin building as many psychological lures as possible into your code.

    The big tech firms would pepper us users with alerts, trying to interrupt us during other tasks, to get us to come back to the mother ship. They’d slap little “quantification” numbers everywhere, to stoke our curiosity and our desire to “clean things up”: You have 14 new items in your feed! What could they be? And they’d make all these alerts bright red, to increase the chance we’d pounce on them. These trends, Williams argued, went into overdrive after the iPhone emerged.

    “Before mobile, the internet was bounded in a place, because you could step away from it and close the laptop,” he tells me. “But once it was in your pocket, it was a firehose.” It is easy for engineers, Williams realized, to justify these psychological tricks—to argue they’re good. After all, they’d test each new tweak and trick by using A/B tests: Make the alert red, make it yellow, and see which one users click on more often. Red wins, so it must be the right choice! This data-driven form of design can make each psychological trick seem objectively correct: If users click on it, it must be what they want.
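
    To make that data-driven logic concrete, here is a minimal, hypothetical sketch of the kind of A/B comparison Williams describes: two alert colors, two slices of users, and whichever color gets tapped more (with enough statistical confidence) wins. It is written in Python with invented numbers; it is not any platform's actual code.

        import math

        def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
            """How confidently does variant B's click rate beat variant A's?"""
            p_a, p_b = clicks_a / n_a, clicks_b / n_b
            p_pool = (clicks_a + clicks_b) / (n_a + n_b)
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # one-sided
            return z, p_value

        # Hypothetical results: 100,000 users see each alert color.
        yellow_clicks, yellow_n = 4_210, 100_000
        red_clicks, red_n = 4_870, 100_000

        z, p = two_proportion_z(yellow_clicks, yellow_n, red_clicks, red_n)
        print(f"yellow click rate: {yellow_clicks / yellow_n:.2%}")  # 4.21%
        print(f"red click rate:    {red_clicks / red_n:.2%}")        # 4.87%
        print("ship red" if p < 0.05 else "no clear winner")

    Nothing in a test like this asks whether the extra taps left anyone better off; the metric alone decides.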

    To the scale-driven engineering mind, the ethical questions of “What should we be making?” are easily subsumed into the sheerly technical question of “What will help the system grow more and have a bigger throughput?” One anonymous former Facebook employee put it neatly, in a comment to BuzzFeed: “They believe that to the extent that something flourishes or goes viral on Facebook—it’s not a reflection of the company’s role, but a reflection of what people want. And that deeply rational engineer’s view tends to absolve them of some of the responsibility, probably.”

    Once advertising and growth become the two pillars of a big-tech firm, it’s nearly inevitable that they’ll seduce their users into endless, compulsive use—or “engagement,” as it’s euphemistically called. “You’re trying to manage your attention, and they have some of the smartest people in the world trying to distract you,” as Williams says. The end result, he decided, is that there’s a fundamentally adversarial relationship between the goals of the coders and designers and those of their users. The former are constantly trying to trick and nudge users into compulsive behavior. It works because the nudges are subconscious, or algorithmically invisible. If they were more obvious, we might reject them. Imagine, Williams says, that GPS worked in a similarly adversarial fashion. You’d ask it to take you home, and it would insert five detours along the way, to bring you past locations that satisfy the needs of advertisers.

    Even worse, the dictates of digital advertising have led to a ceaseless tracking of our individual activities online. If a tech firm is offering advertisers the ability to custom target me, they want to know as much as they can about me: what other websites I surf, what neighborhoods I visit, what keywords occur in my emails and public postings. The advent of deep learning makes tech firms even hungrier for more of our personal info, because deep learning works best when it has mammoth amounts of “training” data, the better to predict what ad we’d like to see or what mood we’ll be in on Mondays. This has produced a world in which Facebook even collects information on phone calls you’ve made on your smartphone, as the novelist and University of Houston professor Mat Johnson discovered (“cool totally not creepy,” he joked on Twitter).

    While still at Google, Williams began doctoral research into our attention and how modern tech was affecting it. “Nobody goes into tech thinking, I want to spy on people and make the world a worse place,” he said. “They’re well intentioned.” But the business models have a propulsive force of their own.

    Eventually, after ten years at Google, Williams left; he wound up at the University of Oxford, where he wrote Stand Out of Our Light, a penetrating meditation on the civic and existential dangers of big tech. “I’ve gone from one of the newest institutions on the planet to one of the oldest,” he says wryly.

    *

    Scale also makes algorithms reign supreme.

    Why? Because once a big-tech firm has millions of users—posting billions of comments a day, or listing endless goods for sale—there’s no easy way for humans to manage that volume. No human can sort through them, rank them, make sense of them. Only computers and algorithms can. When scale comes in, human judgment gets pushed out.

    This is precisely what confronted Ruchi Sanghvi and the Facebook team that crafted the News Feed. They couldn’t show users every post from their friends, because that would drown them in trivia. They needed automation, an algorithm that would pick out only the posts a user would most likely find interesting.

    How does Facebook figure that out? It’s hard to know for sure. Social networks do not discuss their ranking systems in much detail, to prevent people from gaming their algorithms; spammers constantly try to suss out how recommendation systems work so they can produce spammy material that will get up-ranked. So few outside the firms truly know. But generally, the algorithms up-rank the type of content you’d expect: posts and photos and videos that have amassed tons of likes or “faves” or attracted many comments, reposts, and retweets, with a particular bias toward recent activity.

    Signals like these help fuel the “recommended” videos on YouTube, the “trending” topics on Twitter or Reddit, and the posts that materialize in your News Feed. When algorithmic ranking works, it’s enormously useful. It separates the wheat from the chaff. But it has biases of its own. Any ranking system based partly on tallying up the reactions to posts will wind up favoring intense material, because that’s the stuff that gets the most reactions. As scholars have found, social algorithms around the internet all seem to reward material that triggers strong emotions. Hot takes, heartstring-tugging pictures, and enraging headlines are all liable to be very engaging. One study found that the top-performing headlines on Facebook in 2017 used phrases that all suggested deeply emotional, OMG curiosity—phrases like “will make you” or “are freaking out” or “talking about it.” Of course, this is perfectly harmless when we’re talking about heartwarming kitten videos or side-eye GIFs from last night’s episode of Claws.
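
    A toy scoring function makes the bias easy to see. The sketch below follows only the general recipe described above (tally up reactions, with a bias toward recent activity); the weights, the decay, and the example posts are all invented, not any platform's real formula.

        import time

        def engagement_score(post, now=None, half_life_hours=6.0):
            """Toy ranking: tally reactions, then decay the tally by the post's age."""
            now = now or time.time()
            reactions = post["likes"] + 2 * post["comments"] + 3 * post["reshares"]
            age_hours = (now - post["posted_at"]) / 3600
            decay = 0.5 ** (age_hours / half_life_hours)  # newer posts count for more
            return reactions * decay

        one_hour_ago = time.time() - 3600
        posts = [
            {"id": "thoughtful essay", "likes": 2, "comments": 1, "reshares": 0,
             "posted_at": one_hour_ago},
            {"id": "enraging hot take", "likes": 180, "comments": 240, "reshares": 95,
             "posted_at": one_hour_ago},
        ]

        feed = sorted(posts, key=engagement_score, reverse=True)
        for post in feed:
            print(f"{post['id']}: score {engagement_score(post):.0f}")

    Because nothing in the score distinguishes delight from outrage, whatever provokes the most reactions floats to the top.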

    But when it comes to the public sphere, these algorithms can wind up favoring hysterical, divisive, and bug-eyed material. This is not necessarily a new problem, of course. In America, for example, the national conversation has struggled with people’s propensity to focus on fripperies and abject nonsense ever since the early years of the republic, when newspapers were filled with lurid, made-up scandals. But algorithmicized rankings have pushed this long-standing problem into metabolic overdrive. On YouTube, to take one example, video celebrities have raced to trump each other with ever crazier, more dangerous stunts; one father became so obsessed with retaining his two million viewers that he began posting videos of his children in active distress (in his “TRAUMATIC FLU SHOTS!!!” video, as BuzzFeed described it, “a young girl’s hands and arms are held above her head as she screams with her stomach exposed”).

    My friend Zeynep Tufekci, an associate professor at the University of North Carolina who has long studied tech’s effect on society, argued in early 2018 that YouTube’s recommendations tend to over-distill the preferences of users—pushing them toward the extreme edges of virtually any subject. After watching jogging videos, she found the recommendation algorithm suggested increasingly intense workouts, such as ultra-marathons. Vegetarian videos led to ones on hard-core veganism.

    And in politics, the extremification was unsettling. When Tufekci watched Donald Trump campaign videos, YouTube began to suggest “white supremacist rants” and Holocaust-denial videos; viewing Bernie Sanders and Hillary Clinton speeches led to left-wing conspiracy theories and 9/11 “truthers.”

    At Columbia University, the researcher Jonathan Albright experimentally searched on YouTube for the phrase “crisis actors” in the wake of a major school shooting, and took the “next up” recommendation from the recommendation system. He quickly amassed 9,000 videos, a large percentage of which seemed custom designed to shock, inflame, or mislead, including “rape game jokes, shock reality social experiments, celebrity pedophilia, ‘false flag’ rants, and terror-related conspiracy theories,” as he wrote. Some of it, he figured, was driven by sheer profit motive: post outrageous nonsense, get into the recommendation system, and reap the profit from the clicks. Recommender systems, in other words, may have a bias toward “inflammatory content,” as Tufekci notes.

    Another academic, Renée DiResta, found the same problem with Facebook’s recommendation system for its “Groups.” People who read posts about vaccines were urged to join anti-vaccination groups, and thence to groups devoted to even more unhinged conspiracies like “chemtrails.” The recommendations, DiResta concluded, were “essentially creating this vortex in which conspiratorial ideas can just breed and multiply.”

    Certainly, big-tech firms keep quiet about how their systems work, for fear of being gamed. But since they seem, self-evidently, to favor high emotionality, they’re pretty easy to manipulate, as Siva Vaidhyanathan, a media scholar and author of Antisocial Media, notes.

    “If you’re favoring material that generates attention, the wackier the post, the more it’ll get attention,” he says. “If I were to construct a well-thought-out piece about monetary policy, I might get one or two Likes, from people who are into that. But if I were to post some crack-pot theory about how vaccines cause autism? I’m going to get a tremendous amount of attention—because maybe one or two of my friends are going to say you’re right, and a tremendous number are going to say no, you’re wrong, and here’s the latest study from the CDC proving you wrong. That attention to disprove me only amplifies my message. So that means anything you do to argue against the crazy is counterproductive.” As he concludes: “If you’re an authoritarian or nationalist or a bigot, this is perfect for you.”

    Indeed, this is precisely the problem that recommendation algorithms have visited on countries around the world. In the last US federal election, far-right forces—including the Russian government, via troll farms intent on sowing division in the US and supporting Donald Trump—found algorithmically sorted, highly emotional social media an enormously useful lever. Everywhere from Facebook to YouTube to Reddit and Twitter, hoaxes and conspiracies thrived.

    There was the infamous “Pizzagate” conspiracy theory that Hillary Clinton ran a child-sex ring out of a Washington restaurant; there were memes claiming Clinton had a Democratic staffer murdered. Meanwhile, white-nationalist memes, crafted on relatively lesser-known right-wing sites, used Facebook, Twitter, YouTube, and other social networks to make the jump into the mainstream.

    It didn’t help that social media had made it easier for people to build ideological echo chambers by following and friending primarily those they already agreed with. That made it even less likely that they’d encounter any debunking of a piece of disinfo or a racist meme.

    And it also didn’t help that it was extremely easy for electoral muck stirrers to use “bots”—fake, automated accounts on Twitter or Facebook—to up-vote conspiracy posts, making them seem artificially popular. Far-right operators and Russian troll farms became expert at wielding bots to sucker recommendation algorithms into picking up their posts, bringing them to the attention of an audience much larger than these marginal trolls could manage on their own—and often thence into even larger mainstream-media coverage, via journalists boggling at all these up-voted online memes.

    In the years before the election, the social networks were, it appears, only dimly aware that these coordinated political campaigns were growing. To be sure, Facebook knew that people spread dumb hoaxes on their service. They’d long fielded complaints about that stuff. In January 2015, they released a new spam-reporting option that let users report a News Feed post as being “false news.”

    But before the media coverage of electoral interference hit, the idea that far-right or foreign groups might be actively collaborating to game their systems was not, as former employees told me, widely on the radar. “I don’t think there was a good awareness of it,” Dipayan Ghosh, who worked for Facebook from 2015 to 2017 on privacy and public policy, tells me. As BuzzFeed found, one Facebook engineer had discovered that hyper-partisan right-wing content mills were among the sites getting the most referral traffic from Facebook. But when he posted it to internal employee forums, “There was this general sense of, ‘Yeah, this is pretty crazy, but what do you want us to do about it?’”

    Systems that rewarded extreme expression were troubling in the US, to be sure. They’ve arguably been an even bigger nightmare in parts of the world like India—which has more Facebook users than the US, and where the ruling party began hiring armies of people to write harassing, hate-filled messages about opponents and journalists.

    A virulently anti-Muslim movement has used Facebook to issue theocratic calls to slaughter Muslims. In the Philippines, Rodrigo Duterte has used 500 volunteers and bots to generate false stories (“even the pope admires Duterte”) and harass journalists. Even the ad networks of social media were used by foreign actors looking to monkey-wrench American politics.

    In the spring of 2018, US special counsel Robert Mueller revealed that “Russian entities with various Russian government contracts” had bought social-network ads for months, attacking Hillary Clinton and supporting her rivals Donald Trump and Bernie Sanders. But it wasn’t hard to understand why they’d find this route useful. Google, Facebook, and Twitter’s ad tech is designed specifically to help advertisers microtarget very narrow niches, making it the perfect way to reach the American citizens they wanted to hype up with conspiracies and disinfo: disaffected, angry, and racist white ones, as well as left-wing activists enraged at neoliberalism. Microtargeting is a superb tool for sowing division, because it means each gnarled, pissed-off group can get its own customized message affirming its anger.

    The spectacle appalls Ghosh. After he left Facebook, he wrote a report for New America arguing that “the form of the advertising technology market perfectly suits the function of disinformation operations.” Political misinformation “draws and holds consumer attention, which in turn generates revenue for internet-based content. A successful disinformation campaign delivers a highly responsive audience.”

    Adtech, the engine of rapidly scaling web business, is “the core business model that is causing all the negative externalities that we’ve seen,” he tells me. “The core business model was to make a tremendously compelling and borderline addictive experience, like the Twitter feed or Facebook Messenger or the News Feed.”

    All of these former employees told me the same thing: Nobody who built these systems intended for bad things to happen. No one woke up thinking, I’d like to spend today creating a system that erodes civil society and trust between fellow citizens. But the drivers of big tech—the rush for scale, the “free” world of ads, the compulsive engagement—brought them there anyway.

    “Facebook does not favor hatred,” Vaidhyanathan concludes. “But hatred favors Facebook.”

    __________________________________

    Adapted from Coders by Clive Thompson, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Clive Thompson.

    Clive Thompson is a longtime contributing writer for the New York Times Magazine and a columnist for Wired. He is the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better.