Why Targeted Ads Are a Disaster for Democracy
Carissa Véliz on Big Data and the Consequences of the
Erosion of Our Privacy
The surveillance economy has gone too far. It has abused our personal data in too many ways, too many times. And the quantity and sensitivity of the data being traded makes this grand experiment too dangerous to be continued. We have to put a stop to the trade in personal data.
The data economy has to go because it is at odds with free, equal, stable, and liberal democracies. We can wait for a truly massive data disaster before we start to protect privacy—anything from a monumental leak of biometric data (consider that, unlike passwords, our faces are not something we can change) to the misuse of personal data for the purposes of genocide—or we can reform the data economy now, before it’s too late.
Personal data has become such a big part of the economy that it might sound unrealistic to pull the plug on it. But once upon a time the idea of recognizing workers’ rights sounded just as outlandish, if not more so. Today, we look at the past and lament the savageness of exploitative labor practices, for instance, during the industrial revolution. Tomorrow, we will look at today and lament the foolishness of the surveillance economy.
Although human beings do not always excel at averting disaster, some examples show that we are capable of coordinating our actions and redirecting an ill-fated path. Ozone in the outermost layers of the atmosphere absorbs most of the sun’s ultraviolet rays. Without an ozone layer to protect us, our eyes, skin, immune system, and genes would get damaged by ultraviolet rays. As the ozone layer thinned in the second half of the twentieth century, the incidence of skin cancers went up. In 1985, a group of scientists published an article in Nature describing the extent of the annual depletion of the ozone layer above the Antarctic. We were headed for disaster.
Only two years later, in 1987, the Montreal Protocol was signed, an international agreement aimed at banning the production and use of ozone-damaging chemicals, including CFCs (chlorofluorocarbons). These chemical compounds were used worldwide in refrigerators, air conditioners, and aerosol cans. What made them attractive was their low toxicity, low flammability (like asbestos), and low reactivity. Unfortunately, the fact that they don’t react with other compounds also makes them dangerous, because it gives them a long lifetime during which they can diffuse into the atmosphere.
Thanks to the opposition of experts and the public to the production and use of CFCs, industry innovated and found alternatives. Ozone holes and thinning have been recovering at a rate of about 1 to 3 per cent a decade since 2000. At this rate, the ozone layer over the northern hemisphere is expected to be completely healed by the 2030s. By 2060, ozone will have made a full comeback worldwide. Phasing out CFCs had a further benefit: it halved global warming.
If we can save the ozone layer, we can save our privacy.
Most of the recommendations in this chapter are aimed at policymakers. Much like saving the ozone layer, ending the data economy requires regulation. There is no way around that. But what will make policymakers act is pressure from you, from us, from the people. It is ultimately up to us to demand an end to the personal data trade, and there is much you can do to help that effort.
Policymakers are often eager to protect us. But they may fear the consequences of taking a bold step—maybe their party colleagues will disagree, maybe voters will not appreciate what is being done on their behalf, maybe it’ll hurt their chances of climbing the political ladder. Politicians derive their power from us. If they know we care about privacy, and that we will withdraw our votes and support if they do not regulate for privacy, you can be sure they will act. They are just waiting for our cue. Our job is to be as well informed as possible so that we know what to ask of our politicians. You can express your convictions by getting in touch with your representatives, voting, and by protecting your privacy, which is the topic of the next and final chapter.
Stop personalized advertising
Let’s go back to where we started. The origin of the dark sides of the data economy is in the development of personalized advertising – therein lies the beginning of a solution. Microtargeted ads that are based on your identity and behavior are not worth the negative consequences they create.
One of the gravest dangers of personalized advertising, as we saw when we discussed the toxicity of personal data, is the possibility that it might corrode political processes. You might think that a more reasonable solution to that problem is to ban political ads, as Twitter did in 2019. But it is not easy to demarcate clearly what is political from what is not. Twitter defines political content as “content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” What about ads denying climate change? Or informing the public about climate change? Or ads against immigration? Or ads that advertise family planning health centers? All these seem political, and could be intimately related to a particular candidate or election, and yet it is not clear that they would or should be banned by Twitter.
A better solution is to ban personalized ads completely. It’s not just that they polarize politics, they are also much more invasive than most people realize. When you see a personalized ad, it doesn’t just mean that a given company knows more about you than your friends do. It’s much worse than that. As the page loads, and many times before you even get a chance to consent (or not) to data collection, competing advertisers bid against each other for the privilege of showing you their ad.
Real-time bidding (RTB) sends your personal data to interested advertisers, often without your permission. Suppose Amazon gets that data and recognizes you as a user who has visited their website before in search of shoes. They might be willing to pay more than others to lure you into buying shoes. And that’s how you get shown an Amazon shoe ad. Unfortunately, in that process, very personal data such as sexual orientation and political affiliation might have been sent to who knows how many possible advertisers without your knowledge or consent. And those companies are holding on to your personal data.
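The mechanics described above can be sketched in miniature. The following is a toy model only, assuming a simplified second-price auction; the bidder names, bidding strategies, and profile fields are hypothetical illustrations, not any real ad exchange’s protocol. What the sketch makes concrete is the key point of the paragraph: every bidder receives the user profile, whether or not it wins the auction.

```python
# Toy sketch of a real-time bidding (RTB) auction (second-price model).
# All names and data fields below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # price offered for this single ad impression

def run_auction(user_profile: dict, bidders: dict) -> tuple[str, float]:
    """Broadcast the user profile to EVERY bidder, collect their bids,
    and charge the winner the second-highest price."""
    bids = [Bid(name, strategy(user_profile)) for name, strategy in bidders.items()]
    bids.sort(key=lambda b: b.amount, reverse=True)
    winner = bids[0].advertiser
    price = bids[1].amount if len(bids) > 1 else bids[0].amount
    return winner, price

# Hypothetical strategies: a shoe retailer bids more for a user it
# recognizes as having browsed for shoes before.
bidders = {
    "shoe_store": lambda u: 2.50 if "shoes" in u.get("browsing_history", []) else 0.10,
    "tractor_co": lambda u: 1.00 if u.get("interest") == "farming" else 0.05,
}

# Note: the full profile (including any sensitive fields it contains)
# is seen by every bidder, including the ones that lose the auction.
profile = {"browsing_history": ["shoes"]}
print(run_auction(profile, bidders))  # → ('shoe_store', 0.05)
```

The second-price rule (the winner pays the runner-up’s bid) is one common auction design; the privacy problem is independent of it, since the profile is shared before any bid is placed.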
The allure of behavioral advertising is understandable. Users don’t want to be exposed to products they have no interest in. If you couldn’t care less about tractors, seeing tractors flash on your screen is a nuisance. In turn, advertisers don’t want to waste their resources showing ads to people who would never buy their product. As the nineteenth-century retailer John Wanamaker famously said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”
Targeted advertising promises to solve both problems by showing customers what they are interested in buying, and making sure advertisers only pay for ads that will increase their sales. That’s the theory, a win-win situation. Unfortunately, the practice looks nothing like the theory. The practice has normalized surveillance. It has led to the spread of fake news and clickbait. It has fractured the public sphere, and it has even compromised our democratic processes. As if all of these externalities weren’t enough, microtargeted advertising doesn’t even do what it says on the tin: it doesn’t show us what we want to see, and it’s not clear that it either allows advertisers to save money or increases their sales.
Advertising is, for the most part, a less scientific endeavor than one might imagine. Marketers often pursue an advertising strategy more out of a gut feeling than because they have hard evidence about what is going to work. In some cases, this intuitive approach has led to prominent businesses wasting millions of pounds.
If targeted ads are much more expensive than non-targeted ads, and the increase in revenues they offer is marginal, we may be losing our privacy for nothing at all. Platforms such as Google and Facebook might be unduly profiting from selling smoke.* A poll by Digiday confirms this suspicion. Of the forty publishing executives who participated in the survey, 45 percent said behavioral ad targeting had not produced any notable benefit, while 23 percent said it had actually caused their ad revenues to decline. In response to the GDPR, the New York Times blocked personalized ads yet did not see ad revenues drop; rather, they rose.
One reason targeted ads may not be very successful in increasing revenue is because people hate them. Do you remember when ads were creative and witty? Ads used to be interesting enough that you could compile them in a one-hour TV show and people would want to watch them. Not anymore. Most ads these days—especially online ads—are unpleasant at best and abhorrent at worst. They are typically ugly, distracting, and intrusive. Contemporary advertising has forgotten the lessons of David Ogilvy, known as the father of advertising, who wrote that “you cannot bore people into buying your product; you can only interest them in buying it.” You cannot (and should not) bully people into buying your product either: “It is easier to sell [to] people with a friendly handshake than by hitting them over the head with a hammer. You should try to charm the consumer,” Ogilvy wrote. In some ways, online ads are worse than being hit by a hammer.
People may especially hate targeted ads because they invade our privacy. Have you ever felt inappropriately watched by your ads? You tell a friend about a sensitive topic—perhaps you’re thinking of changing jobs, or having a baby, or buying a house—and the next ad you see is directly related to what you thought was a private conversation. Unsurprisingly, research suggests that ads are less effective when people consider them creepy. If people know that an ad targeted them by tracking them across the web, or by making inferences about them, they are less likely to engage with it.
Google sensed that people would not appreciate being spied on long ago and adopted a secretive approach, as explained earlier. Do you remember the first time you started to understand how your data was being used by big tech? I’m guessing you didn’t learn about it from a clear message from one of the big platforms. Maybe you started to notice how the ads you saw were related to you but different from those seen by your friends and family. Or perhaps you read about it in an article or a book.
That targeted advertising may not deliver the advantages it was designed to offer makes our loss of privacy seem all the more futile and absurd. But even if targeted ads worked in showing us what we want to see and increasing the revenue of merchants, we would still have good reason to scrap them.
Targeted ads may not work very well for businesses, but they might work quite well for swaying elections, as we’ve seen. A four percent effect in selling a product will not be enough to compensate for the cost of the ad, but that same effect in terms of numbers of voters could very well decide an election.
Personalized ads have normalized hostile uses of tech. They have weaponized marketing by spreading misinformation, and they have shattered and polarized the public sphere. As long as platforms like Facebook use personalized ads, they will remain divisive by exposing us to content that pits us against one another, despite the company’s mission statement to “bring the world closer together.” Facebook will be all the more damaging as long as it dominates online advertising.
Facebook takes publishers away from their own distribution channels and encourages clickbait content. The weakening of the relationship between publishers and their audiences is especially troublesome in the case of newspapers. It makes them dependent on platforms that can change their algorithm and hurt their visibility. Even before Facebook announced a change in its algorithm in 2018 to promote posts by family and friends, as opposed to content produced by publishers, news organizations were already experiencing a dive in Facebook-referred traffic. Anecdotally, some sites reported a 40 percent drop. BuzzFeed had to fire people, and the biggest newspaper in Brazil, Folha de S.Paulo, pulled its content from Facebook.
Banning targeted advertising will boost competition. One of the elements that is preventing competition against Facebook and Google is the amount of personal data they have hoarded. Everyone wants to advertise with them partly because there is an assumption that the more data a platform has, the more effective it will be at personalizing ads. If everyone used contextual advertising instead, platforms would be on a more equal footing. Contextual advertising shows you ads of shoes when you type “shoes” in your search. It doesn’t need to know who you are or where you’ve been. If companies were not allowed to use personal data for ads, that would do away with some of the competitive advantage of Google and Facebook, although the two tech giants would still be advertising behemoths given the number of users they have.
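The contrast with behavioral targeting can be made concrete with a toy sketch. This is an illustration only, assuming a hypothetical ad inventory: the ad is matched to the words of the search query alone, and no user profile, history, or identity is consulted at any point.

```python
# Toy sketch of contextual advertising: the only input is the query text.
# The ad inventory below is a hypothetical illustration.

AD_INVENTORY = {
    "shoes": "Spring sale: leather shoes, 20% off",
    "tractor": "Compact tractors for small farms",
}

def contextual_ads(query: str) -> list[str]:
    """Return ads whose keyword appears in the search query.
    Note what is absent: no user ID, no browsing history, no profile."""
    words = query.lower().split()
    return [ad for keyword, ad in AD_INVENTORY.items() if keyword in words]

print(contextual_ads("buy shoes online"))   # shoe ad, matched on the word alone
print(contextual_ads("weather forecast"))   # no match, no ad
```

Because the match depends only on the query, every platform with the same inventory would serve the same ad, which is why contextual advertising erodes the data-hoarding advantage the paragraph above describes.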
There is a place for advertising in the online world: especially for informative advertising (as opposed to combative or persuasive advertising), which, in David Ogilvy’s view, is the kind of marketing that is both more moral and more profitable. Online advertisers would do well to remember Ogilvy’s adage that “advertising is a business of words.” Perhaps online ads should look more like magazine ads than television ads. Instead of designing noxious ads that both surveil and distract us out of our minds with jumping, flashy images, online ads could strive to be based on words and facts, following Ogilvy’s ideal. Including facts about a product, as opposed to adjectives, and adding good advice, such as how to remove a stain, or a food recipe, are examples of good practices. Online advertisers should offer us information, instead of taking information from us.
* It is worth bearing in mind, though, that even if targeted ads are not worth their cost, big platforms can provide access to such a large audience that it might still be in the advertisers’ interests overall to use those platforms.
From Privacy Is Power: Why and How You Should Take Back Control of Your Data by Carissa Véliz, available now via Melville House.