Disinformation and fake news have become a prominent feature of modern elections. Two days before the final round of the 2017 French presidential election, an online disinformation campaign took off around the hashtag #MacronLeaks. Computer hackers had broken into the Emmanuel Macron campaign’s email accounts and posted the contents online. On Twitter, the #MacronLeaks campaign aimed to make sure that people knew about the hack and to sow maximum uncertainty about the contents of the leak. As France went to the polls, with Macron in a head-to-head vote against far-right candidate Marine Le Pen, the campaigners wanted that uncertainty planted firmly in voters’ minds.
The #MacronLeaks campaign was largely run using bots: accounts that run a computer script to share information on a massive scale. Bots are potentially good at spreading fake news, because it is easy to create lots of them and they will say whatever you tell them to. They are fake ants, telling Google, Twitter, and Facebook that something new and interesting has arrived on the Internet. The bots’ creators hope to generate sufficient interest in the hashtag to lift it to the front page of Twitter, where it appears for all users to click on.
Emilio Ferrara, based at the University of Southern California, decided to track down the bots and find out what they were up to. First, he measured the “personality” of the Twitter accounts posting about the French election. He employed regression techniques to automatically classify whether a user was a human or a bot. He told me he found that users with large numbers of posts and followers, and whose tweets had been favorited by other users, were much more likely to be humans. Less popular and less interactive Twitter users were more likely to be bots. His model was sufficiently accurate that, when presented with two users, one a bot and one a human, it could pick out the bot in 89 percent of cases.
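That “89 percent of cases” figure has a standard statistical reading: it is the area under the ROC curve of the classifier. As a purely illustrative sketch, and not Emilio’s actual model (his feature set was far richer), here is roughly how such a bot-versus-human regression classifier could be set up in Python with scikit-learn, using invented account features such as post counts, followers, and favorites.

```python
# Illustrative sketch only: a logistic-regression classifier separating bots
# from humans using a few simple, made-up account features (post count,
# follower count, times favorited). This is not Ferrara's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic accounts: humans (label 0) tend to have more posts, followers,
# and favorites than bots (label 1), echoing the pattern described above.
humans = rng.lognormal(mean=[5.0, 6.0, 4.0], sigma=1.0, size=(n, 3))
bots = rng.lognormal(mean=[3.5, 3.0, 1.0], sigma=1.0, size=(n, 3))
X = np.vstack([humans, bots])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    np.log1p(X), y, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# AUC is the probability that a randomly chosen bot scores higher than a
# randomly chosen human: the "pick two users, spot the bot" success rate.
print("AUC:", roc_auc_score(y_test, scores))
```

On this synthetic data the two groups are separable by construction, so the score is high; in practice the hard part is engineering features that actually distinguish genuine accounts from automated ones.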
The #MacronLeaks Twitter bot army certainly had an impact. In the two days before the election, around 10 percent of election-related tweets were about the leaks. This was at a time of an election news blackout in France, under which newspapers and TV don’t report on politics until after the polls have closed. The hashtag #MacronLeaks made it on to the Twitter trending lists, which meant real users saw it on their screens and clicked on it to find out more. The bot army had come into action at just the right moment.
The problem for the bots’ creators was that they were reaching a very specific audience. Most of the messages about #MacronLeaks were sent in English rather than French. The two most common terms in these tweets were “Trump” and “MAGA,” referring to Trump’s election slogan of “Make America Great Again.” The vast majority of the human users sharing and interacting with the bots were alt-right sympathizers based in the US and not people who were directly involved with (or eligible to vote in) the French election.
Another important observation Emilio made was that the tweets about #MacronLeaks had a very limited vocabulary. They repeated the same message over and over again, without broadening out the discussion. They tended to contain links to alt-right US websites, such as The Gateway Pundit and Breitbart, and to the profit-making sites that had spread fake news during the US election.
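One crude way to quantify a “very limited vocabulary” is the type-token ratio: the number of distinct words divided by the total number of words in a collection of tweets. The sketch below is illustrative only, using invented tweets rather than Emilio’s data, but it shows how a campaign that recycles one message scores far lower than a varied discussion.

```python
# Illustrative sketch: measure how repetitive a set of tweets is by counting
# distinct words relative to total words (type-token ratio). The tweets here
# are invented; a bot campaign recycling one message scores far lower than
# a genuine discussion does.
import re

def type_token_ratio(tweets):
    words = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    return len(set(words)) / len(words) if words else 0.0

bot_like = ["#MacronLeaks the truth is out! read the emails"] * 50
human_like = [
    "Interesting thread on turnout in the second round",
    "Le Pen and Macron debate tonight, expecting fireworks",
    "Reading up on both candidates' economic programmes before voting",
]

print("bot-like campaign:", round(type_token_ratio(bot_like), 3))
print("varied discussion:", round(type_token_ratio(human_like), 3))
```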
“The people least likely to believe fake news or conspiracies are those voters who are undecided; exactly the people who are going to decide the election outcome.”
Ultimately, the effect of the bots on real French voters was, at most, very minor. Macron won the election with 66 percent of the vote.
Emilio’s results are similar to those found by Hunt Allcott and Matthew Gentzkow about fake news in the US presidential election. Hunt and Matthew found that only 8 percent of people in their study believed fake news stories. Moreover, the people who did believe these stories tended to already hold political beliefs that aligned with the sentiment of the fake news. Republican sympathizers would tend to believe that “The Clinton Foundation bought $137 million in illegal arms,” and Democrats would tend to believe “Ireland [will be] accepting Americans requesting political asylum from a Donald Trump presidency.” These results are further supported by earlier research showing that Republicans are more likely to believe that Barack Obama was born outside the US and that Democrats are more likely to believe that George W. Bush knew about the 9/11 attacks before they happened. The people least likely to believe fake news or conspiracies are those voters who are undecided; exactly the people who are going to decide the election outcome.
There is an irony, similar to that of the Mandela effect, about the articles on bubbles, filters, and fake news that many newspapers and news magazines ran throughout 2017. These stories are written within a bubble. They play on fears, mention Donald Trump, drop references to Cambridge Analytica, criticize Facebook, and make Google sound scary.
The YouTubers that my kids watch often discuss using “meta-conversations” to increase views. Vloggers like Dan & Phil take it up three or four levels, analyzing how they make fun of themselves for being obsessed with the fame and fortune that made them famous in the first place. The same joke applies to the media stories about bubbles and fake news, except many of their authors fail to see the ultimate irony in what has happened. Articles about the dangers of bubbles rise to the top of a Google search for phrases like “ban Trump from Twitter” or “Trump supporters stuck in bubble.” But very few of these articles get to the bottom of how online communication works. The fake news story ran and ran, generating its own click juice, without anyone looking seriously at the data.
There is no concrete evidence that the spread of fake news changes the course of elections, nor is there concrete evidence that the increase in bots has negatively impacted how people discuss politics. We don’t live in a post-truth world. Bob Huckfeldt’s research on political discussions shows that our hobbies and interests allow other people’s opinions to seep into our bubbles. Emilio Ferrara’s study shows that, for now at least, the bots are talking to each other and to a small group of alt-right Americans who want to listen. Hunt and Matthew have shown that following and sharing fake news is an activity for the few, rather than the many. And no one can remember the stories properly anyway. Lada Adamic’s research shows that conservatives on Facebook were exposed, through shares by their friends and through news selected for them by Facebook’s news feed, to only slightly less liberal-leaning content than if they had chosen their news totally at random.
There is some rather weak evidence that a social-media bubble prevented US liberals from seeing what was going on in their society during the 2016 presidential election. In both Lada’s blogosphere and Facebook studies, liberals experienced less diversity of opinion than conservatives. It is a minor effect, though, and even my cheap shot at some “meta-meta” liberal journalism isn’t entirely justified. Many journalists continue to hold Google and Facebook to account, and push them to improve further. Liberals might be slightly more susceptible than conservatives to echo chambers, but this is probably because they use the Internet more.
I felt that I had come full circle. When I started looking at the algorithms that influence us online, I was enamored with the collective wisdom created by PredictIt. But then I found out that “also liked” dominated our online interactions, leading to runaway feedback and alternative worlds. In situations where commercial incentives are involved, Google’s algorithms can become overloaded with useless information generated by black hats trying to divert traffic through their affiliate sites on the way to Amazon. At that point, I became disillusioned with Google, and Facebook didn’t seem to be helping with its endless attempts to filter the information we see.
“While it is true, in theory, that a small investment can grow through ‘also liking’ and social contagion, there is no credible evidence that this happened in the case of the Russian adverts.”
Why was the situation different in politics? Why aren’t the black hats of fake news having the same effect as the black hats of CCTV cameras?
The first reason is that the incentives are not the same. The Macedonian teenagers spreading fake news have very limited income sources. Much of their advertising income comes from ads for Trump memorabilia, for which, in comparison with all the products on Amazon, there is a minuscule market. The income of the most successful fake-news-generating Macedonian teenager was (according to the teenagers themselves) at the very most $4,000 per month, but only during the four months leading up to Trump’s election. In the long term, CCTV Simon’s site is the much better investment if you want to become a black hat entrepreneur.
The second reason black hats aren’t taking over politics is that we care a lot more about politics than we care about which brand of CCTV camera we buy. We even care about politics more than we care about Jake Paul’s fake beef with RiceGum. There may well be increased skepticism about the media and politicians, but there is no evidence that people, young or old, are any less engaged in political questions. On the contrary, young people use online communication to launch campaigns on specific issues—such as environmentalism, vegetarianism, gay rights, sexism, and sexual harassment—and to organize real-life demonstrations. While very few people are actively blogging about CCTV cameras or widescreen TVs, there are lots of very sincere people writing about politics. On the left, campaigns like Momentum within the Labour Party in the UK and Bernie Sanders’s 2016 presidential campaign were built through online communities. On the right, nationalists organize protests and share their opinions online. You or I might not agree with all of these opinions, and the bullying and abuse that occur on Twitter are unacceptable, but most of the posts that individual people make relate to how they genuinely feel. The vast quantity of these posts means that we can’t help but be subjected to myriad contrasting opinions.
That is not to say that we should be complacent about the potential dangers. It is plausible that a state-organized black hat campaign, for example by the Russian government, could mobilize sufficient resources to influence an election. There is little doubt that some Russia-backed organizations tried to do exactly that during the last US presidential election, spending hundreds of thousands of dollars on adverts on Facebook and Twitter. And, at the time of writing, a special prosecutor in the US is investigating the Trump campaign for participating in this operation.
Irrespective of Trump’s potential involvement, these campaigns haven’t, as yet, created the click juice that allows them to be major influencers. Over a billion dollars is spent on presidential campaigns by the candidates, dwarfing the Russia-backed investment. While it is true, in theory, that a small investment can grow through “also liking” and social contagion, there is no credible evidence that this happened in the case of the Russian adverts.
There are problems with Google Search, Facebook’s filtering, and Twitter’s trending. But we also have to remember that these are absolutely amazing tools. Occasionally, a search will lift incorrect and offensive information to its front page. We might not like it, but we also have to realize that it is unavoidable: it is an inbuilt limitation of the way Google works, through a combination of “also liking” and filtering. Just as ants going around in circles is a side effect of their amazing ability to collect vast quantities of food, Google’s search mistakes are an inbuilt limitation of its amazing ability to collect and present us with information.
The biggest limitation of the algorithms currently used by Google, Facebook, and Twitter is that they don’t properly understand the meaning of the information we are sharing with each other. This is why they continue to be fooled by CCTV Simon’s site, which contains original, grammatically correct, but ultimately useless text. These companies would like algorithms that can monitor our posts and, by understanding their true meaning, automatically decide whether a post is appropriate to share and who it should be shared with.
It is exactly this question, of getting algorithms to understand what we are talking about, that all of these companies are working on. Their aim is to reduce their reliance on human moderators. To do that, Google, Microsoft, and Facebook want their future algorithms to become more like us.
__________________________________
From Outnumbered: From Facebook and Google to Fake News and Filter-bubbles – The Algorithms That Control Our Lives (featuring Cambridge Analytica). Used with permission of Bloomsbury. Copyright © 2018 by David Sumpter.