Irfan was a technology worker in his thirties from Urumqi, the regional capital and commercial hub of northern Xinjiang, where he implemented mass surveillance projects until he left his job in 2015. He fled Xinjiang in 2018 and settled in Turkey.
Irfan had originally struggled to find a job in the poor Urumqi region. Many of the top jobs went to Han Chinese migrants who sidelined the local Uyghurs.
“But I had a friend who had some access to the mayor of Urumqi,” Irfan explained. “He put in a call for me. And then the telecom company contacted me in 2007 with an offer. They needed an IT manager who could help set up one of the first surveillance systems there.”
In his new job, Irfan was granted access to two key locations: the local public security office, where he helped manage the company’s surveillance network, and the telecom company’s network itself.
“Our mandate was to scour the city and plant cameras everywhere we could. My supervisors said the government wanted us to fight crime. I accepted that. I thought it was an honorable mission.”
With a small team, Irfan scouted street corners, alleyways, streets where cars tended to speed, and pockets known for robberies and purse snatchings, as documented in municipal data. His team would then connect cameras to fiber-optic cables that ran back to the public security buildings, where police operatives would survey the city from a control room.
After installing the latest in what quickly became a vast network of cameras, Irfan would return to his control room in a nondescript concrete building, where he and other information technology workers sat in front of large wall-mounted video screens.
Throughout 2010 and 2011, Irfan and his colleagues became increasingly aware of how they could train AI algorithms to recognize faces and behaviors, match them up with a national database of citizens, and help police find perpetrators.
“We had the hardware, we had the cameras, we had almost everything we needed to make this work,” he said. “But we realized that we were missing the key ingredient: we needed more data.
“Otherwise, the facial recognition technology was useless. The AI needed data, in the form of either facial images, social media, criminal records, credit card swipes, or whatever other data resulted from some kind of activity or transaction. Then the system could plough through all the information we fed it and find correlations that humans couldn’t, in a fraction of the time.”
“Why was there so little data?” I asked him.
“State secrecy,” Irfan replied. “The government didn’t have good information on its own country and its people. And so we didn’t have the quality data that we needed to feed into the AI software. Without a good database on all our citizens, we couldn’t match up people’s faces or criminal records that easily. We couldn’t use AI to catch criminals. It was a terrible system.”
Irfan’s team scoured other company offices and the government for data. They came up empty-handed. “The solution didn’t come from the government,” Irfan confirmed. “It came from the corporations.”
For Chinese companies, the ability to control massive digital payment platforms—Chinese consumers didn’t use credit cards, preferring to pay with their mobile apps—naturally led to the urge to expand into credit rankings, making use of all that data on hundreds of millions of people who were making payments every day. What if they could additionally be ranked in categories like “trustworthiness,” based on their online shopping and payment activity? It was like a credit score but more all-encompassing.
In rapidly developing countries like China, millions of people had never had access to traditional credit and hadn’t established a credit score. By gathering mass data on everyone, China could leapfrog the credit hurdle and empower people across the country, giving them access to loans.
But social credit was sinister too. Li Yingyun, an executive at one credit service, Sesame Credit, told the Chinese magazine Caixin that “someone who plays video games for 10 hours a day, for example, would be considered an idle person, and someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.”
People all over China with better social credit scores would qualify for harmless benefits: VIP bookings at hotels and car rentals, and more prominent profiles on dating websites. Those who fell too low on the system could be denied bank loans and apartment rentals.
But China’s social credit system was far from centralized into an Orwellian panopticon that documented everything. China’s impenetrable bureaucracies and office politics stood in the way. A Tencent employee admitted to me that “each division in the company was reluctant to share the data being gathered from each platform we owned like QQ and WeChat. The affiliates and the offices inside those companies all competed with each other.”
Irfan and other technology employees told me they saw their biggest fears confirmed in July 2015. Irfan opened his smartphone to a news article stating that China’s legislature had passed the first in a series of game-changing national security laws, with 154 votes in favor, zero against, and one abstention. The law permitted the government to make use, for law enforcement, of data it had accumulated by various forms of surveillance.
The law’s wording was vague and replete with euphemisms and double-speak. It laid down the ultimate power of the Chinese Communist Party as a “centralized, efficient and authoritative national security leadership system.”
“All citizens of the People’s Republic of China, state authorities, armed forces, political parties, people’s groups, enterprises, public institutions, and other social organizations shall have the responsibility and obligation to maintain national security,” it stated. The New York Times suggested the law was a call to mobilization, a vague collection of principles exalting security as the national priority.
Three months later, Microsoft hit a landmark in the development of image recognition technology, which could be deployed for facial recognition, surveillance, and policing. Led by Dr. Sun Jian, whose research team had spent the previous four years perfecting AI software by adding more and more layers to its neural networks, Microsoft Research Asia now had ResNet: a new AI-powered image recognition system built on a deep neural network of 152 layers. ResNet eclipsed Google and other companies at an industry competition held in 2015, showing its software to be far more accurate in recognizing images than anything else on the market.
Then Dr. Sun, like many of his former colleagues at MRA, jumped ship. He joined an old friend from the Microsoft research office who four years earlier had founded the facial recognition start-up Megvii. Megvii made the facial recognition software Face++, used by the Chinese government and private corporations interested in harvesting the demographics of their customers. The emergent ecosystem, of which Face++ was just a part, was building the technology to do everything: watch people with cameras, draw connections between faces and voices, give police the smartphones and apps they needed to monitor the population, and link all that up to a massive surveillance network processed by AI.
Megvii and its competitor SenseTime, the other big facial recognition developer, were attracting growing attention from around the world.
By 2015, seeing new breakthroughs from Microsoft, Megvii, and SenseTime, the Chinese government wanted a piece of the action, hoping to turn these start-ups into national champions for technology. It launched a $6.5 billion venture fund for start-ups, with much of the new funding coming from private sources. Private venture capital, previously not a feature of the state-run communist system, swelled to unprecedented levels.
The Financial Times reported in January 2015 that China’s private equity and hedge fund industries had ballooned, with thirty-one hundred hedge funds overseeing almost $56 billion, and another twenty-five hundred private equity managers overseeing total funds of $172.5 billion.
China was putting on a new face: technologically sophisticated, benevolent, and eager to show its growing national strength to its people and its companies.
From April 2013 to August 2015, Irfan’s office had been scooping up metadata from WeChat—the usernames of people who sent messages to each other, the durations of phone calls, and the times and dates of those messages. It tracked the origin and recipient of each message, but not its contents. From the metadata, it could extrapolate a great amount of information about people’s social networks. But now Irfan’s office had orders to take its data gathering further, delving into the WeChat messages themselves.
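The social-network inference Irfan describes can be illustrated with a minimal sketch, assuming nothing about the actual system; the usernames and record format here are invented for the example. Given only sender, recipient, and duration metadata, one can tally who talks to whom and how often, without ever reading a message:

```python
from collections import Counter

# Hypothetical metadata records: (sender, recipient, duration_seconds).
# No message content appears anywhere in these records.
records = [
    ("user_a", "user_b", 120),
    ("user_a", "user_b", 45),
    ("user_b", "user_c", 300),
    ("user_a", "user_c", 10),
]

# Tally contact frequency between each unordered pair of users.
pair_counts = Counter(frozenset((sender, recipient)) for sender, recipient, _ in records)

# The most frequent pair hints at the strongest social tie --
# metadata alone is enough for this kind of inference.
strongest = pair_counts.most_common(1)[0]
```

Scaled to hundreds of millions of messages, the same tallying yields a weighted social graph, which is why metadata collection alone reveals so much about people’s networks.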
“The AI software was scanning everything,” he said. “It found correlations we couldn’t see. It even looked for messages that included words like ‘bomb’ and ‘gun.’ Humans didn’t have the time to do this, but the AI software could.”
The surveillance machinery, once plagued with inefficiencies, seemed to come to life. To Irfan, it emerged with what seemed like an ability to think, to see, to perceive and understand, even though in reality the technology was nowhere near advanced enough to be sentient. It couldn’t exhibit general intelligence or perform in more than one programmed area. It could carry out only one task: analyzing what people wrote in their WeChat messages by drawing correlations among key words, like “Koran” and “terrorist,” and anything that seemed related to religion or violence.
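At its simplest, the single-task keyword scanning described above amounts to matching message text against a watchlist of terms. A minimal sketch, with an invented word list and example message (the real system’s terms are described only loosely in the text):

```python
# Hypothetical watchlist; purely illustrative.
WATCHLIST = {"bomb", "gun", "koran", "terrorist"}


def flag_message(text: str) -> set:
    """Return any watchlist terms found in a message (naive word match)."""
    words = {word.strip(".,!?;").lower() for word in text.split()}
    return words & WATCHLIST


hits = flag_message("Meet me at the mosque; bring the Koran.")
```

A matcher this crude flags any mention of a term regardless of context, which is one reason the system’s outputs struck Irfan as random: correlation on key words is not understanding.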
The evolving surveillance system, with the help of AI, would send back random personal information to the surveillance workers in the control room.
A young woman enjoyed going to the movies. A young father maybe had a drinking problem. One man, through some mysterious pattern in his text messages, showed the hallmark signs of a thief. Others were possible terrorists.
“We didn’t understand how the AI came to its conclusions,” Irfan admitted. “A lot of it was random and worrisome. I had no idea whether the suspicious people named by the AI were actually suspicious.
“Once we saw the merging of big data and the AI, that’s when everything changed. I got an order from my manager to move to another department. The other guys didn’t want me there. They were Han Chinese and I was one of the [few] Uyghurs in the office. And I didn’t feel comfortable. The Han employees maybe thought I would leak the information to help my fellow Uyghurs.”
By the fall of 2015, around the time of the passage of the national security law, whenever Irfan tried to greet old coworkers, or talk during company meetings, he started to get the cold shoulder. People avoided eye contact and didn’t want to be seen with him.
“Then they started blocking me from going into sensitive rooms. They told me to wait outside.”
And so Irfan was removed from his position and relegated to a minor role as a propagandist at a TV station. “My job was broadcasting videos on state television about how Uyghurs were living a happy life. My bosses even wanted to distribute their propaganda movies around the world, but it never happened.”
Even though Irfan no longer had access to the sensitive workings of the surveillance program, he had little reason to believe the government would stop its mass data-gathering project. How else could the government monitor, surveil, and separate its minorities from the majority?
Inevitably, they would try to find ways that went beyond social media and messaging apps alone.
Adapted from The Perfect Police State: An Undercover Odyssey into China’s Terrifying Surveillance Dystopia of the Future by Geoffrey Cain. Copyright © 2021. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.