Why Big Tech’s Abuse of Artificial Intelligence Doesn’t Need to Be Inevitable
Maximilian Kasy on the Urgent Need to Democratize the Technology That Shapes Our Lives
The story of humans versus machines
In the classic 1968 film 2001: A Space Odyssey, a spaceship headed to Jupiter is equipped with an onboard computer named HAL 9000. Over time, this computer becomes a deadly antagonist of the astronauts on the ship. After an apparent computer error, several crew members try to switch HAL off. In the name of safeguarding the spaceship’s secret mission, HAL kills them. Eventually, however, the astronaut Dave Bowman succeeds in deactivating HAL, ignoring the computer’s desperate pleas to stop.
In The Terminator (1984), the conflict between humans and a self-preserving AI is taken up a notch: it becomes a question of survival for the entire human species.
Many of these same tropes appear in movies such as The Matrix (1999), I, Robot (2004), Transcendence (2014), Ex Machina (2015), M3gan (2022), The Creator (2023), and others. They reflect a particular fear of AI, one amplified in this century by prominent figures from the tech industry: that we are headed toward a conflict between humans and machines. Elon Musk argued at the Bletchley Park AI summit that AI is “one of the biggest threats to humanity” and that, for the first time, we are faced “with something that’s going to be far more intelligent than us.” Sam Altman, of OpenAI, has claimed that generative AI could bring about the end of human civilization, and that AI poses a risk of extinction on a par with nuclear warfare and global pandemics.
In academia, this story has also found some resonance. The philosopher Nick Bostrom has written extensively about the existential risks of AI for humanity, and the possibility of an intelligence explosion, where AI keeps improving itself once it has reached human level. The computer scientist Stuart Russell, together with his collaborators at the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, has emphasized the so-called alignment problem—that is, the problem of making machine objectives align with human objectives.
Another dystopian story, almost equally scary, holds that AI won’t kill us but that it will render human workers obsolete, inevitably leading to mass unemployment and social unrest. A 2023 Goldman Sachs report, for instance, claimed that generative AI might replace three hundred million full-time workers in Europe and the United States.
The story told in Hollywood and in Silicon Valley tends to feature a heroic conflict between a man (it is usually a man) and a machine—Dave Bowman and HAL 9000 in 2001: A Space Odyssey, Kyle Reese and the Terminator, Nathan and Ava in Ex Machina, or Sam Altman and the AI-caused extinction of humanity. The academic version of the story, as told by computer scientists, also tends to feature a man and a machine, but here the machine suffers from a value-alignment problem (that is, a mis-specified objective) or from a bias relative to its objective.
What the old story misses
Here is one of the key issues that the story of man versus machine misses: Technology is not fate. Just as people make technology, people decide how it is used and what interests it serves. These decisions are made over and over again as AI is developed and deployed. AI is, furthermore, ultimately not that complicated. How AI works can be understood by anyone. The real conflict is not between a human and a machine but between different members of society. And the answer to the various risks and harms of AI is public control of AI objectives through democratic means.
AI is, at its core, automated decision-making using optimization. That means that AI algorithms are designed to make some measurable objective as large as possible. Such algorithms might, for example, maximize the number of times that someone clicks on an ad. AI therefore requires that somebody pick the objective—the reward—that is being optimized. Somebody must, quite literally, type into their computer: “This is the measure of reward that we care about.”
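To make this concrete, here is a minimal, purely illustrative sketch in Python (it is not taken from the book): a toy ad-serving system in which the reward, a click, is literally typed in by whoever builds it, and a simple epsilon-greedy loop then learns to show whichever ad earns the most of that reward. The ad names and click rates are invented for the example.

```python
import random

# Somebody literally types in the objective: reward = 1 if the user clicks, else 0.
def reward(clicked: bool) -> float:
    return 1.0 if clicked else 0.0

# Hypothetical ads with click probabilities that are unknown to the algorithm.
TRUE_CLICK_RATES = {"ad_A": 0.02, "ad_B": 0.05, "ad_C": 0.11}

counts = {ad: 0 for ad in TRUE_CLICK_RATES}     # how often each ad was shown
totals = {ad: 0.0 for ad in TRUE_CLICK_RATES}   # total reward each ad earned

def choose_ad(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly show the ad with the highest estimated reward,
    occasionally try another one at random."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_CLICK_RATES))
    return max(
        counts,
        key=lambda ad: totals[ad] / counts[ad] if counts[ad] else float("inf"),
    )

for _ in range(100_000):                        # simulated user visits
    ad = choose_ad()
    clicked = random.random() < TRUE_CLICK_RATES[ad]
    counts[ad] += 1
    totals[ad] += reward(clicked)               # the chosen objective drives everything

for ad in TRUE_CLICK_RATES:
    print(ad, counts[ad], round(totals[ad] / max(counts[ad], 1), 3))
```

Change the single line that defines the reward and the very same loop will chase a different goal; nothing about the optimization machinery itself has to change.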
The important question, then, is who gets to pick the objectives of AI systems. We live in a capitalist society, and in such a society the objectives of AI are typically determined by the owners of capital. The owners of capital control the means of prediction that are needed for building AI—data, computational infrastructure, technical expertise, and energy. More generally, the objectives of AI are determined by those with social power, whether that is in the criminal justice system, in education, in medicine, or in the secret police forces of autocratic surveillance states.
One domain in which AI is deployed in society is the workplace. AI is used in robotized Amazon warehouses, in the algorithmic management of Uber drivers, and in the screening of job candidates by large companies. AI is also used in consequential domains outside the workplace, including the filtering and selection of Facebook feeds and of Google search results, where the objective is to maximize ad clicks. A third domain is predictive policing and the incarceration of defendants awaiting trial based on the prediction of crimes that they have not committed yet. Perhaps most devastatingly, AI is also deployed in warfare; it was, for instance, used to decide which family homes to bomb in Gaza beginning in 2023.
Of course, a good number of researchers and critics have warned of the dangers of using AI in these consequential domains. Joy Buolamwini, a computer scientist at the MIT Media Lab, has written extensively on the dangers of inaccurate and racially biased facial recognition systems. Ruha Benjamin, a sociologist at Princeton, has emphasized that AI can replicate and reinforce existing social inequalities in domains such as education, employment, criminal justice, and health care. In a similar vein, Timnit Gebru, a computer scientist, warned during her time at Google of the dangers of large language models acting as stochastic parrots, which repeat language patterns without understanding and in doing so replicate the biases embedded in their training data. Meredith Whittaker, currently the president of the Signal Foundation, has criticized the political economy of the tech industry, where AI is used by powerful actors in ways that can entrench marginalization. Kate Crawford, a professor at the University of Southern California and co-founder of the AI Now Institute, has emphasized the nature of AI as an extractive and exploitative industry.
Amid these overlapping critiques, each focused on a different aspect and pitfall of AI, it is challenging to formulate a systematic way of thinking about AI in society. One possible unifying perspective is provided by computer science. Computer scientists are trained to view most problems as optimization problems. In this context, optimization involves finding the decision that makes a given reward as large as possible, given limited computational resources and limited data.
The computer science perspective has informed much of the public discourse around AI safety and AI ethics, especially regarding topics such as fairness or value alignment: “If there is something wrong, then there must be an optimization error.” In this view, the issue is simply that an action was picked that failed to maximize the specified objective. This perspective does not get to the heart of the problem in most cases, however, because it doesn’t engage with the choice of the objective itself.
I argue that instead of optimization errors, it is conflicts of interest over the control of AI objectives that are the central issue. When AI causes human harm, the problem is usually not that an algorithm did not perfectly optimize. The problem is that the objective optimized by the algorithm is good for the people who control the means of prediction—people such as Jeff Bezos, founder and former CEO of Amazon, and Mark Zuckerberg, founder and CEO of Meta—but not good for the rest of society.
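Continuing the toy example from above (again invented, and not the author’s own code): the tiny ranking function below optimizes flawlessly in both runs. There is no optimization error anywhere; the only thing that differs is which objective, the platform owner’s or an alternative the public might choose, gets handed to it.

```python
# Invented feed items: (name, expected clicks, users' reported wellbeing).
FEED_ITEMS = [
    ("outrage_post",      0.30, -0.8),
    ("friend_update",     0.10,  0.5),
    ("local_news_report", 0.15,  0.3),
]

def rank(items, objective):
    """A 'perfect' optimizer: sort the feed by whatever objective it is given."""
    return sorted(items, key=objective, reverse=True)

# Objective chosen by the platform owner: maximize expected clicks (ad revenue).
owner_objective = lambda item: item[1]

# An alternative objective: maximize users' reported wellbeing.
public_objective = lambda item: item[2]

print([name for name, *_ in rank(FEED_ITEMS, owner_objective)])
# -> ['outrage_post', 'local_news_report', 'friend_update']
print([name for name, *_ in rank(FEED_ITEMS, public_objective)])
# -> ['friend_update', 'local_news_report', 'outrage_post']
```

In this sketch the harmful ranking in the first run is not a bug; it is the faithful execution of the objective that somebody chose.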
This understanding changes how we should think about possible solutions to the problems of AI. How do we address AI ethics and AI safety if the underlying problems lie with the parties that set the objectives for AI? How do we choose these objectives in a way that serves the public rather than just a powerful minority? The answer, I argue, can only be democratic control. Democratic control is not limited to democratically elected national governments; collective democratic decision-making can exist at many levels, including the workplace, the nation state, and the global level.
The challenge, of course, is that democracy is difficult. The democratic control of a new technology like AI requires public deliberation, and such public deliberation might seem impossible considering the view held by many (and reinforced by the tech industry) that AI is very complicated.
But despite all the technical jargon, and despite the breathless chase after the newest innovations, the basic ideas of AI are not that complicated, and not that new, and they can be understood by all of us. No matter who you are, don’t let anyone tell you that you are not the “type” to understand AI.
__________________________________

From The Means of Prediction: How AI Really Works (and Who Benefits) by Maximilian Kasy. Copyright © 2025. Available from University of Chicago Press.
Maximilian Kasy
Maximilian Kasy is professor of economics at the University of Oxford; previously he was an associate professor of economics at Harvard University. His research focuses on machine learning and the social impact of AI.