Apparently, comparing someone's writing to AI is now a "classist slur"; and other news.
Another wild week for the makers of the popular predictive chatbots and large generative pre-trained transformer software. Here are a few of the AI stories that came across my desk this week.
Baldacci Burns Businesses
The fallout around the discovery that the president was likely best friends with an accused sex criminal feels like it could be a plot point in one of David Baldacci's legal thrillers. But the author wasn't in DC last week to research his next book; he was in town to testify before the Senate Judiciary Subcommittee on Crime and Counterterrorism.
"I truly felt like someone had backed up a truck to my imagination and stolen everything I'd ever created," Baldacci told a Senate committee at a hearing titled "Too Big to Prosecute? Examining the AI Industry's Mass Ingestion of Copyrighted Works for AI Training." Baldacci, the bestselling author of more than 50 legal thrillers and YA books, was invited to speak as a member of the Authors Guild about the repercussions of tech companies' rapacious theft of books.
Baldacci woke up to the dangers of this tech when his son used ChatGPT to generate a plot summary for a book in the style of David Baldacci. The elder Baldacci was shocked that the program pulled "elements of pretty much every book I'd ever written" into its output. Since then, he's been speaking out about the ways this tech is harming authors and publishers.
The bipartisan support for what Baldacci and the Authors Guild are saying gives me some hope, but with Trump and his hogmen so in bed with tech, I’m not holding my breath for meaningful oversight or regulation anytime soon.
Court Condones Case
This hearing took place just before a U.S. District Judge ruled that a class action suit against Anthropic, maker of the Claude LLM, can proceed. The case centers on the same complaints Baldacci made before Congress: that tech companies' piracy of authors' work to train their large language models is illegal and damaging.
Unfortunately, it’s been a mixed bag from this Northern District of California judge, who ruled last month that Anthropic’s use of writers’ work was fair use because it was “exceedingly transformative.” This logic doesn’t pass the smell test for me, especially since the judge also said that downloading books from piracy sites is not fair use.
Part of the quibbling here is that the classes in this suit are sorted into a "Pirated Books Class" and a "Scanned Books Class," and some of the judge's decision-making is about how to define those classes and which books belong in which group. Publishers Weekly goes into much more detail on the judge's thinking and the Authors Guild's response, but I'm personally hoping for some justice and remuneration for authors soon.
Scholar Surmises Slander
If you're like me, maybe you've had some uncomfortable conversations with people who use AI, and been accused of being mean or backward for telling them to cut it out. According to at least one academic, telling people they're bad for using this software is becoming such a phenomenon that it needs scholarly study to untangle its social and cultural effects.
A paper submitted to a conference on Human Factors in Computing Systems argues that phrases like "AI could have written this" amount to classist slurs in knowledge work fields. Whether meant as a joke, condescension, or dismissal, this kind of "AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work." In other words, telling people their work looks bad and AI-generated is gatekeeping laptop jobs. I haven't read the whole paper, so I'll just say it seems like a big claim.
But I’m also not surprised that people are upset to hear that their emails sound like something a spreadsheet coughed up.
Microsoft Misbehaves More
Finally (and thank god, because my alliteration tank is running close to empty), Emily Atkin's excellent "Heated" has an interview with two former Microsoft engineers who quit after repeated climate hypocrisy from the company. The engineers were hired to create "tools to make it easier for customers to use Microsoft's AI in an ethical and environmentally sustainable way," but found themselves repeatedly ignored and undercut. It's worth reading the full interview, which offers great insight into how these tech companies talk about and use these tools. You might not be surprised that this is another case of hypocrisy driven by a desire for profit.
Will and Holly Alpine, the engineers Atkin spoke to, were initially excited to help shepherd AI tech in a more responsible direction. And at first, Microsoft was saying the right things about its commitment to environmental concerns and its desire to go carbon negative.
But the engineers and other staff discovered that Microsoft was using its AI tools to help oil companies. The tech giant helped ExxonMobil increase oil extraction by 50,000 barrels per day and struck a deal with Chevron to "dramatically accelerate the speed with which [they] can analyze data to generate new exploration opportunities." When confronted, Microsoft made more promises and dismissed concerns. The Alpines tried recommending ways to meet the goals the company was publicly claiming to want, but were again stymied or disregarded. The hypocrisy eventually drove the Alpines from the company. They've since founded Enabled Emissions, an organization to hold "Big Tech accountable for accelerating fossil fuel production."
I shouldn't be surprised by the hypocrisy of a company making money by stealing others' work, or promising to help the planet while turbocharging oil production. These accusations are never taken seriously by AI boosters, since any steps backward now will supposedly be surpassed by all the steps forward once we arrive in some glittering, just-over-the-horizon AI utopia. This magical thinking that "AI is going to fix everything" somewhere down the line ignores the fact that betting on AI is a huge gamble backed by tenuous evidence. And it implies that climate change is a technical problem rather than one stemming from a lack of political will. We have all the technology, science, and resources we need to curtail this problem right now. Climate change isn't something we need to design a new widget for; it's an organizing problem.
But all of these stories are also a reminder that these products are built and managed by people and for-profit companies. This software isn't a divine clock, or a flawless arbiter that floats above human biases. AI is neither artificial nor intelligent. It's a human-engineered product built to operate in specific ways.
Don’t let the promised wonders and breathless marketing for this tech distract from the very real material concerns about its creation and use. Something built on stolen work, maintained with vast amounts of energy and water, and used to increase fossil fuel extraction and guide bombs is worth criticizing. If it’s bad classism to say that a melted AI image of Trump and Epstein holding hands looks dumb, then I say it’s good classism to say that rich people and rich companies shouldn’t be entitled to take and destroy whatever they want just to raise their stock prices.