Fable’s AI-generated end-of-year reading summaries veered into bigotry.
I’ve never taken a business school class, but I have to imagine that one of the very first things they teach you is “make sure to not say racist things to your customers,” right after you learn, “this money line should go up” and “regulation = bad.”
But Fable, “the social app for bookworms and bingewatchers,” whiffed big time on the no-bigotry axiom when their Spotify-Wrapped-style summarizations of users’ years in reading started spitting out some really nasty stuff. The culprit seems to be an overconfidence in AI; Fable used OpenAI software to extrude a quippy line or two about users’ reading habits, with disastrous results.
Instead of light and fun summaries, the large language model started serving up stuff like: “Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?” Another user was told that their book choices “make me wonder if you’re ever in the mood for a straight, cis white man’s perspective.”
And it wasn’t all anti-woke slop. One user who “read books about people with disabilities was told their choices ‘could earn an eye-roll from a sloth,’” and other readers reported that the AI spat out inappropriate comments about disability and sexual orientation.
The app, which most people use to track and share what they’re reading, join book clubs, and browse BookToker and celebrity reading lists, quickly turned off the feature and is promising to investigate, according to quotes given to the New York Times. They’ve also promised to remove all features that use AI—good!
This is a completely unforced error, and another great object lesson in why you hire human writers to do your writing and human editors to do your editing. This technology, no matter what the AI marketing people tell you, isn’t good! My colleague Calvin put this eloquently on a recent episode of the Lit Hub podcast: “A lot of the things being offered are AI glosses on extant things, because something that AI simply cannot do is stuff that even the worst writer in your worst creative writing workshop can do, like remember the name of the main character from paragraph to paragraph.”
The only thing these LLMs seem really adept at is generating horrible headlines. Wired’s piece on the Fable debacle pointed out past examples of bigotry burbling out of these AI tools, like how OpenAI’s DALL-E kept generating nonwhite people when prompted to depict “prisoners” and white people when asked for “CEOs”; or how AI search engines repeatedly reshared horrifyingly racist claims about the genetic superiority of white people; or how facial recognition tech seems unable to tell Black people apart. The list goes on and on.
And yet companies like Fable keep rolling the dice with this stuff. Again, I don’t know from business school, but why take the risk? If it’s really as good as we keep being told, let them prove it before deciding to foist it on the public. We don’t have to live like this! Have some self-respect, people!