Art has long been claimed as a final frontier for automation—a field seen as so ineluctably human that AI may never master it. But as robots paint self-portraits, machines overtake industries, and natural language processors write New York Times columns, this long-held belief could be on the way out.
Computational literature or electronic literature—that is, literature that makes integral use of, or is generated by, digital technology—is hardly new. Alison Knowles used the programming language FORTRAN to write poems in 1967, and a novel allegedly written by a computer was printed as early as 1983. Universities have had digital language arts departments since at least the ’90s. One could even consider the mathematics-inflected experiments of Oulipo a precursor to computational literature, and they’re experiments that computers have made more straightforward. Today, indie publishers offer remote residencies in automated writing, and organizations like the Electronic Literature Organization and the Red de Literatura Electrónica Latinoamericana hold events across the world. NaNoGenMo—National Novel Generation Month—just concluded its sixth year this April.
As technology advances, headlines express wonder at books co-written by AI advancing in literary competitions and automated “mournful” poetry inspired by romance novels—with such resonant lines as “okay, fine. yes, right here. no, not right now” and “i wanted to kill him. i started to cry.” We can read neo-Shakespeare (“And the sky is not bright to behold yet: / Thou hast not a thousand days to tell me thou art beautiful.”), and Elizabeth Bishop and Kafka revised by a machine. One can purchase sci-fi novels composed, designed, blurbed, and priced by AI. Google’s easy-to-use Verse by Verse promises users an “AI-powered muse that helps you compose poetry inspired by classic American poets.” If many of these examples feel gimmicky, it’s because they are. However, that doesn’t preclude AI literature that, in the words of poet, publisher, and MIT professor Nick Montfort, “challenges the way [one] read[s] and offers new ways to think about language, literature, and computation.”
Allison Parrish, a professor in NYU’s Interactive Telecommunications Program, is developing methods to think about both language and computation using algorithms and AI. For a recent project, Compasses (2019), Parrish created a machine learning model for phonetic similarity comprising a “speller” and a “sounder-outer.” Together, when fed an input of words chosen by Parrish, the two tools produced a numerical vector called a “hidden state.” By manipulating this value mathematically, she could uncover existing words, or produce new ones, in the zones between the four corner words she chose. Diamond formations with more traditionally related words on each corner—earth, water, air, fire, for instance—permute inwardly into new forms. Familiar words appear in unfamiliar contexts, like hair or ear in the example above; pronounceable unwords—warth, wair, feir—also grow from the language-space Parrish creates. In the gaps between these associations of sounds and meanings, novelty explodes, the machine’s strange relationship to English perhaps estranging our own.
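Parrish’s actual speller and sounder-outer are learned neural models, but the core move, interpolating between word vectors and decoding each in-between point to its nearest known word, can be illustrated with a toy sketch. Everything below (the two-dimensional coordinates, the tiny vocabulary) is invented for demonstration and is not Parrish’s model:

```python
# Toy sketch of the interpolation idea behind Compasses: place four
# "corner" words in a small vector space, walk the line between two of
# them, and decode each point back to the nearest word in a vocabulary.
import math

# Hypothetical 2-D "phonetic" coordinates for a tiny vocabulary.
VECTORS = {
    "earth": (0.0, 0.0),
    "water": (1.0, 0.0),
    "air":   (0.0, 1.0),
    "fire":  (1.0, 1.0),
    "hair":  (0.3, 0.9),
    "ear":   (0.1, 0.4),
}

def lerp2(a, b, t):
    """Linear interpolation between two 2-D points."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(2))

def nearest_word(point):
    """Decode a point back to the closest word in the vocabulary."""
    return min(VECTORS, key=lambda w: math.dist(VECTORS[w], point))

def interpolate(corner_a, corner_b, steps=5):
    """Words found along the line between two corner words."""
    a, b = VECTORS[corner_a], VECTORS[corner_b]
    return [nearest_word(lerp2(a, b, i / (steps - 1))) for i in range(steps)]

print(interpolate("earth", "fire"))
```

With these made-up coordinates, the walk from earth to fire passes through ear on its way: a small analogue of how familiar words surface in the zones between Parrish’s corners.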
Ross Goodwin’s 1 the Road (2018) is often described as one of the first novels written completely by AI. To read it like a standard novel wouldn’t get one far, though whether that says more about this text or the traditional novel could be debated. Much of the book comprises timestamps, location data, mentions of businesses and billboards and barns—all information collected from Foursquare data, a camera, GPS, and other inputs. But the computer also generated characters: the painter, the children. There is dialogue; there are tears. There are some evocative, if confused, descriptions: “The sky is blue, the bathroom door and the beam of the car ride high up in the sun. Even the water shows the sun” or “A light on the road was the size of a door, and the wind was still so strong that the sun struck the bank. Trees in the background came from the streets, and the sound of the door was falling in the distance.” There is a non-sequitur reference to a Nazi and dark lines like “35.416002034 N, -77.999832991 W, at 164.85892916 feet above sea level, at 0.0 miles per hour, in the distance, the prostitutes stand as an artist seen in the parking lot with its submissive characters and servants.”
K Allado-McDowell, who in their role with the Artist + Machine Intelligence program at Google supported 1 the Road, argued in their introduction to the text that 1 the Road represented a kind of late capitalist literary road trip, where instead of writing under the influence of amphetamines or LSD, the machine tripped on an “automated graphomania,” evincing what they more recently described to me as a “dark, normcore-cyberpunk experience.”
To say 1 the Road was entirely written by AI is a bit disingenuous. Not because it wasn’t machine-generated, but rather because Goodwin made curatorial choices throughout the project, including the corpus the system was fed (texts like The Electric Kool-Aid Acid Test, Hell’s Angels, and, of course, On the Road), the surveillance camera mounted on the Cadillac that fed the computer images, and the route taken. Goodwin, who is billed as the book’s “writer of writer,” leans into the questions of authorship that this process raised, asking: is the car the writer? The road? The AI? Himself? “That uncertainty [of the manuscript’s author] may speak more to the anthropocentric nature of our language than the question of authorship itself,” he writes.
AI reconfigures how we consider the role and responsibilities of the author or artist. D. Fox Harrell and Jichen Zhu, prominent researchers of AI and digital narrative identity, wrote in 2012 that the discursive aspect of AI (such as ascribing intentionality through words like “knows,” “resists,” “frustration,” and “personality”) is often neglected but just as pertinent as the technical underpinnings. “As part of a feedback loop, users’ collective experiences with intentional systems will shape our society’s dominant view of intentionality and intelligence, which in turn may be incorporated by AI researchers into their evolving formal definition of the key intentional terms.”
That is, interactions with and discussions about machine intelligence shape our views of human thought and action and, circularly, humanity’s own changing ideologies around intelligence again shape AI; what it means to think and act is up for debate. More recently, Elvia Wilk, writing in The Atlantic on Allado-McDowell’s work, asks, “Why do we obsessively measure AI’s ability to write like a person? Might it be nonhuman and creative?” What, she wonders, could we learn about our own consciousness if we were to answer this second question with maybe, or even yes?
This past year, Allado-McDowell released Pharmako-AI (2020), billed as “the first book to be written with emergent AI.” Divided into 17 chapters on themes such as AI ethics, ayahuasca rituals, cyberpunk, and climate change, it is perhaps one of the most coherent literary prose experiments completed with machine learning, written with OpenAI’s large language model GPT-3. Though the human inputs and GPT-3 outputs are distinguished by typeface, the reading experience slips into a linguistic uncanny valley: the certainty with which GPT-3 writes, and the way its prose is at once convincingly “human” yet just off, unsettles assumptions around language, literature, and thought, an unsettling furthered by the continuity of the “I” between Allado-McDowell and GPT-3.
Like many current language systems, GPT-3 also succumbs to a kind of circularity, inherent to the technology, that can read as diffuse or even trickster-ish. (Try writing a long sentence with your iPhone predictive text, for example, and you’ll likely fall into some of your own personalized word loops, like an algorithmic Tender Buttons.) “The most potent part of the experience was getting deep into the system and seeing how the world looked to it, and realizing that the ways that tool perceives will become enfolded into our thinking,” Allado-McDowell said.
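The word-loop behavior is easy to reproduce: a next-word model that always takes its single most probable continuation must eventually revisit a word, after which the sequence cycles forever. A toy bigram model makes the point (the corpus below is invented; phone keyboards use far richer models, but greedy selection loops the same way):

```python
# Minimal sketch of why greedy next-word prediction falls into loops.
from collections import Counter, defaultdict

corpus = "i think i know i think i feel i know i think".split()

# Count bigrams: next-word frequencies for each word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def greedy_continue(word, length=8):
    """Always pick the single most likely next word; no randomness."""
    out = [word]
    for _ in range(length - 1):
        if word not in bigrams:
            break  # dead end: word never appeared mid-corpus
        word = bigrams[word].most_common(1)[0][0]  # deterministic choice
        out.append(word)
    return out

print(greedy_continue("i"))
```

Starting from “i,” the model shuttles between “i” and “think” indefinitely; sampling with some randomness (as production systems do) is what breaks such cycles.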
But as AI “thinking” reflects new capacities for human potential, it also reflects humanity’s limits; after all, machine learning is defined by the sources that train it. When Allado-McDowell points out the dearth of women and non-binary people mentioned both by themselves and by GPT-3, the machine responds with a poem that primarily refers to its “grandfather.” Allado-McDowell intervenes: “When I read this poem, I experience the absence of women and non-binary people.” “Why is it so hard to generate the names of women?” GPT asks, a few lines later.
Why indeed. Timnit Gebru, a prominent AI scientist and ethicist, was forced out of Google for a paper that criticized the company’s approach to large language models. She highlighted the ways these obscure systems could perpetuate racist and sexist biases, be environmentally harmful, and further homogenize language by privileging the text of those who already have the most power and access.
Countering this flattening or dominance, some artists and writers—such as Goodwin, Parrish, and Martine Syms—opt to train their own neural nets rather than use off-the-shelf tools. Other author-coders, like Li Zilles, set out with the explicit goal of showing how ready-made machines “think” about language. In Machine, Unlearning (2018), Zilles built a program that generated “litanies” of questions in an automatically learned space. Some are fairly coherent (“Will INFORMATION ever be horrible similar to how a fallacy can be horrible?”), some less so (“Are THOUGHT and sprawl both housing?”). What these phrases reveal is how the machine learning system organizes and relates concepts and words. By posing questions, Zilles forces us to ask our own—of how we organize meaning and use words, but also of these automated systems to which we entrust more and more information.
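Zilles’s actual program works over a space learned from real text; purely as a hypothetical sketch of the litany form, one can pair off words that sit close together in some vector space and slot each pair into a fixed question template. The coordinates, vocabulary, and template below are invented, not Zilles’s:

```python
# Hypothetical litany generator: find word pairs that are neighbors in
# a (here, faked) learned space and pose them in a question template.
import itertools
import math

SPACE = {
    "information": (0.2, 0.8),
    "fallacy":     (0.3, 0.7),
    "thought":     (0.9, 0.1),
    "sprawl":      (0.8, 0.2),
}

TEMPLATE = "Will {a} ever be {adj} similar to how a {b} can be {adj}?"

def litany(adj="horrible", threshold=0.3):
    """One question per pair of words closer than the threshold."""
    lines = []
    for a, b in itertools.combinations(SPACE, 2):
        if math.dist(SPACE[a], SPACE[b]) < threshold:
            lines.append(TEMPLATE.format(a=a.upper(), b=b, adj=adj))
    return lines

for line in litany():
    print(line)
```

The interest, as in Machine, Unlearning, lies less in any single question than in what the pairings expose about how the underlying space groups concepts.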
In her 2020 book Glitch Feminism, curator Legacy Russell argues that we might use digital life to glitch binaries and power structures, seeing the glitch not as an error but as a liberatory tool. She writes, “As we engage the digital it encourages us to challenge the world around us, and, through this constant redressing and challenging, change the world as we know it, prompting the creation of entirely new worlds together.”
One of the most effective and affecting examples of glitching language in recent computational literature is Lillian-Yvonne Bertram’s Travesty Generator (2019). Using a series of coded operations—all of which are illuminated in the endnotes—Bertram explodes repetition, deconstruction, probability, and algorithmization. Permutations of “can’t” shift into permutations of “can’t breathe”; recognizable descriptions of lynching victims from Emmett Till to Trayvon Martin accumulate, distort, and repeat. This systematization makes it not more abstract, but less, invoking language’s materiality in a way that explores the thrust of material violence.
Of the impetus for using code to write the book, Bertram explained over email, via a visual poem, “I wanted to use computation—the language and constraints and structure of code—to investigate this and only this: the experiences and historical significations of Black life.” If anti-Blackness is an “infinite algorithm,” it is one for which “a body count high enough to ‘break’ the code has yet to be reached.” As iterations-as-poems, indicated with the code notation n=1, n=2, and so on, become increasingly obscure to traditional interpretation despite their recognizable English sounds, words, and syntax, they glitch readers’ expectations and call attention to the systematic disavowal of Black life built into American culture. In other words, Bertram uses the mathematics of anti-Blackness against itself.
Though today’s AI and algorithmic tech present no shortage of threats or concerns—ranging from wooden prose to automated racism to excessive energy usage—they probably won’t automate away the novelist or the poet. At the same time, collaborations with AI could aid writers in throwing a wrench in dominant algorithms and dominant languages, expanding the domains of computation, consciousness, and literature alike. AI and algorithmic literature could reproduce digital technologies’ issues and inequities, or, as innovative writing has long done against prevailing or hegemonic linguistic codes, it might show us a way to challenge them.