Can the “Literary” Survive Technology?
Sven Birkerts on Our Changing Brains and What Comes Next
Speaking at the Lahore Literary Festival early in 2014, Pakistani novelist Mohsin Hamid, author of The Reluctant Fundamentalist and How to Get Filthy Rich in Rising Asia, made the observation that modern technology and social media were having an effect on the styles of young writers, at one point asserting: “The way they write today is very different from how people used to write ten or fifteen years ago.”
When I read this I did a double take. Not at Hamid’s basic point about change—I think we all understand that a major transformation is underway—but at his time frame. Ten or fifteen years. I reflexively think of that period as part of the cultural present, a version of “just yesterday,” right here in ready reach. But now I suspect that my thinking might be a holdover from an earlier way of measuring, that my impulse marks me out as old school right from the start. Those few years represent a newly significant increment on the timeline, and the fact that they do is one of the main background things we need to think about in considering literary publishing in the 21st century. Change itself is changing.
In truth, the whole of Hamid’s simple sentence is worth study, proposing as it does both the extraordinary rate of technological change and the idea that new media might be exerting very real pressure on how young people write—which is also to say, on what writing will look like in years to come. Certainly these technological changes are altering the platforms on which that work will be written, published, and read.
* * * *
Technological change, the rate of. Before we conjecture anything about this future, we need a way to conceive of the momentum that is affecting everything, most especially our interactions with others: our communications. A bit of historical context is useful and eye-opening.
Let’s agree that the time of a generation equals twenty-five years and do some quick computations. One accepted approximation holds that language developed some two hundred thousand years ago, while the harnessing of electricity happened in the middle of the eighteenth century, and the telephone was not invented until the latter part of the nineteenth century. This means that some 8,000 generations came and went in succession between the earliest uses of language and the first transmission of the spoken word through a wire.
Eight thousand generations. Take a minute to ponder the weight of that interval. Ponder, too, the incalculable impact exerted by those eons of repetition on the collective human substrate (I hesitate to say “collective unconscious”), if there can be said to be such a thing.
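A back-of-the-envelope check of that figure, granting the assumptions above (roughly 200,000 years of language, twenty-five years to a generation):

% Elapsed generations between the emergence of language and the telephone.
\[
\frac{200{,}000\ \text{years}}{25\ \text{years per generation}} = 8{,}000\ \text{generations}
\]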
It was only around 3000 BCE that the written sign was invented. With this we can start to picture a more specific succession, a history: from the marks made in clay, and then on papyrus and vellum, to scrolls, the codex, the early laboriously printed books, and then those that began to pour forth as the press was mechanized; on to the paperback proliferation, and to the variety of screen devices.
The one basic constant of our processing of signs has been that the eyes move left to right and back (for most scripts, anyway), but we ought to ask how meaningful or determining that constant really is. For it seems that, eye movements notwithstanding, a great deal—almost everything—has changed about reading. That is, about the mind that performs the deciphering action. And much of that change has been caused by the mind’s interaction with the various media.
We can narrow the context further in this matter of the rate of change. Nine words: “Mr. Watson, come here! I want to see you!” Alexander Graham Bell patented the telephone in 1876. In other words, it was only five generations ago that we first mastered the art of flinging our voices across space. But telephones themselves did not become common in American homes until the early 1950s. Two and a half generations ago.
Changes, improvements: rotary dials gave way to touch-tone buttons, long-distance calling—at first a costly and involved procedure—became increasingly common. Wireless mobile phones, first pioneered in the early 1970s, were not commonly adopted until the 1990s. One generation ago. And consider how basic those instruments were when compared with the most up-to-date smartphones. What has happened in this latest generational increment, just in the last few years, has been the incorporation of digital computing power—the microprocessor, itself an epoch-making invention—into an instrument whose marvels we have barely begun to assess. Suddenly, in a historical flash, we have developed an affordable, pocket-sized instrument that we can use to make calls, text, take and send pictures, record sound and video, watch movies, find directions, look up information on the Internet, locate our friends, engage in social networking, listen to customized music streams, and read whatever we want. . .
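The same rough arithmetic, counted back from 2014, the year of Hamid’s remarks, yields the generational counts just cited (the mid-century and mid-nineties dates are my approximations):

% Generations elapsed, at twenty-five years per generation.
\[
\frac{2014-1876}{25} \approx 5.5, \qquad \frac{2014-1952}{25} \approx 2.5, \qquad \frac{2014-1995}{25} \approx 0.8
\]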
* * * *
I was on the subway the other morning, on my way to the airport, immersed in doing the crossword in Boston’s little giveaway paper. At one point I looked up, and right then—before I could formulate any logical account or narrative—I got it. I realized that even in my persistent brooding about the onrush of all things digital, I had been naively conservative. I had pitched my thinking in terms of some pending “future.” What I saw in that unprepared moment was that it was already in place. Every person in that car was either staring at a phone or reading pad, or standing with a faraway look in her eyes and a wire in her ear. The transformation is no longer just pending. Any person who puts words to the page—or any surface—must reckon with this. I mean the technology itself, before even looking to see what the various devices are delivering to their various consumers.
So far I’ve been talking about smartphones and iPads and other devices and not about the transformations of books and reading. But the nature of those technologies and our acclimatization to them has everything to do with this discussion of the outlook for literary books and literary reading. For like it or not, it is in the field of these devices and adaptations that all reading increasingly happens.
We are shaped by our reading, of course, but our reading is in every way shaped by who we are, and this is, at least in part, the product of how we live. We inhabit a great tangle of causes and effects, all crosshatched with the feedback loops of influence. The point is, too, that we don’t yet know how these myriad causes and effects will transform books and reading. Certainly, though, one effect will be to disabuse us of the notion that the old model—to which some of us so fixedly cling—was a kind of given, rather than what now seems clearly to have been a remarkably long and stable phase.
We don’t know how things will change, only that they will. We can try to anticipate, of course, based on the trends and tendencies of the present. But we have no way of being sure that the vector of these tendencies will not start twisting this way and that at the least technological provocation. Nor do we know—how can we?—how to factor in what is happening to us in our own systems. Neurobiology: the field is inventing itself with great velocity. How are we to guess possible new variables in a world that has been bringing forth surprises with each rotation?
What has emerged and is becoming increasingly clear is that the brain exhibits extraordinary plasticity, that it does not take the time of a generation to effect substantial changes in our mental processing. “Neurons that fire together, wire together”—we sing it like a nursery rhyme, and will until our attention spans shrink to the point where we can just recollect the keywords: fire, wire. But the odds are that we won’t need to use our memories in that way. There will surely be digital prostheses that will make sure the words are there when we need them.
* * * *
Though his observation was general, I think this is what Hamid is referring to when he talks about change. He means not just the technology, but changes in the writers’ sensibilities because of their interactions with the technology; not just in their sensibilities, but—and maybe I’m adding this layer myself—in the whole conception and subject matter of literature. The literary.
* * * *
Already we know that people don’t read the way they used to. Writer Nicholas Carr, we may remember, made a big splash, first with a cover article in the Atlantic and then in his book The Shallows: What the Internet Is Doing to Our Brains. In the article, which had the catchy title “Is Google Making Us Stupid?,” he began on a note of personal worry. He had noticed in recent years an increasing difficulty in sitting down and reading a book in the old way. He found himself having a hard time focusing on any single text; he felt impatient, skittish, frustrated. He theorized that the difficulty might have to do with the fact that he spent the better part of every day working on a computer, doing all of the usual multitasking behaviors so familiar to all of us. Looking further, he began to test his hypothesis against the many studies coming out of the burgeoning field of neuroscience. What he found and documented in his book was striking corroboration of his intuitions. Our astonishingly “plastic” neural system adapts with great rapidity to its behavioral environment. Fire: wire. Which is to say: we are changing in tandem with the media that are bringing about the changes.
* * * *
One reads the tea leaves. Or the newspaper—a similar process. Two articles reflecting on certain contemporary trends will at least suggest the kinds of developments I’m talking about—developments that, singly, exert impact, but that may also affect the psyche in unforeseeable combinations and further underscore Hamid’s assertion about transformations of the practice of writing.
In a 2014 article in the Washington Post, “Books Are Losing the War for Our Attention,” writer Matt McFarland opens by saying: “Technology has reshaped everything from how we communicate to how we find a mate or a job. Yet the experience of reading books remains largely untransformed, and the popularity of books has suffered in the face of flashier media formats that are perfected for our busy world.”
McFarland follows with statistics on the declining number of book readers, and voices the common fear that though books can still bring us essential kinds of knowledge and understanding, they cannot do so if they sit unopened on shelves. He underscores his concern by quoting Russell Grandinetti, vice president of Amazon Kindle content: “Most people walk around with some kind of device or have access to some kind of device that allows them to choose how to use their time. . . In a world with that much choice, books need to continue to evolve to compete for someone’s time and interest.”
What to do? “One possibility,” writes the author, “would be to integrate more multimedia into e-books, which don’t offer the same dynamic story-telling as HTML and Web sites.” That has always seemed to me to be the obvious and inevitable next thing, a development from which going back will be very difficult, especially given what we are learning about our highly supple neural networks. The integration of videos, graphics, and sound—not to mention other kinds of interactive links—into standard text content would obviously affect the neural dynamics of the reading act in untold ways, most of them challenging further our already challenged faculty of attention. But McFarland does not linger here—though one easily could—to spell out a range of possible scenarios that might not finally feel all that futuristic. The real question, not addressed in the article, is the extent to which such amped-up e-presentations would undermine the ostensibly separate process of reading traditional—literary—books in the traditional way.
Considering possible industry responses to the loss of readers, McFarland goes on to discuss Spritz, a new Boston-based start-up devoted to speeding up reading through the rapid-fire flashing of words at what have been shown to be “optimal recognition points”—the gist being that the eye stays steady while the words flash past at strategically calculated rates. Research claims that comprehension is not “negatively affected at speeds up to 400 words per minute.” A novel like The Catcher in the Rye, he writes, “could be read in three hours and four minutes.” The contested word here, obviously, is “read,” for the impact on literary reading would certainly be considerable. Salinger’s colloquially based Catcher in the Rye is one thing, and a story by, say, Lydia Davis is quite another. And even if the work in question is not forbidding in its structure or syntactical expression, we should remember that all literary writing asks us to savor the nuances of diction.
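McFarland’s three-hour figure is easy to reconstruct. Taking the commonly cited length of The Catcher in the Rye, roughly 73,400 words (my approximation, not a figure from the article), at Spritz’s claimed 400 words per minute:

% Reading time at 400 words per minute for a ~73,400-word novel.
\[
\frac{73{,}400\ \text{words}}{400\ \text{words per minute}} \approx 184\ \text{minutes} \approx 3\ \text{hours},\ 4\ \text{minutes}
\]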
Just a few days after McFarland’s piece appeared, David Streitfeld published an article in the business section of the New York Times on the new growth industry of fiction serialized for readers who read on their phones. Wattpad is a new storytelling app that seems to be enjoying huge success. Streitfeld writes: “Wattpad is a leader in this new storytelling environment, with more than 2 million writers producing 100,000 pieces of material a day for 20 million readers on an intricate international social network.” The material he documents is entertainment, mainly, goods served up for quick consumption that have little to do with the literary, but. . .
Though I am not necessarily thinking in terms of any direct influences on the future of literary reading and writing, these kinds of developments cannot but have an effect, insofar as they condition the reflexes and expectations of readers and the thinking of the strategists at publishing houses, etc. On this front, Wattpad represents several things, and they can be itemized. One is the further imposition of assembly-line rhythms and dynamics on at least one sector of the book culture. A second is the intensification of the new ethos of ephemerality, bringing the former conception of book reading—wherein a book was a means of preservation as well as dissemination of text—up against the assumption of steady evanescence that is a feature of most Internet writing.
A third—and possibly most nervous-making—element of Wattpad is its demolition of the hitherto fairly sacrosanct division between author and reader, a division that is in at least one way analogous to the therapist/client divide. A key feature of the latter—and, to a degree, the former, too—is that the separation enables, or encourages, a certain active projection on the part of client and reader, one that vitally informs both the therapy and the act of reading. But the author/reader division of labor appears to be under threat as well, at least in this one sphere.
Writing about a woman named Anna Todd, who uses Wattpad to post episodes of a saga called “After,” Streitfeld notes how after posting Chapter 278, “the first comment appeared 13 seconds after the chapter was uploaded.” And: “By the next day there were 10,000 comments: always brief, overwhelmingly positive, sometimes coherent.”
We need to consider the eventual likelihood of the merging of realms—and how the culture of social media is already pressing up against at least one fringe of the reading world. How long, and on what grounds, can literary publishing keep itself apart from changes moving through adjacent parts of its shrinking domain? Or maybe the real question is: what can we expect from literary publishing as market pressures and changed reading habits bring their inevitable transformations? Will the author be reduced to being a privileged component in what will start to resemble a communal narrative-making enterprise? And, given the basic level of discourse of screen media and the movement toward easy access and disposability, what will be the fate of those undertakings that present more complex material or venture artistically ambitious approaches?
There are so many imponderables. We can’t be sure about the platforms, the staying power of the printed book, or the interests and aptitudes of the generations of future consumers of the word. We also don’t know what those who persist in writing will be writing about, or what forms their expressions might take. Fragmented weaves, interactive hypertexts, narratives that make digital process itself the core of their presentation. . . will writers continue to treat character in familiar ways, or will the saturations of connectivity spawn new modes? The old constants of time and space—how will they signify in a virtual screen culture? What now-unimaginable hybridizations will flourish? What linguistic variations? How will the canonic works of the past be reprocessed? I won’t make any guesses here.
* * * *
But I do believe that insofar as there are truths and expressions that cannot be compromised, and that at least some people deem essential (in the spirit of William Carlos Williams on poetry: “Men die miserably every day for lack of what is found there”), the literary/artistic will survive in some interesting form(s). And if it were to die out, I’m confident that certain independent spirits would invent it again, and when they did it would feel like a revolution was at hand. At the same time, I don’t see the literary as we have known it prevailing or even flourishing. With luck, it will survive for some time yet at the present scale, which is, in terms of societal influence and prestige, already much diminished from former times. But we should keep in mind that those were times when the seemingly sedate verbal art was not yet beset on every side by the seductions of easily accessible entertainment. In the future, literature will likely not command enough marketplace attention to make it commercially viable at any corporate level, but might rather become (and this is not a bad thing) an artisanal product that functions either as a vital inner resource or else as a status marker for its reduced population of consumers. What we might think of as old-school “serious” literature may come to function as a kind of code among initiates. At that point charges of elitism will not have to be defended against—they will have been fully earned.