On the Behavioral Economy of the Book World
Robert Frank Shines a Light on the Winner-Take-All Approach
An economist’s attempt to explain behavior in publishing or any other domain typically begins with the cost-benefit principle: an action should be taken if and only if the benefits of taking it exceed the corresponding costs. The principle sounds transparently simple, but because costs and benefits are often difficult even to conceptualize, much less measure with any precision, it can be challenging to apply in practice. One thing the principle makes clear, however, is that if monetary rewards were all that mattered to potential authors, the list of people for whom writing a non-fiction book might make sense would be vanishingly small—mainly a handful of celebrities and a few others with inside knowledge of particular topics of keen public interest. Works of fiction are even less likely to succeed financially.
The market for books has become an extreme example of what Philip Cook and I called a winner-take-all market. In our 1995 book, The Winner-Take-All Society, we likened such markets to tournaments in which only a small number of contestants reap any significant financial rewards. Not all financially successful books are good (have you read any of Ann Coulter’s?), but most of them are very good indeed. Even so, the overwhelming majority of good books never generate significant royalties for their authors.
Winner-take-all markets have become more common in part because technology has so greatly reduced transportation and communication costs. Like their modern counterparts, for example, early piano buyers wanted to buy from the best manufacturers, but because the cost of transporting pianos was so high, almost all buyers lived within a relatively short distance of wherever their pianos were built. But with transport costs having fallen so dramatically over the intervening centuries, most pianos today are bought from producers in only a handful of locations worldwide.
As with pianos, so with stories. People naturally want to listen to the best storytellers on offer. If you were the best storyteller in a small village, you were once assured of a rapt audience for your services. But then came the printing press, and later the internet. Now, unless you are among just a handful of the world’s best storytellers in a given category, few listeners will find time for you.
Some of the forces that have concentrated rewards in publishing have also exerted countervailing effects. As Chris Anderson explained in his 2006 book, The Long Tail, digital technology in general and social media in particular have been making music, books, movies, and many other goods economically viable on a much smaller scale than ever before. Consumers have diverse tastes, and modern search algorithms enable them to find what they’re looking for with astonishing ease. So if there’s something out there with the potential to appeal to even a very narrow niche audience, it is more likely than ever that someone will produce it and that others will call it to our attention. Anderson’s account thus helps us understand why a small proportion of backlist titles have experienced more durable sales than in the past.
Yet as Anita Elberse recounts in her 2013 book, Blockbusters, the share of total book sales accounted for by bestsellers has continued to grow steadily over time. What economists call the network effect helps explain why. The term refers to the force that causes a thing’s value to increase with the number of others who also have it. A word-processing program, for example, becomes more valuable as its adoption becomes more widespread, because that makes it easier to share files with collaborators. Similarly, belonging to a social network becomes more valuable with growth in its total membership, because that makes it more likely that people you care about will also be members.
Because we are social creatures, an important motive for reading a book is to enjoy the experience of discussing it with others. Opportunities for such exchanges are of course more numerous when you read bestselling titles. That’s perhaps the most important network effect responsible for more highly skewed book sales. The Facebook exchanges cited by Chris Anderson to explain the growing success of backlist titles are far more likely to be stimulated by posts on bestselling titles. And of the many thousands of titles released in any given year, only a handful find their way onto the most widely circulated bestseller lists.
*
Whether a book becomes a bestseller depends on many factors, perhaps the most important of which is whether it’s any good. But as millions of authors are painfully aware, many good books never achieve bestseller status. By far the strongest predictor of whether a book of given quality will become a bestseller is whether it was written by an author of earlier bestsellers.
The idea that success breeds success was dubbed the Matthew Effect by the late sociologist Robert Merton, after the verse in the Gospel of Matthew that reads, “For unto everyone that hath shall be given, and he shall have abundance; but from him that hath not shall be taken away even that which he hath.” If an author’s book succeeds, she becomes a more attractive client for a high-profile literary agent. That means her next cash advance will exceed her previous one by a larger margin than it otherwise would have, which in turn puts additional pressure on her publisher to publicize her new title more aggressively. And so on.
That luck plays a dramatic role in this process was vividly illustrated by an experiment carried out by the sociologist Duncan Watts and collaborators. On a website they called MusicLab, they posted the names of 48 obscure indie bands and a link to one song from each. MusicLab visitors could download any of the linked songs on the condition that they provide a rating. The researchers averaged the resulting ratings, which turned out to be highly variable. A handful of songs got high marks from most listeners, and a few got low marks from most. But for the substantial majority, there was no consistent response.
With those “objective” ratings in hand, the researchers then created eight independent websites that contained the same 48 songs as before. But each of these new sites also displayed some additional information: visitors could now see how many times each song had been downloaded and the average quality rating it had received thus far. One of the songs in the experiment, “Lockdown,” had landed roughly in the middle of the “objective” scores, ranking 26th out of 48. Its subsequent fate varied dramatically across the eight websites that incorporated social feedback. On one it ranked #1, but it was only #40 on another. The song’s fate, it turned out, depended largely on how the first people to download it happened to react to it. If they liked it a lot, that created a halo effect that made others more likely to download it and respond favorably. But if early downloaders happened not to like it, things went downhill.
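The dynamic at work can be reproduced in a toy simulation. The sketch below, in Python, is not the researchers’ actual design; it simply assumes, for illustration, that every song has identical underlying quality and that each new listener is more likely to choose a song the more downloads it already has. Run with different random seeds standing in for the eight websites, the same middling song can finish near the top of one world and near the bottom of another.

```python
import random

def simulate_world(n_songs=48, n_listeners=5000, seed=None):
    """One hypothetical 'world': each listener picks a song with probability
    proportional to (current downloads + 1), so early luck compounds.
    Every song is assumed to have the same underlying quality."""
    rng = random.Random(seed)
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        weights = [d + 1 for d in downloads]   # popularity feedback
        choice = rng.choices(range(n_songs), weights=weights, k=1)[0]
        downloads[choice] += 1
    return downloads

# Follow one focal song (index 25, the "26th" of 48) across eight
# independent worlds, echoing the study's eight social-feedback sites.
for world in range(1, 9):
    downloads = simulate_world(seed=world)
    rank = sorted(downloads, reverse=True).index(downloads[25]) + 1
    print(f"world {world}: focal song finished ranked #{rank} of 48")
```

Because the songs are identical by construction, any spread in final rankings across these simulated worlds comes entirely from the feedback loop, which is the cumulative-advantage story the MusicLab results illustrate.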
The same factors that shape which bands succeed are also in play in the book business. Books that go on to become hits owe much of their success to the fact that the first people to review them just happened to like them. Works of unambiguously high quality are of course more likely to earn positive early reviews, but even the best works typically elicit a broad range of subjective evaluations. Some go on to succeed simply because the first people to express their opinions about them publicly just happened to come from the right tail of the opinion distribution. Which is to say, many authors owe their commercial success, at least in part, to pure dumb luck.
*
Given the skewness of the reward structure in publishing, the hope of earning a significant sum of money is almost certainly a poorly considered reason for writing a book. Yet the desire to be heard runs deep, and there seems little risk that authors will stop submitting manuscripts to publishers. What’s more, an unintended side-effect of the uncertainty inherent in winner-take-all markets also ensures that profit-seeking firms will continue to publish those manuscripts. From the firm’s perspective, publishing new works is like buying cheap lottery tickets. It is next to impossible to predict whether a specific title will succeed. But the odds are that at least some titles will, and the gains from those are typically sufficient to carry the rest of the list.
Are there persuasive reasons to believe that market forces guide publishing and reading decisions optimally? Since the time of Ronald Reagan and Margaret Thatcher, public policy choices have been heavily shaped by the claim that private markets harness unfettered self-interest to produce the greatest good for all. This view is often erroneously attributed to the 18th-century Scottish philosopher Adam Smith, whose Wealth of Nations is widely viewed as the treatise that launched economics as a discipline. According to Smith’s celebrated invisible hand theory, markets often promote broad social ends. But as he also knew well, individual interests and collective interests do not always coincide. In his earlier Theory of Moral Sentiments, for example, he explained why market exchange was possible only in the presence of a comprehensive set of moral norms and legal regulations.
Skepticism is well advised, then, about claims that unbridled market forces can be counted upon to safeguard important cultural and social values. At every turn, after all, we’re confronted with vivid examples in which individual and collective interests clearly diverge. When all stand to see better at a crowded event, for example, no one sees any better than if all had remained comfortably seated. Yet no one regrets standing, since the alternative is not to see at all.
As the economics of publishing have changed in the digital age, the tension between individual and collective interest, long familiar in publishing, has become ever more severe. In the markets described in economics textbooks, producers expand output until the additional cost of the last unit produced is equal to what the last buyer is willing to pay for it. Stopping short of that level would leave cash on the table, since an additional unit could be sold at a price greater than its marginal cost. Exceeding that level would also be wasteful, since the last buyer would then value her purchase at less than its marginal cost. If we have reasonably complete information and robust competition, then, economics textbooks say that market incentives will lead to socially optimal levels of both price and quantity.
*
That description doesn’t apply at all to publishing in the digital age. Once a text has been created, the marginal cost of distributing it to an additional reader is essentially zero. To allocate it efficiently, its price should therefore also be zero. But although the marginal cost of distributing existing text is zero, there are likely to have been substantial fixed costs of producing that text in the first place. And since a commercial publisher’s first goal is to remain solvent, books and other texts cannot be given away. The fact remains, however, that if closely competing texts sell at positive prices when their marginal costs are zero, their publishers face strong incentives to cut prices. And this, in a nutshell, is the publisher’s dilemma in the digital age.
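A stylized numeric example, with entirely invented figures, makes the bind concrete. In the sketch below, pricing at marginal cost (zero) serves the most readers, which is what efficiency demands, yet it leaves the fixed cost of creating the text uncovered, while any price high enough to cover that fixed cost turns away readers who could have been served at essentially no cost.

```python
# A stylized illustration of the publisher's dilemma; all numbers are invented.
FIXED_COST = 50_000     # hypothetical cost of writing, editing, and design
MARGINAL_COST = 0       # cost of delivering one more digital copy

def readers_at(price):
    """Toy demand curve: lower prices attract more readers."""
    return int(100_000 / (1 + price))

for price in [10, 5, 2, 1, 0]:
    q = readers_at(price)
    profit = (price - MARGINAL_COST) * q - FIXED_COST
    print(f"price ${price:>2}: {q:>7,} readers, profit ${profit:>10,}")

# Efficiency calls for price = marginal cost = 0, which serves the most
# readers, but at that price the fixed cost is never recovered, so a
# commercial publisher cannot simply give the text away.
```

With these invented numbers, the price a solvent publisher would charge sits well above marginal cost, and every reader priced out at that level represents the efficiency loss described above.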
This dilemma helps explain why a growing share of published content has been migrating to digital aggregators like Facebook. These firms make money not by charging members for viewing content, but by showing them content in combination with finely targeted ads based on the content they’ve chosen to view. The more we learn about this business model, the more reason we have for viewing it as a grave threat to the market for ideas. The algorithm that chooses the content directed to member newsfeeds is purpose-built to maximize engagement, ignoring any other measure of value.
A promising alternative is the subscription model, whereby members gain access to a publisher’s inventory in return for a modest recurring fee. For those willing to pay the fee, this model satisfies the economist’s efficiency criterion, since they enjoy unlimited content access at a zero marginal charge. Major newspapers like the Times in the UK and the Washington Post and New York Times in the US have done well under this model, and there are hints that it is also poised to make inroads into book publishing. For example, both Audible, the leading audiobook seller, and Kindle, the leading ebook seller, offer subscription options.
Society’s interest in having an informed citizenry has been a longstanding rationale for government support of public schooling. The same logic would appear to justify similar attempts to support better outcomes in the market for published content. Public money already supports a substantial volume of academic scholarship and publication. And in the United States, the National Endowment for the Arts offers an informative example of public support for other forms of artistic endeavor.
Regulation is another possible remedy for imperfections in the market for information. The network effect that cements Facebook’s dominant position in the social media space makes it unlikely that competitive pressure will induce the company to abandon its most troublesome current practices. This realization has led even some traditionally conservative business publications to advocate the regulation or even breakup of companies like Facebook.
Such steps, or any others that reduce the prevalence of ad-supported business models, would indirectly boost demand for subscription-supported content. But the subscription model itself isn’t perfect. Some worry, for example, that demand will gravitate toward the few big players able to offer the widest range of content, a concentration with worrisome monopolistic tendencies. The subscription model would also exclude at least some readers who would value access to a publisher’s content, and those excluded would come disproportionately from the lower reaches of the income distribution. Exclusive reliance on the subscription model would inevitably elicit objections from proponents of the ad model, which enables low-income people to access content free of charge. Such objections might be addressed specifically, perhaps by giving each household a modest tax credit that could be used to offset subscription fees, or more generally, by making the social safety net more generous.
The digital revolution has been particularly devastating to local newspapers. When news was delivered primarily on the printed page, most of these organizations were essentially local monopolists. The heavy capital investments required to produce, print, and distribute content in physical form meant that few cities could support more than a single local paper. As the only purveyors of local print advertising, their revenues were typically more than sufficient to support in-depth news and investigative reporting. But the ad revenue that supported these activities has fallen sharply under competitive pressure from more effective targeted digital advertising. Many newspapers have gone out of business, and most of the survivors are struggling.
But while traditional sources of financial support for local news coverage and investigative journalism have been disappearing, the public value of these activities has remained strong. To the extent that the targeted ad model has spawned false beliefs and polarization that make corruption more widespread, the payoff to additional investigative journalism may well be higher than in the recent past.
Non-profit independent news organizations, such as the American investigative journalism newsroom ProPublica, have been attempting to bridge the funding gap. Since ProPublica began in 2008, it has received many awards, including six Pulitzer Prizes, but it is small (just over 100 journalists) and financed largely by private donors. There are many more issues worthy of investigation than its staff can attend to. As one possible response to the continuing decline of investigative journalism, then, governments might consider the establishment of a public endowment to which groups like ProPublica could apply for additional funding. Similar support might be extended to encourage other forms of expression.
That private markets for information are riddled with serious imperfections is no longer in doubt. But government interventions are also imperfect, and few of us want bureaucrats deciding which novels get published and which films get made. The more interesting question going forward will be not whether intervention is warranted, but rather what forms it might take and how extensive it ought to be.
__________________________________
This article was first published in The Author, the journal of the Society of Authors.