• Can Fiction Introduce Empathy Into AI? Do We Want It To?

    Flynn Coleman on the Potential for Ethical Technology

    Storytelling gives us meaning and common values, and it’s how we understand ourselves and the world around us. We can feel it in our bones when we have been told a great story. It also cultivates empathy. A data point representing one million people doesn’t penetrate our minds or hearts, and humans remember little from this type of information. But the story of one person—a face, a visual, and a narrative that stands for millions, the humanity in the data—is how we understand and feel compelled to engage in the world around us.

    One idea that researchers are exploring is teaching our intelligent machines about empathy by having them read our great works of literature. Immersing ourselves in fiction allows us to enter and better understand the worlds of others. Maryanne Wolf, in her book Reader, Come Home, makes the connection: “Reading at the deepest levels may provide one part of the antidote to the noted trend away from empathy. But make no mistake: empathy is not solely about being compassionate toward others; its importance goes further. For it is also about a more in-depth understanding of the Other, an essential skill in a world of increasing connectedness among divergent cultures.”

    Reading literary fiction has been shown in multiple studies to increase empathy. For instance, a 2014 study of Americans showed that reading a story about a Muslim woman decreased their expressions of racism. But storytelling’s large-scale benefits to our ethical selves can only be realized when we are exposed to a diverse array of stories. As author Chimamanda Ngozi Adichie reminds us, we are all prone to ignorance and worse if we only learn a “single story.”

    It isn’t possible to become more empathetic just by scanning tweets and texts or reading short-form writing alone; we have to immerse ourselves in longer narrative forms and character arcs to grow in our capacity for empathy and improve our theory of mind. It takes time to meander through the recesses of human experience, from fragility to intrepidity and everything in between. When we read literary fiction and engage ourselves in these works of art, we not only enhance our ability to understand others, to see that others have their own different experiences and ideas, but we also mature in our understanding of ourselves. Psychologists have found that in the art of storytelling, it’s the characters’ beliefs, intentions, and acts that affect us most. Other research has shown that those who read a Chekhov story were inspired to consider themselves and their personalities in a new way. Fairy tales and fables evolve alongside us, emboldening our imaginations to take flight. Steven Pinker has called fiction “empathy technology.”

    Many feel that the wonderful animals depicted in literature and other storytelling media promote learning in children and foster empathy. Author Kate Bernheimer says, “the kind of sensitivity to beauty and terror that permeates fairy tales—the very stories that often introduce writers to the reading sublime—should motivate us to read ethically, to be inclusive, and kind.” Others have strongly argued that reading literary fiction improves theory of mind—that essential ability our intelligent machines would need to achieve to be aware of others’ feelings, intentions, and thoughts.

    Trying on another’s consciousness lets us enter our “moral laboratory.”

    Science fiction is a didactic genre for imagining future scenarios and dissecting our relationships with technology, helping us to think about what could be, stimulating us to wrestle with and reconcile our past, present, and future. Margaret Atwood’s The Handmaid’s Tale, a dystopian novel (and now TV show) focusing on women in a near-future, religious totalitarian society in the northeastern United States, endures because it tells a haunting tale of the future while also feeling eerily familiar. In describing how she wrote the book, Atwood said that she included only events that had really happened—all the laws, atrocities, technologies, and historical precedents were real. She says that the book is not a prediction but rather “an antiprediction: If this future can be described in detail, maybe it won’t happen. But such wishful thinking cannot be depended on either.”

    Similarly, Philip K. Dick’s novels, such as Do Androids Dream of Electric Sheep?, have helped us rethink what it means to be human, questioning our concepts of empathy, intelligence, and the nature of being alive. 1984 feels like a future universe, as well as a past and present one, as it was born from George Orwell’s reality at the time of its publication in 1949. Silicon Valley and advertising firms are recognizing the advantages of this kind of insight and now hire science fiction writers to help them imagine the future. The US military has worked with writers to help envision worst-case scenarios. As well as playing out dystopian possibilities, science fiction can help us better visualize a future shared with intelligent machines.

    For writer Neil Gaiman, “fiction is a lie that tells us true things, over and over.” When we engross ourselves in the full narrative arc of a good story, our brains synthesize oxytocin, which leads us to be more generous, empathetic, compassionate, and attuned to social cues. Trying on another’s consciousness lets us enter what social scientist and author Jèmeljan Hakemulder calls our “moral laboratory.” Humans can explore other worlds and the inner lives of others; we can learn to connect, to empathize, to care. Maybe our machines can, too?

    Joseph Campbell sees human interconnectedness and empathy at the foundation of all human myths and stories: “You and the Other are one.” And scholars such as the former Roman Catholic nun Karen Armstrong believe that compassion is a core value at the center of all religions. Writer and Tibetan Buddhist teacher Matthieu Ricard believes that “interdependency is at the root of altruism and compassion.” Even if we may not agree on a full list of definitive, universal values, we can see that empathy and compassion are cornerstones of all major religions. Isn’t this what we ideally want our technological creations to have when faced with moral and ethical choices?

    As we continue to offload tasks, memories, and responsibilities to our smart devices, we may have to rely on them more and more to remind us of who we are. Just as an AI can digest the thousands of medical journals a single doctor could never get to in a lifetime in order to help her diagnose and treat patients, AI can assist us in learning to be more compassionate and connected, synthesizing and imparting the courage of Arundhati Roy, the thoughtfulness of Virginia Woolf, the bravery of Anne Frank, or the wisdom of Toni Morrison. And whereas humans have only a limited capacity to digest great literature, our intelligent machines do not. AI could become a digital library for the world’s knowledge to date—and we will all have library cards.

    Can AI versed in our quintessential human stories help us find a way back to one another?

    Georgia Tech researchers Mark Riedl and Brent Harrison have been experimenting with a system they call Quixote to teach “value alignment” to robots through stories. They based this concept on the idea that humans learn about social responsibility and culturally sensitive behavior through reading stories. By “reverse engineering” vital lessons about morality from stories, the system helps an AI align its goals more closely with human values and rewards it for compatible actions.

    This kind of innovative scientific investigation is more urgent than ever. Many of us almost always have a phone with us these days, and an estimated 5.5 billion smart mobile devices will be in use by 2022. While the inquiry is ongoing, and mobile technology is indispensable for those with less access to resources, a 2011 meta-analysis of studies of American college students found that they were cultivating and expressing less empathy, due in large part to their digital lifestyles. The more we interact through screens, it seems, the fewer opportunities we have to practice relating to our own emotions and those of others.

    Narcissism is on the rise in the digital age. Other research affirms that we favor and are primed to mimic what we are repeatedly connected and exposed to. This may explain why Silicon Valley executives often restrict their kids’ use of the very platforms they are making billions of dollars developing. If digital addiction is eroding our children’s emotional intelligence, how can we cultivate the kindness, compassion, social connections, fairness, and global citizenship we will need for our future? Can AI versed in our quintessential human stories help us find a way back to one another?

    Immersing our intelligent technology in literature obviously isn’t the whole answer to establishing ethical AI. From a machine’s perspective, there is nothing mathematically self-evident to indicate that all life is important—yet. Creating compassionate artificial intelligence is a much greater challenge than will be solved by just enabling AI to perceive sadness on a human face or in human speech. We all have our own backgrounds, patterns, stories, strokes of fortune or misfortune, and histories. We see the world through our own filters, blurring the perspectives of others. While we strive to empathize and find affinity with others, it’s also valuable to remember that we can never fully inhabit the experiences of others; that the universe doesn’t revolve around and exist for our points of view.

    But if, alongside our intelligent technological partners, we humans can better embrace the valor and ideals of Yuri Kochiyama, Jean Valjean, Albus Dumbledore, Maya Angelou, or Martin Luther King Jr., this will be an important step toward developing AI that will have a better and more nuanced way of discerning the human condition. Could AI help tutor and spiritually embolden us, find the best in each of us? Could it show us the courage of the hobbits in The Lord of the Rings, the joys and sorrows in the poetry of Pablo Neruda, Langston Hughes, Rumi, or Mary Oliver, the power in “Letter from Birmingham Jail,” or the wisdom in the Vedas?

    It matters what matters we use to think other matters with; it matters what stories we tell to tell other stories with; it matters what knots knot knots, what thoughts think thoughts, what descriptions describe descriptions, what ties tie ties. It matters what stories make worlds, what worlds make stories.  –Donna Haraway

    The answer to whether it’s possible to code something akin to empathy into our intelligent machines may not be too far into the future. What was once seen as a formidable barrier to advancing AI capability, its tendency toward “catastrophic forgetting,” may soon be resolved. Computer scientists at Google DeepMind are building AI that can remember. And if AI can retain its aggregate memories, as humans have evolved to do, but with far more precision and accuracy, we should be able to build on that library of knowledge together, as allies.

    Storytelling may be our most essential human technology.

    Google DeepMind cofounder, AI expert, neuroscientist, and game designer Demis Hassabis believes that the key to AI lies in connecting the fields of AI and neuroscience. Understanding the brain will help unlock AI and help it acquire attributes like intuition. In turn, building AI will help us understand more about who we are. Cognitive scientist Gary Marcus has expressed a similar idea, suggesting that studying children’s cognitive development is key to advancing machine learning. Though it will be difficult to achieve, such a goal also underlines the need to communicate and translate ideas across fields and disciplines.

    As for the power of the human intellect, our extraordinary mental gift of cognitive time travel, back and forth into our past and into the future, is core to the civilizations we inhabit. Scientists have found that cerebrally journeying through time may be linked to humans’ development of language as well as to comprehending others—our theory of mind. Cognitive neuroscience has also shown that mind-wandering is critical to our ability to navigate our past or imagine the future. Some go so far as to say that cognitive time travel, and our capability to conceive of the future, is the distinguishing component of human intelligence. Others argue that animals also have the capacity for mental travel, albeit on a less sophisticated scale. Either way, storytelling may be our most essential human technology.

    In our minds, we can build worlds and imagine universes, traveling back to ancient history and fast-forwarding to the end of time. We remember past hurts and we dream of triumph in the future. With this ability, we can envision future outcomes based on past patterns, just as the many great writers of time-travel literature have imagined technological possibilities that later came true.

    AI, with its unprecedented capacity to consume and sift through data, can already record and instantly retrieve information, document history, and predict consequences far beyond anything heretofore envisaged. As we encode more creative features into AI, it is inevitable it will also compose new narratives in ways we cannot yet foresee. It is already making art, composing music, and writing books. Authors are experimenting with software that finishes typing their sentences, with far more sophisticated tools on the horizon.

    Lest we think that creative storytelling is a uniquely human attribute that a machine could never acquire, other characteristics we have long thought of as unique, divinely endowed human traits may actually have been learned over time. For example, in The Enigma of Reason, the authors suggest that human reason itself is an evolved trait, an “adaptation to hyper social niches humans have evolved for themselves.” And “subject-centered” reasoning is a mechanism we developed, incrementally, to prevent us from being taken advantage of by the group—a relatively new theory challenging accounts of reason that date back to the 17th century.

    From classical philosophy to law to economics, the singular ability of the human brain to reason has historically been of paramount importance. Human reasoning has allowed us to dominate the earth thus far, and acknowledging it as something we learned over time should not devalue it; rather, it illustrates our capacity to grow. We know today that our brains are neuroplastic and that we can rewire ourselves and our thinking. Although we are not blank slates, we can change our path. We can acknowledge the rational and emotional parts of ourselves, face our darkness and our light. And we can train our intelligent machines to help get us there.

    We already partner with our smart machines, animals, and the environment. We have, however, historically regarded them as inferior. They are not. Hierarchizing intelligence allows us to assign lesser dignities to others. The way forward into our new age is to cease insisting on absolute supremacy in all things. Our survival does not depend on maintaining control, for we ultimately have very little of that anyway. It lies, instead, in acceptance, tolerance, and collaboration. History reminds us time and time again that first we fight against one another, for the throne, the power, the glory, the victory—yet to truly triumph, we need to band together.

    The AI wave is already beginning to crest. Predicting with certainty how it will take shape is not possible, nor should it be the goal. We are already hurtling through the artificially intelligent space not knowing where we might land, endeavoring to build upon our collective imagination. We could yet fail.

    In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.  –Charles Darwin

    With the introduction of new forms of intelligence that already exceed our own, consider the possibility that we are actually the robots, albeit carbon-based ones; not mechanical but both programmed to survive and destined to fail to live up to our own principles. To flourish, we cannot put technology ahead of humanity—or humans before the rest. The AI is us, made of our star-stuff. We and our intelligent new mechanical partners have already begun to merge our storylines. As much as we would like to have some certainty about what our future will look like alongside our brilliant new creations in what now still sounds like science fiction, we can only find our true north by following the set of instructions we hold inside ourselves. A human algorithm that reminds us to value all forms of intelligence and hold dear all living things.

     

    REFERENCES

    Jonathan Gottschall, The Storytelling Animal: How Stories Make Us Human (New York: Houghton Mifflin Harcourt, 2012)

    Paul J. Zak, “Why Your Brain Loves Good Storytelling,” Harvard Business Review, October 28, 2014, hbr.org/2014/10/why-your-brain-loves-good-storytelling

    Cody C. Delistraty, “The Psychological Comforts of Storytelling,” Atlantic, November 2, 2014, www.theatlantic.com/author/cody-c-delistraty

    Alison Flood, “Robots Could Learn Human Values by Reading Stories, Research Suggests,” Guardian, February 18, 2016, www.theguardian.com/books/2016/feb/18/robots-could-learn-human-values-by-reading-stories-research-suggests

    Liz Bury, “Reading Literary Fiction Improves Empathy, Study Finds,” Guardian, October 8, 2013, www.theguardian.com/books/booksblog/2013/oct/08/literary-fiction-improves-empathy-study

    Maryanne Wolf, Reader, Come Home (New York: Harper Collins, 2018).

    Keith Oatley, Such Stuff as Dreams: The Psychology of Fiction (New York: John Wiley & Sons, 2011)

    Tom Jacobs, “Reading Literary Fiction Can Make You Less Racist,” Pacific Standard, March 10, 2014, psmag.com/social-justice/reading-literary-fiction-can-make-less-racist-76155

    Chimamanda Ngozi Adichie, “The Danger of a Single Story,” July 2009, TEDGlobal, 18:43, www.ted.com/talks/chimamanda_adichie_the_danger_of_a_single_story

    Kidd and Castano’s research shows that reading popular works doesn’t have the same effect as long-form literary fiction. David Comer Kidd and Emanuele Castano, “Reading Literary Fiction Improves Theory of Mind,” Science, Vol. 342, Issue 6156 (October 5, 2013): 377–380, DOI: 10.1126/science.1239918

    Maja Djikic, Keith Oatley, Sara Zoeterman, and Jordan B. Peterson, “On Being Moved by Art: How Reading Fiction Transforms the Self,” Creativity Research Journal, Vol. 21, Issue 1 (2009): 24–29, doi.org/10.1080/10400410802633392

    McMaster University, “The Art of Storytelling: Researchers Explore Why We Relate to Characters,” ScienceDaily, September 13, 2018, www.sciencedaily.com/releases/2018/09/180913113822.htm

    Keith Oatley and Maja Djikic, “How Reading Transforms Us,” New York Times, December 19, 2014, www.nytimes.com/2014/12/21/opinion/sunday/how-writing-transforms-us.html

    Jon Henley, “Philip Pullman: ‘Loosening the Chains of the Imagination,’” Guardian, August 23, 2013, www.theguardian.com/lifeandstyle/2013/aug/23/philip-pullman-dark-materials-children

    Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (New York: Viking Books, 2011)

    “The Strange, Beautiful, Subterranean Power of Fairy Tales,” a forum moderated by Kate Bernheimer, Center for Fiction, Issue 3, accessed September 12, 2018, www.centerforfiction.org/why-fairy-tales-matter

    Kidd and Castano, “Reading Literary Fiction Improves Theory of Mind”

    Andrew Maynard, “Sci-fi Movies Are the Secret Weapon That Could Help Silicon Valley Grow Up,” The Conversation, November 15, 2018, theconversation.com/sci-fi-movies-are-the-secret-weapon-that-could-help-silicon-valley-grow-up-105714

    Margaret Atwood, “Margaret Atwood on What ‘The Handmaid’s Tale’ Means in the Age of Trump,” New York Times, March 20, 2017, www.nytimes.com/2017/03/10/books/review/margaret-atwood-handmaids-tale-age-of-trump.html

    Jill Galvan, “Entering the Posthuman Collective in Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’” Science Fiction Studies, Vol. 24, No. 3 (November 1997): 413–429, www.jstor.org/stable/4240644

    Eliot Peper, “Why Business Leaders Need to Read More Science Fiction,” Harvard Business Review, July 14, 2017, hbr.org/2017/07/why-business-leaders-need-to-read-more-science-fiction

    Brian Nichiporuk, “Alternative Futures and Army Force Planning: Implications for the Future Force Era,” RAND Corporation, 2005, www.rand.org/content/dam/rand/pubs/monographs/2005/RAND_MG219.pdf

    Emmanuel Tsekleves, “Science Fiction as Fact: How Desires Drive Discoveries,” Guardian, August 13, 2015, www.theguardian.com/media-network/2015/aug/13/science-fiction-reality-predicts-future-technology

    Neil Gaiman, The View from the Cheap Seats: Selected Nonfiction (New York: William Morrow, 2016)

    Zak, “Why Your Brain Loves Good Storytelling”

    “Oatley and his York University colleague Raymond Mar suggest that the process of taking on another’s consciousness in reading fiction and the nature of fiction’s content—where the great emotions and conflicts of life are regularly played out—not only contribute to our empathy, but represent what the social scientist Jèmeljan Hakemulder called our ‘moral laboratory.’” Wolf, Reader, Come Home; Jèmeljan Hakemulder, The Moral Laboratory: Experiments Examining the Effects of Reading (Amsterdam, Netherlands: John Benjamins, 2000)

    “Human Empathy & Interconnectedness,” Joseph Campbell, 1986, YouTube, accessed September 19, 2018, www.youtube.com/watch?v=_CGb-p_0gvY

    Karen Armstrong, “Do unto Others,” Guardian, November 14, 2008, www.theguardian.com/commentisfree/2008/nov/14/religion

    Krista Tippett, “The Happiest Man in the World,” On Being, November 12, 2009, onbeing.org/programs/matthieu-ricard-happiest-man-world

    Karen Armstrong and Archbishop Desmond Tutu, “Compassion Unites the World’s Faiths,” CNN, November 10, 2009, www.cnn.com/2009/OPINION/11/10/armstrong.tutu.charter.compassion/index.html

    Flood, “Robots Could Learn Human Values by Reading Stories, Research Suggests”

    Craig Wigginton, “Mobile Continues Its Global Reach into All Aspects of Consumers’ Lives,” Deloitte Global Mobile Consumer Trends Second Edition, 2017, www2.deloitte.com/global/en/pages/technology-media-and-telecommunications/articles/gx-global-mobile-consumer-trends.html

    “Mobile Biometric Market Forecast to Exceed $50.6 Billion in Annual Revenue in 2022 as Installed Base Grows to 5.5 Billion Biometric Smart Mobile Devices,” PR Newswire, September 14, 2017, www.prnewswire.com/news-releases/mobile-biometric-market-forecast-to-exceed-506-billion-in-annual-revenue-in-2022-as-installed-base-grows-to-55-billion-biometric-smart-mobile-devices-300519359.html

    Sara H. Konrath, Edward H. O’Brien, and Courtney Hsing, “Changes in Dispositional Empathy in American College Students over Time: A Meta-Analysis,” Personality and Social Psychology Review, Vol. 15, Issue 2 (2011): 180–198, DOI: 10.1177/1088868310377395

    Jean M. Twenge and W. Keith Campbell, The Narcissism Epidemic: Living in the Age of Entitlement (New York: Atria, 2010)

    Ap Dijksterhuis and John A. Bargh, “The Perception-Behavior Expressway: Automatic Effects of Social Perception on Social Behavior,” Baillement, accessed September 20, 2018, www.baillement.com/texte-perception-behavior.pdf

    Chris Weller, “Silicon Valley Parents Raising Their Kids Tech Free,” Business Insider, February 18, 2018, www.businessinsider.com/silicon-valley-parents-raising-their-kids-tech-free-red-flag-2018-2

    “Catastrophic forgetting occurs when a neural network loses the information learned in a previous task after training on subsequent tasks. This problem remains a hurdle for artificial intelligence systems with sequential learning capabilities.” Joan Serrà, Dídac Surís, Marius Miron, and Alexandros Karatzoglou, “Overcoming Catastrophic Forgetting with Hard Attention to the Task,” Cornell University Library, last revised May 29, 2018, arxiv.org/abs/1801.01423

    Dan Robitzski, “New Artificial Intelligence Does Something Extraordinary—It Remembers,” Futurism, August 31, 2018, futurism.com/artificial-intelligence-remember-agi

    Demis Hassabis, “The Mind in the Machine: Demis Hassabis on Artificial Intelligence,” Financial Times, April 21, 2017, www.ft.com/content/048f418c-2487-11e7-a34a-538b4cb30025

    Jamie Condliffe, “Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience,” MIT Technology Review, July 20, 2017, www.technologyreview.com/s/608317/googles-ai-guru-says-that-great-artificial-intelligence-must-build-on-neuroscience

    Dan Falk, In Search of Time: The History, Physics, and Philosophy of Time (New York: St. Martin’s Griffin, 2010).

    William A. Roberts, “Mental Time Travel: Animals Anticipate the Future,” Current Biology, Vol. 17, Issue 11 (June 5, 2007): R418–R420, doi.org/10.1016/j.cub.2007.04.010

    Kevin Charles Fleming, “Our Stories Bind Us,” Pacific Standard, January 26, 2018, psmag.com/news/our-stories-bind-us

    David Streitfeld, “Computer Stories: A.I. Is Beginning to Assist Novelists,” New York Times, October 18, 2018, www.nytimes.com/2018/10/18/technology/ai-is-beginning-to-assist-novelists.html

    Hugo Mercier and Dan Sperber, The Enigma of Reason (Cambridge, MA: Harvard University Press, 2017)

    ______________________________

    A Human Algorithm

    Excerpted from A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are by Flynn Coleman. Published with permission from Counterpoint Press. Copyright © 2019 by Flynn Coleman.

    Flynn Coleman
    Flynn Coleman is a writer, international human rights attorney, public speaker, professor, and social innovator. She has worked with the U.N., the U.S. federal government, international corporations, and human rights organizations, and has written about global citizenship, the future of work and purpose, political reconciliation, war crimes, genocide, human and civil rights, humanitarian issues, innovation and design for social impact, and improving access to justice and education. She lives in New York City. A Human Algorithm is her first book.




