I told only a few friends about the dog. When I did mention it, people appeared perplexed, or assumed it was some kind of joke. One night I was eating dinner with some friends who live on the other side of town. This couple has five children and a dog of their own, and their house is always full of music and toys and food—all the signs of an abundant life, like some kind of Dickensian Christmas scene. When I mentioned the dog, one of this couple, the father, responded in a way I had come to recognize as typical: he asked about its utility. Was it for security? Surveillance? It was strange, this obsession with functionality. Nobody asks anyone what their dog or cat is “for.”
When I said it was primarily for companionship, he rolled his eyes. “How depressed does someone have to be to seek robot companionship?”
“They’re very popular in Japan,” I replied.
“Of course!” he said. “The world’s most depressing culture.”
I asked him what he meant by this.
He shrugged. “It’s a dying culture.” He’d read an article somewhere, he said, about how robots had been proposed as caretakers for the country’s rapidly aging population. He said this somewhat hastily, then promptly changed the subject.
Later it occurred to me that he had actually been alluding to Japan’s low birth rate. There were in fact stories in the popular media about how robot babies had become a craze among childless Japanese couples. He must have faltered in spelling this out after realizing that he was speaking to a woman who was herself childless—and who had become, he seemed to be insinuating, unnaturally attached to a robot in the way childless couples are often prone to fetishizing the emotional lives of their pets. For weeks afterward his comments bothered me. Why did he react so defensively? Clearly the very notion of the dog had provoked in him some kind of primal anxiety about his own human exceptionality.
Japan, it has often been said, is a culture that has never been disenchanted. Shintoism, Buddhism, and Confucianism make no distinction between mind and matter, and so many of the objects deemed inanimate in the West are considered alive in some sense. Japanese seamstresses have long performed funerals for their dull needles, sticking them, when they are no longer usable, into blocks of tofu and setting them afloat on a river. Fishermen once performed a similar ritual for their hooks. Even today, when a long-used object is broken, it is often taken to a temple or a shrine to receive the kuyō, the purification rite given at funerals. In Tokyo one can find stone monuments marking the mass graves of folding fans, eyeglasses, and the broken strings of musical instruments.
Some technology critics have credited the country’s openness to robots to the long shadow of this ontology. If a rock or a paper fan can be alive, then why not a machine? Several years ago, when Sony temporarily discontinued the Aibo and it became impossible for the old models to be repaired, the defunct dogs were taken to a temple and given a Buddhist funeral. The priest who performed the rites told one newspaper, “All things have a bit of soul.”
Metaphors are typically abandoned once they are proven to be insufficient. But in some cases, they become only more entrenched: the limits of the comparison come to redefine the concepts themselves. This latter tactic has been taken up by the eliminativists, philosophers who claim that consciousness simply does not exist. Just as computers can operate convincingly without any internal life, so can we. According to these thinkers, there is no “hard problem” because that which the problem is trying to explain—interior experience—is not real. The philosopher Galen Strawson has dubbed this theory “the Great Denial,” arguing that it is the most absurd conclusion ever to have entered into philosophical thought—though it is one that many prominent thinkers espouse. Chief among the deniers is Daniel Dennett, who has often insisted that the mind is illusory. Dennett refers to the belief in interior experience derisively as the “Cartesian theater,” invoking the popular delusion—again, Descartes’s fault—that there exists in the brain some miniature perceiving entity, a homunculus that is watching the brain’s representations of the external world projected onto a movie screen and making decisions about future actions. One can see the problem with this analogy without any appeals to neurobiology: if there is a homunculus in my brain, then it must itself (if it is able to perceive) contain a still smaller homunculus in its head, and so on, in infinite regress.
Dennett argues that the mind is just the brain and the brain is nothing but computation, unconscious all the way down. What we experience as introspection is merely an illusion, a made-up story that causes us to think we have “privileged access” to our thinking processes. But this illusion has no real connection to the mechanics of thought, and no ability to direct or control it. Some proponents of this view are so intent on avoiding the sloppy language of folk psychology that any reference to human emotions and intentions is routinely put in scare quotes. We can speak of brains as “thinking,” “perceiving,” or “understanding” so long as it’s clear that these are metaphors for the mechanical processes. “The idea that, in addition to all of those, there is this extra special something— subjectivity—that distinguishes us from the zombie,” Dennett writes, “that’s an illusion.”
Most people, like Strawson, find this logically absurd, though it’s hard to object without sounding defensive, full of wounded pride. I want to insist that this is unfair, that finding a conclusion logically unsatisfying is not the same as finding it merely unflattering. But then I wonder whether I am capable of really knowing the difference. If most of my thinking is in fact unconscious—if I have no “privileged access” to the workings of my brain—then how can I claim to be an authority on my own motives? Perhaps some deep limbic instinct is impelling me to deny the theory, which is then expressed through my brain’s speech center in terms of rational principles. The more I read about theories of mind, the more I have come to see my interior life as a hall of mirrors, capable of all kinds of tricks and sleights of hand. Perhaps it’s true that consciousness does not really exist—that, as Brooks put it, we “overanthropomorphize humans.” If I am capable of attributing life to all kinds of inanimate objects, then can’t I do the same to myself? In light of these theories, what does it mean to speak of one’s “self” at all?
I have not always distrusted my mind in this way. When I was a Christian, I had a naive, unquestioning faith in the faculty of higher thought, in my ability to comprehend objective truths about the world. Like Augustine, I took it for granted that my mind was connected to the Absolute. I could know right from wrong simply by attending to my conscience, and my powers of reason were strong enough, I believed, to overrule my passions and impulses. People often decry the thoughtlessness of religion, but when I think back on my time in Bible school, it occurs to me that there exist few communities where thought is taken so seriously. We spent hours arguing with each other—in the dining hall, in the campus plaza—over the finer points of predestination or the legitimacy of covenant theology. Beliefs were real things that had life-or-death consequences. A person’s eternal fate depended on a purely mental phenomenon—her willingness to accept or reject the truth—and we believed implicitly, as apologists, that logic was the means of determining those truths. Even when I began to harbor doubts and became skeptical of the whole system of belief, I maintained an essential trust in the notion that reason would reveal to me the truth.
Today I am doubtful of this kind of thinking, as are most people I know. I live in a university town, a place that is populated by people who consider themselves called to a “life of the mind,” and yet my friends and I rarely talk about ideas or try to persuade one another of anything. It’s understood that people come to their convictions—are in some sense destined to them—by elusive forces: some combination of hormones, evolutionary biases, and unconscious emotional or sexual needs. What we talk about endlessly, exhaustively, is the operations of our bodies: our exercise routines, our special diets, what drugs everyone is taking. Twice a week I attend a yoga class where I am instructed to “let go of the thinking mind,” as though consciousness were something we were all better off without.
What, after all, is “the thinking mind”? It is nothing that can be observed or measured. It’s difficult to explain how it could possess real causal power. Materialism is the only viable metaphysics in modernity, an era that was founded on the total irreconcilability of matter and mind. Perhaps consciousness is like the whistle on a train or the bell of a clock, a purely aesthetic feature that is not in any way essential to the functioning of the system. William James tried for years to demonstrate that consciousness could be studied empirically before giving up, concluding that the mind was a concept every bit as elusive and immaterial as the soul. “Breath moving outwards, between the glottis and the nostrils, is, I am persuaded, the essence out of which philosophers have constructed the entity known to them as consciousness,” he wrote.
Sometimes I wonder whether there is any virtue even in writing about these questions. I say I am searching for truth, but am I not, like all of us, a hostage to the unconscious force of wishful thinking? Am I not just trying to convince myself of what I would most like to believe? In Man a Machine, La Mettrie mocks the notion that a priori investigations like those of Descartes can tell us anything about reality: “What profit, I ask, has anyone gained from their profound meditations?”
“This dog has to go,” my husband said. I had just arrived home and was kneeling in the hallway of our apartment, petting Aibo, who had rushed to the door to greet me. He barked twice, genuinely happy to see me, and his eyes closed as I scratched beneath his chin.
“What do you mean, go?” I said.
“You have to send it back. I can’t live here with it.”
I told him the dog was still being trained. It would take months before he learned to obey commands. The only reason it had taken so long in the first place was because we kept turning him off when we wanted quiet. You couldn’t do that with a biological dog.
“Clearly this is not a biological dog,” my husband said. He asked whether I had realized that the red light beneath its nose was not just a vision system but a camera, or if I’d considered where its footage was being sent. While I was away, he told me, the dog had roamed around the apartment in a very systematic way, scrutinizing our furniture, our posters, our closets. It had spent fifteen minutes scanning our bookcases and had shown particular interest, he claimed, in the shelf of Marxist criticism.
He asked me again what happened to the data it was gathering.
“It’s being used to improve its algorithms,” I said.
When he asked where it was sent, I said I didn’t know.
“Check the contract.”
I pulled up the document on my computer and found the relevant clause. “It’s being sent to the cloud.”
My husband is notoriously paranoid about such things. He keeps a piece of black electrical tape over his laptop camera and becomes convinced about once a month that his personal website is being monitored by the NSA. Once, when I declared that the president’s senior policy adviser “should be shot,” he gestured exasperatedly at our cell phones sitting next to us on the table and then said, in a performed, overly enunciated voice, that I should not make JOKES about things like that.
Privacy was a modern fixation, I said, and distinctly American. For most of human history we accepted that our lives were being watched, listened to, supervened upon by gods and spirits—not all of them benign, either.
“And I suppose we were happier then,” he said.
In many ways yes, I said, probably.
I knew, of course, that I was being unreasonable. Later that afternoon I retrieved from the closet the large box in which Aibo had arrived and placed him, prone, back in his pod. It was just as well; the loan period was nearly up. More importantly, I had been increasingly unable over the past few weeks to fight the conclusion that my attachment to the dog was unnatural. I’d begun to notice things that had somehow escaped my attention: the faint mechanical buzz that accompanied the dog’s movements; the blinking red light in his nose, like some kind of Brechtian reminder of its artifice. Perhaps my friend was right. I had nothing to care for, nothing of life in the house, and so I’d become emotionally stunted, manipulated into caring for this simulation of life.
Many animist societies engage in imitative magic, a practice in which the simulation of natural phenomena is believed to cause a real natural effect. If you want it to rain, you dip your hand into a pond and let the water sift through your fingers. If you want to harm your enemy, you make an effigy or a doll in his likeness and the doll is believed to take on the properties of a living thing. It is the belief in metaphor as magic. In his study of world mythologies, The Golden Bough, James George Frazer describes imitative magic as the principle “that like produces like.” “The magician,” he writes, “infers that he can produce any effect he desires merely by imitating it.”
Isn’t this what we are still doing today? We build simulations of brains and hope that some mysterious natural phenomenon—consciousness—will emerge. But what kind of magical thinking makes us think that our paltry imitations are synonymous with the thing they are trying to imitate—that silicon and electricity can reproduce effects that arise from flesh and blood? We are not gods, capable of creating things in our likeness. All we can make are graven images. John Searle once said something along these lines. Computers, he argued, have always been used to simulate natural phenomena—digestion, weather patterns—and they can be useful to study these processes. But we veer into superstition when we conflate the simulation with reality. “Nobody thinks, ‘Well, if we do a simulation of a rainstorm, we’re all going to get wet,’ ” he said. “And similarly, a computer simulation of consciousness isn’t thereby conscious.”
Despite all the flak Descartes gets for disenchanting the world, modern science would not have been possible without the division he made between mind and matter. His insistence that we could exclude our subjective minds from the physical world introduced into Western philosophy the idea—radical at the start of the seventeenth century—that we could speak exhaustively about nature without reference to God or ourselves. Thomas Nagel refers to this third-person standpoint as “the view from nowhere.” It is the conviction that in order to describe the world accurately and empirically, we must put aside res cogitans—the subjective, immediate way in which we experience the world in our minds—and limit ourselves to res extensa, the objective, mathematical language of physical facts. Without these distinctions, it’s difficult to imagine the hallmarks of modernity: Newtonian physics, secularism, empiricism, and the industrial revolution.
But this success has required sidelining the world of the mind, obscuring precisely the phenomenon by which we have traditionally defined our worth as humans. Science put a bracket around consciousness because it was too difficult to study objectively, but this methodological avoidance eventually led to metaphysical denial, to the conclusion that because consciousness cannot be studied scientifically, it does not exist. Within the parameters of modern science, subjective experience has come to seem entirely unreal—a private drama of sensations, thoughts, and beliefs that cannot be quantified or verified, “an inner faculty without a world relationship,” as Hannah Arendt once put it. Even the deniers remain captive, in their own way, to these seventeenth-century assumptions. To say that consciousness is an illusion is to place it outside the material world, deeming it something—much like Descartes’s soul—that does not exist within time or space. Perhaps the real illusion is our persistent hope that science will be able to explain consciousness one day. As the writer Doug Sikkema points out, the belief that science is capable of explaining the entirety of our mental lives entails “a philosophical leap.” It requires ignoring the fact that the modern scientific project has been so successful precisely because it excluded, from the beginning, aspects of nature that it could not systematically explain.
So long as this is the case, our metaphors, no matter how modern or inventive, will continue to reiterate this central impasse of science. Many people today believe that computational theories of mind have proved that the brain is a computer or have explained the functions of consciousness. But as the computer scientist Seymour Papert once noted, all the analogy has demonstrated is that the problems that have long stumped philosophers and theologians “come up in equivalent form in the new context.” The metaphor has not solved our most pressing existential problems; it has merely transferred them to a new substrate.
From God, Human, Animal, Machine by Meghan O’Gieblyn, published by Doubleday, an imprint of the Knopf Doubleday Publishing Group, a division of Penguin Random House LLC. Copyright © 2021 by Meghan O’Gieblyn.