The Octopus: An Alien Among Us

Michael S. A. Graziano on the Evolution of Animal Consciousness

Self-replicating, bacterial life first appeared on Earth about 4 billion years ago. For most of Earth’s history, life remained at the single-celled level, and nothing like a nervous system existed until around 600 or 700 million years ago (MYA). In the attention schema theory, consciousness depends on the nervous system processing information in a specific way. The key to the theory, and I suspect the key to any advanced intelligence, is attention—the ability of the brain to focus its limited resources on a restricted piece of the world at any one time in order to process it in greater depth.

I will begin the story with sea sponges, because they help to bracket the evolution of the nervous system. They are the most primitive of all multicellular animals, with no overall body plan, no limbs, no muscles, and no need for nerves. They sit at the bottom of the ocean, filtering nutrients like a sieve. And yet sponges do share some genes with us, including at least 25 that, in people, help structure the nervous system. In sponges, the same genes may be involved in simpler aspects of how cells communicate with each other. Sponges seem to be poised right at the evolutionary threshold of the nervous system. They are thought to have shared a last common ancestor with us between about 700 and 600 MYA.

In contrast, another ancient type of animal, the sea jelly, does have a nervous system. Sea jellies don’t fossilize very well, but by analyzing their genetic relationship to other animals, biologists estimate that they may have split from the rest of the animal kingdom as early as 650 MYA. These numbers may change with new data, but as a plausible, rough estimate, it seems that neurons, the basic cellular components of a nervous system, first appeared in the animal kingdom somewhere between sponges and sea jellies, a little more than half a billion years ago.

A neuron is, in essence, a cell that transmits a signal. A wave of electrochemical energy sweeps across the membrane of the cell from one end to the other, at about 200 feet per second, and influences another neuron, a muscle, or a gland. The earliest nervous systems may have been simple nets of neurons laced throughout the body, interconnecting the muscles. Hydras work on this nerve-net principle. They are tiny water creatures—transparent, flowerlike animals with sacs for bodies attached to many arms—and belong to the same ancient category as sea jellies. If you touch a hydra in one place, the nerve net spreads the signals indiscriminately, and the hydra twitches as a whole.

A nerve net doesn’t process information—not in any meaningful sense. It merely transmits signals around the body. It connects the sensory stimulus (a poke on the hydra) to a muscle output (a twitch). After the emergence of the nerve net, however, nervous systems rapidly evolved a second level of complexity: the ability to enhance some signals over others. This simple but powerful trick of signal boosting is one of the basic ways that neurons manipulate information. It is a building block of almost all computations that we know about in the brain.

The eye of the crab is one of the best-studied examples. The crab has a compound eye with an array of detectors, each with a neuron inside it. If light falls on one detector, it activates the neuron inside. So far so good. But in an added pinch of complexity, each neuron is connected to its nearest neighbors, and because of those connections, the neurons compete with each other. When a neuron in one detector becomes active, it tends to suppress the activity of the neurons in the neighboring detectors, like a person in a crowd who is trying to shout the loudest while shushing the people nearest to him.

The result is that if a blurry spot of light shines on the crab’s eye, with the brightest part of the spot hitting one detector, the neuron in that detector becomes highly active, wins the competition, and shuts down its neighbors. The pattern of activity across the set of detectors in the eye not only signals a bright spot, but also signals a ring of darkness around it. The signal is, in this way, enhanced. The crab eye takes a fuzzy, gray-scale reality and sharpens it into a high-contrast image with exaggerated, brighter peaks and darker shadows. This signal enhancement is a direct consequence of neurons inhibiting their neighbors, a process called lateral inhibition.
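The lateral inhibition described here can be sketched as a toy model in a few lines of code. Everything below is my own illustration, not from the text: the function name, the inhibition weight, and the input values are arbitrary assumptions, and real neurons are far more elaborate.

```python
def lateral_inhibition(raw, weight=0.4):
    """Each detector is suppressed by `weight` times each neighbor's raw activity."""
    sharpened = []
    for i, x in enumerate(raw):
        left = raw[i - 1] if i > 0 else 0.0
        right = raw[i + 1] if i < len(raw) - 1 else 0.0
        # Subtract the neighbors' contribution; firing rates can't go negative.
        sharpened.append(max(0.0, x - weight * (left + right)))
    return sharpened

# A blurry spot of light falling on a row of detectors, brightest at the center.
blurry = [0.1, 0.3, 0.6, 1.0, 0.6, 0.3, 0.1]
sharp = lateral_inhibition(blurry)
print([round(v, 2) for v in sharp])  # → [0.0, 0.02, 0.08, 0.52, 0.08, 0.02, 0.0]
```

The peak survives while its surround is pushed toward zero, which is exactly the "bright spot with a ring of darkness around it" that the crab's eye computes.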

The mechanism in the eye of a crab is arguably the simplest and most fundamental example—the model A case—of attention. Signals compete with each other, the winning signals are boosted at the expense of the losing signals, and those winning signals can then go on to influence the animal’s movements. That is the computational essence of attention. Our human attention is merely an elaborated version of it, made of the same building blocks. You can find the crab-eye method of lateral inhibition at every stage of processing in the human nervous system, from the eye to the highest levels of thought in the cerebral cortex. The origin of attention lies deep in evolutionary time, more than half a billion years ago, with a surprisingly simple innovation.

Crabs belong to an extensive group of animals, the arthropods, which includes spiders and insects and other creatures with hard, jointed exoskeletons and which branched off from other animals about 600 MYA. The most famous extinct arthropod, the one with the biggest fan club today, is the trilobite—a leggy, jointed creature almost like a miniature horseshoe crab, which crawled about the bottom of Cambrian seas as early as 540 MYA. When trilobites died and sank into very fine silt on the ocean floor, their faceted eyes were sometimes fossilized in amazing detail. If you look at a trilobite fossil and examine its bulging eyes through a magnifying glass, you can often still see the orderly mosaic of individual detectors. Judging from these fossilized details, the trilobite’s eye must have closely resembled a modern crab’s eye in its organization and is likely to have used the same trick of competition between neighboring detectors to sharpen its view of the ancient seabed.

Imagine an animal built piecemeal with “local” attention. In that animal, each part of the body would function like a separate device, filtering its own information and picking out the most salient signals. One of the eyes might say, “This particular spot is especially bright. Never mind the other spots.” Meanwhile, independently, one of the legs says, “I’ve just been poked hard right here. Ignore the lighter touches nearby!” An animal with only this capability would act like a collection of separate agents that happen to be physically glued together, each agent shouting out its own signals, triggering its own actions. The animal’s behavior would be, at best, chaotic.

For a coherent response to its environment, the animal needs a more centralized attention. Can many separate sources of input—the eyes, the body, the legs, the ears, the chemical sensors—pool their information together in one place for a global sorting and a competition among signals? That convergence would allow the animal to select the most vivid object in its environment, the one that seems most important at the moment, and then generate a single, meaningful response.
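That pooling-and-competition scheme can be sketched in code. This is a hypothetical toy, with names and salience values of my own invention: each sense reports its single most salient local signal, and a central stage picks the overall winner, which alone drives the response.

```python
def central_attention(local_reports):
    """local_reports: dict mapping sense -> (salience, description).
    Returns the winning sense and its signal after a global competition."""
    winner = max(local_reports, key=lambda sense: local_reports[sense][0])
    return winner, local_reports[winner][1]

# Each local filter has already picked its own most salient signal.
reports = {
    "eye":   (0.9, "bright spot to the left"),
    "leg":   (0.4, "light touch on leg 3"),
    "smell": (0.2, "faint trace of food"),
}
sense, event = central_attention(reports)
print(f"attend to {sense}: {event}")  # the eye wins the global competition
```

The design point is simply that one competition replaces many: instead of each body part triggering its own action, a single winning signal shapes the whole animal's next move.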

Nobody knows when that type of centralized attention first appeared, partly because nobody is certain which animals have it and which ones don’t. Vertebrates have a central attention processor. But the mechanisms of attention have not been as thoroughly studied in invertebrates. Many types of animals, such as segmented worms and slugs, do not have a central brain. Instead they have clusters of neurons, or ganglia, scattered throughout their bodies to perform local computations. They probably don’t have centralized attention.

Arthropods, such as crabs, insects, and spiders, are better candidates for centralized attention. They have a central brain, or at least an aggregate of neurons in the head that is larger than any of the others in their bodies. That large ganglion may have evolved partly because of the requirements of vision. The eyes being in the head, and vision being the most complicated and information-intensive sense, the head gets the largest share of neurons. Some aspects of smell, taste, hearing, and touch also converge on that central ganglion.

Insects are brainier than people think. When you swat at a fly and it manages to escape—as it almost always does—it isn’t just darting away on a simple reflex. It probably has something that we can call central attention, or the ability to rapidly focus its processing resources on whatever part of its world is most important at the moment, in order to generate a coordinated response.

Octopuses are the superstars of the invertebrates because of their astonishing intelligence. They’re considered mollusks, like clams or snails. Mollusks probably first appeared about 550 MYA and remained relatively simple, at least in the organization of their nervous systems, for hundreds of millions of years. One branch, the cephalopods, eventually evolved a complex brain and sophisticated behavior and may have reached something close to the modern form of an octopus around 300 MYA.

Octopuses, squid, and cuttlefish are true aliens with respect to us. No other intelligent animal is as far from us on the tree of life. They show us that big-brained smartness is not a one-off event, because it evolved independently at least twice—first among the vertebrates and then again among the invertebrates.

Octopuses are excellent visual predators. A good predator must be smarter and better coordinated than its prey, and using vision to locate and recognize prey is especially computationally intensive. No other sensory system has such a fire hose of varied information pouring in and such a need for an intelligent way to focus on useful subsets of that information. Attention, therefore, is the name of the game for a visual predator. Maybe that lifestyle has something to do with the expansion of octopus intelligence.

Whatever the reason, the octopus evolved an extraordinary nervous system. It can use tools, solve problems, and show unexpected creativity. In a now-classic demonstration, octopuses can learn to open a glass jar by unscrewing the top in order to get to a tasty morsel within. The octopus has a central brain and also an independent, smaller processor in each arm, giving it a unique mixture of centralized and distributed command.

The octopus also probably has self models—rich, constantly updated bundles of information to monitor its body and behavior. From an engineering perspective, it would need self models to function effectively. For example, it might have some form of a body schema that keeps track of the shape and structure of its body in order to coordinate movement. (Perhaps each arm has its own arm schema.) In that sense, you could say that an octopus knows about itself. It possesses information about itself and about the outside world, and that information results in complex behavior.

But all of these truly wonderful traits do not mean that an octopus is conscious.

Consciousness researchers sometimes use the term objective awareness to mean that the information has gotten in and is being processed in a manner that affects behavioral choice. In that rather low-bar definition, one could say that a microwave is aware of the time setting and a self-driving car is aware of the looming obstacle. Yes, an octopus is objectively aware of itself and of the objects around it. It contains the information.

But is it subjectively aware? If it could talk, would it claim to have a subjective, conscious experience the same way that you or I do?

Let’s ask the octopus. Imagine a somewhat improbable thought experiment. Suppose we’ve gotten hold of a crazy science fiction device—let’s call it the Speechinator 5000—that serves as an information-to-speech translator. It has a port that can be plugged into the octopus’s head, and it verbalizes the information found in the brain.

It might say things like “There is a fish” if the octopus’s visual system contains information about a nearby fish. The device might say, “I am an entity with a bunch of limbs that move in this and that way.” It might say, “Getting a fish out of a jar requires turning that circular part.” It would say many things, reflective of the information that we know is contained inside the octopus’s nervous system. But we don’t know if it would say, “I have a subjective, private experience—a consciousness—of that fish. I don’t just process it. I experience it. Seeing a fish feels like something.” We don’t know if its brain contains that type of information because we don’t know what the octopus’s self models tell it. It may lack the machinery to model what consciousness is or to attribute that property to itself. Consciousness could be irrelevant to the animal.

The octopus conundrum is an instructive example of how an animal can be complex and intelligent, and yet we are, so far, unable to answer the question of its subjective experience or even whether the question has any meaning for that creature.

Maybe one source of confusion here is the automatic and powerful human urge to attribute consciousness to the objects around us. We are prone to see consciousness in puppets and other, even less likely objects. People sometimes believe that their houseplants are conscious. An octopus, with its richly complex behavior and its large eyes filled with focused attention, is a far more compelling inkblot test, so to speak, triggering a strong social perception in us. Not only do we know, intellectually, that it gathers objective information about its world, but we can’t help feeling that it must have a subjective awareness as well, emanating from those soulful eyes.

But the truth is, we don’t know, and the sense we get of its conscious mind says more about us than about the octopus. The experts who study octopuses risk becoming the least reliable observers on this point, because they are the ones most likely to be entranced by these wonderful creatures.

Just to be clear, I’m not saying that octopuses are not conscious. But the octopus nervous system is still so incompletely understood that we can’t yet compare its brain organization with ours and guess how similar it might be in its algorithms and self models. To make those types of comparisons, we will need to examine animals in our own lineage, the vertebrates.

__________________________________

Rethinking Consciousness by Michael S. A. Graziano

Excerpted from Rethinking Consciousness by Michael S. A. Graziano. Copyright © Michael S. A. Graziano 2019. Reprinted with permission from Norton.

Michael S. A. Graziano
Michael S. A. Graziano is professor of psychology and neuroscience at Princeton University, where he teaches and heads a lab. He is the author of Rethinking Consciousness, as well as four other books on neuroscience. Graziano has written for The Atlantic, The New York Times, The Huffington Post, and Aeon, and lives in Princeton, New Jersey. His hobbies include writing fiction, composing music, and ventriloquism.
