James Bridle Considers the Possibilities of Ecological AI
This Week from the Emergence Magazine Podcast
Emergence Magazine is an online publication with an annual print edition exploring the threads connecting ecology, culture, and spirituality. As we experience the desecration of our lands and waters, the extinguishing of species, and a loss of sacred connection to the Earth, we look to emerging stories. Our podcast features exclusive interviews, narrated essays, stories, and more.
In this expansive interview, writer, artist, and technologist James Bridle seeks to widen our thinking beyond human-centric ways of knowing. In questioning our fundamental assumptions about intelligence, they explore how radical technological models can decentralize power and become portals into deeper relationship with the living world.
From the conversation:
Emmanuel Vaughan-Lee: In your latest book, Ways of Being, you explore the many types of intelligences that exist in the more-than-human worlds—intelligences that we need to learn from and integrate into our consciousness and technologies if we are to learn to live in balance with the living world. And you write that for far too long, at least in our dominant Western society, we’ve had a very limited definition and understanding of intelligence that you describe in the book as “what humans do,” and that this definition has played a profound role in shaping technology and how we use it, from computers to, most recently, artificial intelligence. Can you talk about this human-centered definition of intelligence and its impact on technology and AI?
James Bridle: I come to this sphere, to this area, to this thinking, from a background in technology. That’s mostly what I’ve worked on for the last decade or more, and part of that focus throughout has been on artificial intelligence. In the last few years, I’ve tried to consciously reframe my practice around more ecological interests while seeing what I could bring from what I already know. And so the cultural dominance of AI seemed like a really interesting thing to think through, particularly as in my own life I was starting to broaden my interests and pay more attention to the things around me. And intelligence was an interesting place to start.
I knew, setting out to do this, that at some point, as a writer about intelligence, I would have to define what I meant by intelligence. But I was very frustrated by the lack of what seemed to me to be clear, good definitions of what it is we’re all talking about. You can find all these lists of what people mean when they talk about intelligence, and it’s a kind of grab bag of different qualities that changes all the time: things like planning, counterfactual imagining or coming up with scenarios, theories of mind, tool use, all these different qualities.
People pick from them according to whatever their particular field is, but they all come from a human perspective. That seemed to me to be what actually united almost all our common discussions about intelligence: that it was just whatever humans did. And so all our discussions about other potential forms of intelligence, other intelligences that we encountered in the world, or intelligences that we imagined, were all framed in terms of how we understood ourselves and our own thinking.
It really struck me that this became an incredibly limiting factor in how we were thinking about intelligence more broadly—and not just intelligence, really, but all relationships we have in the world that are so often mediated by our own intelligence. On the one hand this has restricted our ability to recognize the intelligences of other beings—and I think we’ll probably come to that—but it’s also deeply shaped our history of technology, and particularly AI.
What I find fascinating about AI is its cultural weight, the fact that we just seem to be so endlessly fascinated with it. And this goes all the way back to long before the development of modern computers, but really takes off with the development of what we now call computers in the 1940s and 1950s. It goes right back to Alan Turing and the definition of the early computer, when he’s already talking about how intelligent computers might be.
And then it extends all the way through the last sixty, seventy years of research, when there’s always this tendency to take whatever the current form of computation is and extrapolate it into what it might be if it was intelligent. And so we’re always trying to build these intelligences, but what we think intelligence is really shapes that. All the different ways we’ve tried to build AI over the years have always been shaped by that definition of human intelligence. And increasingly that’s looked damaging and dangerous, for all the ways that I explore in the book.
James Bridle is a writer and an artist. Their writing on art, politics, culture, and technology has appeared in magazines and newspapers including The Guardian, The Observer, Wired, The Atlantic, the New Statesman, frieze, Domus, and ICON. New Dark Age, their book about technology, knowledge, and the end of the future, was published in 2018 and has been translated into more than a dozen languages. In 2019, they wrote and presented New Ways of Seeing, a four-part series for BBC Radio 4. Their artworks have been commissioned by galleries and institutions including the V&A, Whitechapel Gallery, the Barbican, Hayward Gallery, and the Serpentine and have been exhibited worldwide and on the internet.