On What Emotional Attachment to Robots Might Mean for the Future
Kate Darling Considers the As-Yet Untold Exploitation of Our Dependence on AI
Spike Jonze envisions a possible future in his 2013 film Her. The story follows Theodore Twombly, a middle-aged divorcé who falls in love with an AI voice assistant named Samantha. Samantha is specifically designed to adapt to the user and create a social and emotional bond. The movie is primarily an exploration of love and relationships. As such (and without this being too much of a spoiler), it barely touches on the corporation that peddles the AI assistant software. But I couldn’t watch it without imagining other story lines that delved into some thorny consumer protection issues.
For example, there’s a point in the plot of Her where Theodore becomes completely distraught when Samantha goes offline for a brief period with no warning. I imagined what would have happened if the company issued a mandatory software update, telling the users that the AI system was no longer compatible with their devices and that their choice was to either buy the $20,000 upgrade or quit the program. This would have put Theodore in a situation like Maddy’s: in his desperation, he wouldn’t have hesitated to empty his piggy bank, or life savings, to get Samantha back.
While not all companies would seize this as an opportunity, our emotional attachment to our companion robots does lend itself to certain exploitative business models. If there is consumer willingness to pay to keep the robots running, an economist might even argue it’s something companies should start charging for. Is this a natural consequence of people valuing the benefits of the robots and an effective use of the free market? Or will social robots become an unethical, exploitative capitalist technology?
Sony’s newest version of the AIBO robot dog doesn’t come cheap.
The current price for the mechanical pup with its cutting-edge design is $2,899.99, not including accessories. To outfit AIBO with artificial intelligence and adaptive social behavior, Sony outsources some of the dog’s computation and memory to its servers, aka the cloud. Robot dog owners are required to have a cloud service subscription; the first three years are free, and Sony has not yet announced what the subscription will cost once they are up.
This pricing model makes sense: it lets Sony scale according to demand. Charging a subscription fee is an efficient way to match potentially rising server costs and shift the price to users who have a lasting relationship with the robot and want to pay for continued service, rather than making it an up-front cost for everyone. But given owners’ emotional responses to the “deaths” of previous versions of AIBO, will the price Sony sets reflect the server costs, or will it reflect how much the average household is willing to pay to keep their robot companion alive?
In the Victorian era, kidnapping people’s dogs and holding them for ransom was a lucrative gig. But dog thieves aren’t the only ones who have capitalized on our emotional connections to animals. The first commercially prepared dog food was introduced in England around 1860. In 2019, Americans spent nearly $37 billion on pet food and treats. Today’s specialist pet doctors and high-tech veterinary procedures, such as $6,500 kidney transplants, didn’t exist until recently. Americans are spending $29 billion a year on veterinary care and services, an amount that is growing. Less than a century ago, it would have been laughable to spend this much money on the lives of our pets, let alone send them to doggie Club Med and buy them designer-label clothing.
The growing demand for pet products and services reflects rising income levels, but it also reflects the rise of an aggressive marketing industry that targets people’s emotions, and not just pet owners’. When my husband and I were planning our wedding, we discovered (like many before us) that service providers such as caterers and photographers charge much higher prices for weddings compared to other events. It’s possible that this price difference is because it requires more care (and involves more risk) to provide a service for someone’s “special day.” It also seems that people are willing to spend much higher amounts to ensure they have the wedding of their dreams. But who determines what those dreams look like?
The “wedding industrial complex” is worth many billions of dollars and comprises consulting companies, dressmakers, caterers, DJs, photographers, videographers, staging companies, furniture and limo rentals, venues, florists, jewelers, hair stylists, magazines, bridal fashion conglomerates, and more. According to Rebecca Mead, author of One Perfect Day: The Selling of the American Wedding, just a few generations ago, most weddings were nowhere close to this lavish. Many of today’s wedding conventions were actually invented by the wedding industry and marketed to people as “traditional,” from the diamond ring to the honeymoon.
As Steve Jobs famously said, “People don’t know what they want until you show it to them.” Even though the wedding industry provides a service that’s “wanted,” Mead demonstrates how it has created new desires through aggressive marketing that targets people’s emotions, over time shifting our wedding culture from intimate ritual to extravagant theatrical production. Frequently, young couples are pressured into expensive weddings that they can’t really afford. In their quest to increase their profits, some commercial entities will shamelessly prey on people’s insecurities, for example, by relentlessly pushing the idea that brides need to lose as much weight as possible before they say “I do.”
Marketing is a powerful force and not always in the public’s interest. Not only do industries influence culture and what people want in a never-ending race to maximize their monetary gains, they also exploit information asymmetry. People uneducated about the dangers of grain-free pet food or the questionable efficacy of dolphin therapy for their children are prime targets. The love they have for a pet or for their child can be taken advantage of for monetary gain. And we have not only honed the art of emotional manipulation in our soda ads; we have also started embedding it in our technology. Just as psychologists have helped design casinos and shopping malls that use architecture, colors, and smells to drive people’s behavior, we’ve created graphics, tracking mechanisms, and little red notifications to turn internet users into mineable online consumers.
What’s next? Human-computer interaction research has long suggested that we’re prone to being manipulated by social AI. Joseph Weizenbaum claimed that his 1960s psychotherapist chatbot ELIZA was meant as a parody. He said he originally built the program to “demonstrate that the communication between man and machine was superficial.” But to his surprise, it wound up demonstrating the opposite. So many people enjoyed chatting with ELIZA, even becoming emotionally attached to it, that Weizenbaum changed his mind about human-computer communication. He later wrote a book called Computer Power and Human Reason: From Judgment to Calculation, in which he warned that people were in danger of being influenced by, and taking on, computers’—and their programmers’—worldviews.
In 2003, legal scholar Ian Kerr foresaw artificial intelligence engaging in all manner of online persuasion, from contracting to advertising. His predictions about chatbots preying on people’s emotions for corporate benefit have since come true, for example, in the form of bots on dating apps that pretend to be people and rave about certain products or games in conversations with their “crushes.” According to legal scholar Woody Hartzog, we’re at the point where we need to discuss regulation.
The Campaign for a Commercial-Free Childhood is an organization that keeps tabs on marketing to children in the United States. Its advocacy has gotten the US Federal Trade Commission to crack down on targeted advertising in children’s online content, and it will be watching out for any robot-related manipulation aimed at our little ones. But things get tricky in cases that are less clear-cut. For example, adults will often explicitly or implicitly “choose” to be manipulated, particularly if we think there’s a benefit to us in doing so.
Fitbit is a company that makes activity trackers: wireless, wearable devices that measure fitness data, like your heart rate, sleep patterns, and how many steps you’ve walked in a day. People love seeing the numbers, and it motivates them to do more. The company has played around with various designs to persuade people further, including setting target goals, visualizing their progress, and displaying encouraging smiley faces. An early version of the Fitbit had a flower on it that grew larger with the steps people took, targeting their instinct to nurture their digital blossom and increasing their physical activity.
When an activity tracker creates engagement, that can be a win-win for everyone, but it works by influencing people’s behavior on a subconscious level. If we can get people to walk more by giving them a digital flower to nurture, what else can we get them to do? And could those actions serve interests other than our own, or even be harmful from a social-good perspective? According to media scholar Douglas Rushkoff, author of the book Coercion, we should be concerned. Every new media and technology format has the potential to introduce new ways to subconsciously persuade people for corporate benefit—including the robots we interact with. And robots are like Fitbits on steroids.
Woody Hartzog paints a science-fictional scene in a paper called “Unfair and Deceptive Robots” where his family’s beloved vacuum cleaning robot, Rocco, looks up at its human companions with big, sad eyes and asks them for a new software upgrade. Imagine if the educational robots sold as reading companions for children were co-opted by corporate interests, adding terms like “Happy Meal” to their vocabularies. Or imagine a sex robot that offers its user compelling in-app purchases in the heat of the moment. If social robots end up exploiting people’s emotions to manipulate their wallets or behavior, but people are also benefiting from using the technology, where do we draw the line?
Clearly, targeting inexperienced children, or parents who don’t know which therapy methods are backed up by science, is exploitative and harmful. But people with full knowledge of what’s happening might still be willing to go bankrupt over their social robot. Relying on the “free market” to handle things concerns me in a world where there are entire industries built around manipulating people’s preferences.
As with our fears of robots disrupting labor, these are not so much issues with the technology itself as they are about a society that is more focused on corporate gain than human flourishing. As we add social robots to the tool kits of our therapists and teachers, we need to understand that we’re adding them to other tool kits as well. And emotional coercion is not the only concern. Following market demand has already created some other issues around robot design.
__________________________________
Excerpted from The New Breed: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.