How Can Machines Learn Human Values?
Brian Christian in Conversation with Andrew Keen on Keen On
The coronavirus pandemic is dramatically disrupting not only our daily lives but society itself. This show features conversations with some of the world’s leading thinkers and writers about the deeper economic, political, and technological consequences of the pandemic. It’s our new daily podcast trying to make long-term sense out of the chaos of today’s global crisis.
On today’s episode, Andrew Keen talks with Brian Christian about his new book, The Alignment Problem, and the questions at the intersection of computer science, ethics, and the law that determine whether a statistical tool can be fair.
From the episode:
Andrew Keen: I was looking through the internet, as I sometimes do when I’m preparing for these sorts of interviews, and I found that an alternative title to The Alignment Problem was How Can Machines Learn Human Values? I assume you nixed that subtitle because it’s kind of confusing. Can we separate these machines, which we of course create, from human values, which are part of us?
Brian Christian: I mean, I think at some level, the field of machine learning, as a whole, is about getting systems to do what we want. The question is, how do we move from a world in which what we want is extremely narrowly defined to one in which these systems are part of the world, sharing society with us? And this is broadly the field of what’s called AI alignment.
Typically, the way that machine learning systems are built is you have something called a training set, which is a set of examples, and then something called an objective function, which is a mathematical way of specifying what you want the system to minimize or maximize. And there is this real question of, has the system learned what you think it’s learned? Is the system going to behave in the real world the way that you intend for it to behave, the way you expect it to? There are any number of things that can go wrong. In some sense, the book is a catalog of these kinds of harrowing tales of all the ways it can go wrong.
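To make that framework concrete, here is a minimal sketch, not from the book or the conversation, of a training set and an objective function using a toy least-squares model in Python. Every name and number in it is illustrative; the point is only the shape of the setup Christian describes, and how a system that minimizes its objective on the training set can still misbehave on inputs it never saw.

```python
# A minimal sketch of the training-set / objective-function framework:
# fit a toy model by minimizing an objective over a fixed set of examples.
import numpy as np

# Training set: example inputs x with labels y (here, noisy samples of a curve).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100)                        # inputs seen in training
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=100)   # labels

# Model: a straight line with parameters (w, b).
def predict(w, b, x):
    return w * x + b

# Objective function: mean squared error, the quantity the system is
# asked to minimize over the training set.
def objective(w, b, x, y):
    return np.mean((predict(w, b, x) - y) ** 2)

# Minimize the objective with plain gradient descent.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    err = predict(w, b, x) - y
    w -= lr * np.mean(2 * err * x)   # d(objective)/dw
    b -= lr * np.mean(2 * err)       # d(objective)/db

print(f"training error: {objective(w, b, x, y):.3f}")

# "Has the system learned what you think it's learned?" On inputs far
# from the training data, the fitted model can be badly wrong even
# though its objective on the training set looks acceptable.
x_new = np.array([2.0, 3.0])         # outside the training range [0, 1]
print("predictions:", predict(w, b, x_new))
print("true values:", np.sin(2 * np.pi * x_new))
```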
Andrew Keen: I was just reading The Economist this morning—fighter aircraft will have AI pilots. We know about Prop 25. The Los Angeles police just banned the use of commercial facial recognition. And Amazon did something similar when it scrapped an AI recruiting tool that was biased against women. A lot of this stuff seems to touch on our particular concerns in 2020 with discrimination: racial, gender, and sexual discrimination. Is that fair?
Brian Christian: Yeah, I think that’s one of the main currents in the ethical as well as safety issues around machine learning. And I think part of that is again going back to the basic framework with which these systems are trained. You have a set of examples called the training data, and then you turn the system loose and you hope that what it actually encounters in the real world resembles its training data. And so, I think one of the first questions that one can ask is, are those training data sufficiently diverse? Do they actually represent the world as it is? And we’ve seen a number of, I would say, scandals in the AI community.
There was, for example, a very widely used academic dataset of faces called “Labeled Faces in the Wild.” And the team that put this together, there was no ill will whatsoever, but if you’re trying to assemble a database of millions of faces, you go to the internet. And so, they looked at news articles, scraping these faces, along with the image captions that identified who the people were, from the front pages of newspapers. And this was happening towards the end of the 2000s. So, what you end up with is a data set that contains the people who would have appeared on the front page of a newspaper in the late 2000s. The number one person in that data set was then-President George W. Bush. In fact, there are twice as many pictures of George W. Bush as there are of all Black women combined.
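The imbalance Christian describes is the kind of thing a simple audit of a dataset’s metadata can surface. Here is a rough sketch of such a count, assuming a local copy of a metadata file of name/image-count pairs like the lfw-names.txt list that Labeled Faces in the Wild publishes; the filename and format handling here are illustrative, not a prescribed tool.

```python
# A minimal audit sketch: count how many images each identity contributes
# to a face dataset, given whitespace-separated "name  image_count" lines.
from collections import Counter

counts = Counter()
with open("lfw-names.txt") as f:     # hypothetical local copy of the metadata
    for line in f:
        line = line.strip()
        if not line:
            continue
        name, n = line.rsplit(maxsplit=1)
        counts[name] = int(n)

total = sum(counts.values())
print(f"{len(counts)} identities, {total} images")

# The most heavily represented identities dominate what a model trained
# on this data will see.
for name, n in counts.most_common(5):
    print(f"{name}: {n} images ({100 * n / total:.1f}% of the data)")
```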
________________________
Subscribe now on iTunes, Spotify, Stitcher, or wherever else you find your podcasts!
Brian Christian is the author of The Most Human Human, which was named a Wall Street Journal bestseller, a New York Times Editors’ Choice, and a New Yorker favorite book of the year. He is the author, with Tom Griffiths, of Algorithms to Live By, a #1 Audible bestseller, an Amazon best science book of the year, and an MIT Technology Review best book of the year. His third book, The Alignment Problem, has just been published in the US and is forthcoming in the UK and in translation in 2021.