Artificial intelligence (AI) has the potential to inform the way we answer society’s greatest challenges and can help us enact real change. But AI-based solutions can come with risks we may not understand and aren’t prepared to face. Timnit Gebru is a research scientist and co-lead of the Ethical Artificial Intelligence Team at Google whose work centers on the intersection of AI and ethics, and how that work can inform public policy. Gebru was on the Wharton campus to give a talk to students, but stopped into the Wharton Business Daily studio before her lecture to share some thoughts about the state of AI with host Dan Loney.
Interview Highlights
1. We need greater education about how data is being collected and used.
“The thing I have been most concerned about is the ‘move fast and break things’ attitude. And with my research, that’s one of the things I’m trying to change. When you have software and these tools that are available for people to just download really quickly and collect data really quickly, you tend to not think about certain things that you should think about. And so we need to have incentive structures out there to slow down a little bit and educate people on what kind of things they should think about when they’re collecting data.”
2. Companies aren’t aware of issues related to AI, and aren’t prepared to address them.
“I don’t know if people are truly aware. For example, I just learned about a company called Clearview AI from a New York Times article. It was this little company that’s scraping billions of photos from Facebook, and law enforcement is using it to identify suspects. And it’s not just a database of criminals that they’re using; it’s a database of anybody. And so the fear is that privacy as we know it would be over — that you could just walk down the street one day, someone can take a picture of you, and that’s what’s going to happen. I knew that this possibility existed, but I didn’t know this company existed.”
3. Technology and innovation are outpacing regulation and public policy.
“One of the things that’s happening is that technology and innovation are outpacing regulation and policy. All of these little things are popping up that we didn’t know about, and so I don’t think everyone is aware. One of my papers with Joy Buolamwini showed that there were high disparities in error rates across different groups of people for automated facial analysis tools. That paper just came out in 2018, and we showed, for the first time, how high the disparities in error rates were between darker-skinned women and lighter-skinned men, and that spurred a lot of changes in industry and policy. That’s the fastest I’ve seen from learning about something to policy, but there are many such problems that people are not aware of.”
4. There’s still a gap in diverse voices in the tech industry.
“The overall state (of diversity in tech) is not very good. It’s just not. Rachel Thomas is someone I really admire because she writes so clearly about some of these topics. One thing she wrote was how diversity branding hurts diversity. A lot of corporations and institutes talk so much about diversity, and so you’d think that there’s a lot going on, but that hurts actual diversity because it creates a backlash. I don’t like so much of the talk about diversity and I don’t know if it’s getting better. But one positive thing I can talk about is that I started a group called Black in AI with my cofounder, Rediet Abebe. We’ve poured our heart and soul into it.”
One of the group’s initiatives is working to make the Conference on Neural Information Processing Systems (NeurIPS), the largest academic machine learning and AI conference, more inclusive.
“Many times your experiences at these conferences will make or break whether you want to be in a particular field. If you feel super isolated and you don’t feel welcome, you’ll think, ‘I don’t think this is for me. I’m going to do something else.’”
“When I went to NeurIPS in 2016, there were about 5,500 people, and I counted five black people internationally. And, after all of this work we did, we increased the presence to, let’s say, 300 or 400 black people out of 15,000. That’s still a really low number, but it makes a huge difference. You see when people go to the conference, they don’t feel as isolated as they did before. It makes a huge difference, even if it’s a small number.”
5. Silver lining: there’s growing interest in how artificial intelligence intersects with ethics.
“There’s been a lot more activity and conversations in this area than when I first started working on this kind of topic. (Back then) it was very difficult for me to even explain to people why it’s important to think about these things. I’ve been noticing that this year there are many more conversations about the intersection of labor rights, people working for low wages annotating images, and the intersection of that data collection with ethics.”
— Emily O’Donnell
Posted: January 31, 2020