Insider Q&A: Ex-Google AI skeptic Timnit Gebru starts anew

Mar 25, 2022, 3:00 PM | Updated: Mar 26, 2022, 3:01 AM
Timnit Gebru poses for photos in Stanford, Calif., Monday, March 21, 2022. When she co-led Google's Ethical AI team, Gebru was a prominent insider voice questioning the tech industry's approach to artificial intelligence. (AP Photo/Jeff Chiu)

When she co-led Google’s Ethical AI team, Timnit Gebru was a prominent insider voice questioning the tech industry’s approach to artificial intelligence.

That was before Google pushed her out of the company more than a year ago. Now Gebru is trying to make change from the outside as the founder of the Distributed Artificial Intelligence Research Institute, or DAIR.

Born to Eritrean parents in Ethiopia, Gebru spoke with The Associated Press recently about how poorly Big Tech’s AI priorities — and its AI-fueled social media platforms — serve Africa and elsewhere. The new institute focuses on AI research from the perspective of the places and people most likely to experience its harms.

She’s also co-founder of the group Black in AI, which promotes Black employment and leadership in the field. And she’s known for co-authoring a landmark 2018 study that found racial and gender bias in facial recognition software. The interview has been edited for length and clarity.

Q: What was the impetus for DAIR?

A: After I got fired from Google, I knew I’d be blacklisted from a whole bunch of large tech companies. The ones that I wouldn’t be — it would be just very difficult to work in that kind of environment. I just wasn’t going to do that anymore. When I decided to (start DAIR), the very first thing that came to my mind is that I want it to be distributed. I saw how people in certain places just can’t influence the actions of tech companies and the course that AI development is taking. If there is AI to be built or researched, how do you do it well? You want to involve communities that are usually at the margins so that they can benefit. When there are cases where it should not be built, we can say, ‘Well, this should not be built.’ We’re not coming at it from a perspective of tech solutionism.

Q: What are the most concerning AI applications that deserve more scrutiny?

A: What’s so depressing to me is that even applications where now so many people seem to be more aware about the harms — they are increasing rather than decreasing. We’ve been talking about face recognition and surveillance based on this technology for a long time. There are some wins: a number of cities and municipalities have banned the use of facial recognition by law enforcement, for instance. But then the government is using all of these technologies that we’ve been warning about. First, in warfare, and then to keep the refugees — as a result of that warfare — out. So at the U.S.-Mexico border, you’ll see all sorts of automated things that you haven’t seen before. The number one way in which we’re using this technology is to keep people out.

Q: Can you describe some of the projects DAIR is pursuing that might not have happened elsewhere?

A: One of the things we’re focused on is the process by which we do this research. One of our initial projects is about using satellite imagery to study spatial apartheid in South Africa. Our research fellow (Raesetje Sefala) is someone who grew up in a township. It’s not her studying some other community and swooping in. It’s her doing things that are relevant to her community. We’re working on visualizations to figure out how to communicate our results to the general public. We’re thinking carefully about who we want to reach.

Q: Why the emphasis on distribution?

A: Technology affects the entire world right now and there’s a huge imbalance between those who are producing it and influencing its development, and those who are feeling the harms. Talking about the African continent, it’s paying a huge cost for climate change that it didn’t cause. And then we’re using AI technology to keep out climate refugees. It’s just a double punishment, right? In order to reverse that, I think we need to make sure that we advocate for the people who are not at the table, who are not driving this development and influencing its future, to be able to have the opportunity to do that.

Q: What got you interested in AI and computer vision?

A: I did not make the connection between being an engineer or a scientist and, you know, wars or labor issues or anything like that. For a big part of my life, I was just thinking about what subjects I liked. I was interested in circuit design. And then I also liked music. I played piano for a long time and so I wanted to combine a number of my interests together. And then I found the audio group at Apple. And then when I was coming back to doing a master’s and Ph.D., I took a class on image processing that touched on computer vision.

Q: How has your Google experience changed your approach?

A: When I was at Google, I spent so much of my time trying to change people’s behavior. For instance, they would organize a workshop and they would have all men — like 15 of them — and I would just send them an email, ‘Look, you can’t just have a workshop like that.’ I’m now spending more of my energy thinking about what I want to build and how to support the people who are already on the right side of an issue. I can’t be spending all of my time just trying to reform other people. There’s plenty of people who want to do things differently, but just aren’t in a position of power to do that.

Q: Do you think what happened to you at Google has brought more scrutiny to some of the concerns you had about large language models? Could you describe what they are?

A: Part of what happened to me at Google was related to a paper we wrote about large language models — a type of language technology. Google search uses it to rank queries, for those question-and-answer boxes that you see, machine translation, autocorrect and a whole bunch of other stuff. And we were seeing this rush to adopt larger and larger language models with more data, more compute power, and we wanted to warn people against that rush and to think about the potential negative consequences. I don’t think the paper would have made waves if they didn’t fire me. I am happy that it brought attention to this issue. I think that it would have been hard to get people to think about large language models if it wasn’t for this. I mean, I wish I didn’t get fired, obviously.

Q: In the U.S., are there actions that you’re looking for from the White House and Congress to reduce some of AI’s potential harms?

A: Right now there’s just no regulation. I’d like for some sort of law such that tech companies have to prove to us that they’re not causing harms. Every time they introduce a new technology, the onus is on the citizens to prove that something is harmful, and even then we have to fight to be heard. Many years later there might be talk about regulation — then the tech companies have moved on to the next thing. That’s not how drug companies operate. They wouldn’t be rewarded for not looking (into potential harms) — they’d be punished for not looking. We need to have that kind of standard for tech companies.

Copyright © The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
