Before joining Google in 2018, Gebru worked with MIT researcher Joy Buolamwini on a project called Gender Shades that revealed face analysis technology from IBM and Microsoft was highly accurate for white men but highly inaccurate for Black women. It helped push US lawmakers and technologists to question and test the accuracy of face recognition on different demographics, and contributed to Microsoft, IBM, and Amazon announcing they would pause sales of the technology this year. Gebru also cofounded an influential conference called Black in AI that tries to increase the diversity of researchers contributing to the field.

Gebru’s departure was set in motion when she collaborated with researchers inside and outside of Google on a paper discussing ethical issues raised by recent advances in AI language software.


Researchers have made leaps of progress on problems like generating text and answering questions by creating giant machine learning models trained on huge swaths of online text. Google has said the technology has made its lucrative, eponymous search engine more powerful. But researchers have also shown that creating these more powerful models consumes large amounts of electricity because of the vast computing resources required, and have documented how the models can replicate biased language on gender and race found online.

Gebru says her draft paper discussed those issues and urged responsible use of the technology, for example by documenting the data used to create language models. She was troubled when a senior manager insisted she and other Google authors either remove their names from the paper or retract it altogether, particularly when she couldn’t learn the process used to review the draft. “I felt like we were being censored and thought this had implications for all of ethical AI research,” she says.

Gebru says she failed to convince the senior manager to work through the issues with the paper; she says the manager insisted that she remove her name. On Tuesday, Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation.

Gebru also sent an email to a wider list within Google’s AI research group saying that managers’ attempts to improve diversity had been ineffective. She included a description of her dispute about the language paper as an example of how Google managers can silence people from marginalized groups. Platformer published a copy of the email Thursday.

On Wednesday, Gebru says, she learned from her direct reports that they had been told she had resigned from Google and that her resignation had been accepted. She discovered her corporate account was disabled.

An email sent by a manager to Gebru’s personal address said her resignation should take effect immediately because she had sent an email reflecting “behavior that is inconsistent with the expectations of a Google manager.” Gebru took to Twitter, and outrage quickly grew among AI researchers online.

Many criticizing Google, both from inside and outside the company, noted that the company had at a stroke damaged the diversity of its AI workforce and also lost a prominent advocate for improving that diversity. Gebru suspects her treatment was in part motivated by her outspokenness around diversity and Google’s treatment of people from marginalized groups. “We have been pleading for representation, but there are barely any Black people in Google Research, and, from what I see, none in leadership whatsoever,” she says.