Business

Why Timnit Gebru Isn’t Waiting for Big Tech to Fix AI’s Problems

Three hundred and sixty-four days after she lost her job as co-lead of Google’s ethical artificial intelligence (AI) team, Timnit Gebru is nestled into a couch at an Airbnb rental in Boston, about to embark on a new phase in her career.

Google hired Gebru in 2018 to help ensure its AI products didn’t perpetuate racism or other social inequalities. She published research, spoke at conferences and hired prominent researchers of color. Her personal experiences with racism and sexism at work also led her to speak out internally. But it was one of her research papers that ultimately led to her departure. “I had so many issues at Google,” Gebru tells TIME over a Zoom call. “But the censorship of my paper was the worst instance.”


Gebru and her co-authors questioned the ethics of large language models, AI systems trained to comprehend and replicate human language. Google is a leader in this area of research, part of an industry that PwC estimates could contribute $15.7 trillion to the global economy by 2030. But Gebru’s paper argued that, in their rush to build bigger and more powerful language models, companies including Google weren’t stopping to think about the kinds of biases being built into them—biases that could entrench existing inequalities rather than help solve them. The paper also raised concerns about the environmental toll of these models, which consume huge amounts of energy to train. In their battle for AI supremacy, the authors suggested, Big Tech companies were prioritizing profit over safety, and the industry needed to slow down. “It was like, You built this thing, but mine is even bigger,” Gebru recalls of the atmosphere at the time. “When you have that attitude, you’re obviously not thinking about ethics.”

Gebru’s departure from Google set off a firestorm in the AI world. The company appeared to have forced out one of the world’s most respected ethical AI researchers after she criticized some of its most lucrative work. The move drew fierce opposition from researchers both inside and outside the company.

The dispute didn’t just raise concerns about whether corporate behemoths like Google’s parent, Alphabet, can be trusted to ensure this technology benefits humanity and not just their bottom lines. It also raised harder questions: When AI is trained on data from a world rife with systemic injustice, who benefits and who is harmed? Are the AI companies really listening to the people they hired to reduce those harms? And who decides how much collateral damage is acceptable in the pursuit of AI dominance?


Over the past decade, AI has slowly been incorporated into our daily lives, from facial recognition to digital assistants like Siri and Alexa. These largely unregulated applications are extremely lucrative for their owners, but they are already causing serious harm to some of the people they are used on.

Gebru is one of the leading figures in a growing alliance of activists, scientists, regulators, technologists and scholars working to rethink what AI should mean. Some of her fellow travelers remain inside Big Tech, mobilizing those insights to nudge companies toward more ethical AI. Others are crafting policy on both sides of the Atlantic, preparing new rules that would set stricter limits on the companies that stand to benefit most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased, and toward a focus on power: who’s building AI, who benefits from it, and who gets to decide what its future looks like.

The day after our Zoom call, a year to the day since her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group that will examine how AI can be made to work for everyone. “We need to let the people who are harmed by technology imagine the future that they want,” she says.


Gebru was still a teenager when war broke out between Ethiopia, where she had lived her entire life, and Eritrea, where her parents were born. Forced to flee Addis Ababa, the Ethiopian capital, and after a “miserable” experience with the U.S. asylum system, she eventually made it to Massachusetts as a refugee. There she began to experience racism in the American school system, where some teachers tried to dissuade her, a high-achieving student, from taking AP classes. But it was a later encounter with the police that helped set her on the path toward ethical technology. A friend of hers, a Black woman, had been assaulted in a bar, and Gebru called the police to report it. When officers arrived, they handcuffed her friend and later put her in a cell. Gebru says a report of the assault was never filed. “It was a blatant example of systemic racism,” she says.

A discussion about predictive policing in Los Angeles, 2016 (Patrick T. Fallon—The Washington Post/Getty Images)

When Gebru was a graduate student at Stanford in the early 2010s, Silicon Valley companies were starting to pour massive sums into machine learning, then an obscure corner of AI. Given enough data and processing power, computers could learn to perform tasks like recognizing speech, identifying faces or targeting individuals with ads based on their past behavior. For decades, AI research had relied on rules hard-coded by humans, an approach that couldn’t handle such difficult tasks at scale. But by feeding computers enormous amounts of data—now available thanks to the Internet and smartphone revolutions—and by using high-powered machines to spot patterns in those data, tech companies became enamored with the belief that this method could unlock new frontiers in human progress, not to mention billions of dollars in profits.

In many respects, they were right. Some of the best-known and most profitable businesses of the 21st century have been built on machine learning. It powers Amazon’s recommendation engines and warehouse logistics, and underpins Google’s search and assistant functions as well as its targeted-advertising business. It also promises to transform fields yet to come, dangling tantalizing prospects like AI lawyers that could give affordable legal advice, AI doctors that could diagnose ailments within seconds, even AI scientists.

Read more: Millions of Americans Have Lost Jobs in the Pandemic—And Robots and AI Are Replacing Them Faster Than Ever

By the time Gebru left Stanford in 2017, she knew she wanted to apply her newly acquired expertise to improving the ethics of a field long dominated by white men. A 2016 ProPublica investigation had revealed that U.S. courts were using software to predict how likely defendants were to reoffend, to help guide judges’ sentencing decisions. By comparing the software’s predictions with actual reoffending rates, ProPublica found that the AI was not only often wrong, but also dangerously biased: it was more likely to rate Black defendants who did not reoffend as “high risk,” and white defendants who went on to reoffend as “low risk.” The findings showed that when an AI system is trained on historical data that reflects inequalities—as most data from the real world does—the system will project those inequalities into the future.

Gebru was struck both by her own experience with the police and by how little diversity there was in the field of AI. She put those concerns into writing shortly after attending a 2015 AI conference at which she was one of only a handful of Black participants. “I am very concerned about the future of AI,” she wrote. “Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology.”

By 2017 Gebru was an AI researcher at Microsoft, where she co-authored a paper called Gender Shades. It showed that facial-recognition software developed by Microsoft and IBM was nearly perfect at identifying images of people with lighter skin, but frequently failed on darker-skinned faces, especially those of Black women. The problem lay in the training data, which contained many images of white men and very few of Black women. Gebru worked on the research with Joy Buolamwini of the MIT Media Lab, and it prompted IBM and Microsoft to update their data sets.

Google hired Gebru soon after Gender Shades was published, at a moment when Big Tech companies were facing growing scrutiny over the ethics of their AI work. While Gebru was interviewing, a group of Google employees was protesting the company’s contract with the Pentagon to build AI that could be used in drone warfare. Google eventually let the contract lapse, but several workers who took part in activism after the protests say they were pushed out or fired. Gebru had reservations about joining, but she believed her presence inside the company could make a difference. “I went into Google with my eyes wide open in terms of what I was getting into,” she says. “What I thought was, This company is a huge ship, and I won’t be able to change its course. But maybe I’ll be able to carve out a small space for people in various groups who should be involved in AI, because their voices are super important.”

A 2020 demonstration of a Google AI that recognizes hands (David Paul Morris—Bloomberg/Getty Images)

After a couple of years at Google, Gebru concluded that publishing research was a more effective way to drive change than trying to persuade her superiors, whom she found intransigent. So when colleagues asked her about the ethics of large language models, she joined them in writing a paper on the subject. The hype around such models was then reaching fever pitch in Silicon Valley. A few months earlier, the Microsoft-backed lab OpenAI had used its GPT-3 model to produce an opinion piece published in the Guardian. “A robot wrote this entire article. Are you scared yet, human?” the headline asked. Investment was flooding into tech firms’ AI research teams, all of them competing to build models trained on ever bigger data sets.

To Gebru and her colleagues, the hype obscured the ways the industry was heading in the wrong direction. Despite appearances, these AIs were nowhere near sentient; the paper compared them to “parrots” that were simply very good at recombining words from their training data, which made them all the more susceptible to bias. One problem was that companies were scraping vast swaths of text from the Internet to train their models. “This means that white supremacist and misogynistic, ageist, etc., views are overrepresented,” Gebru and her colleagues wrote in the paper. At its core was the same maxim that had underpinned Gebru and Buolamwini’s facial-recognition research: if you train an AI on biased data, it will give you biased results.

Read more: Artificial Intelligence Can Now Craft Original Jokes—And That’s No Laughing Matter

The paper that Gebru and her colleagues wrote is now “essentially canon” in the field of responsible AI, according to Rumman Chowdhury, the director of Twitter’s machine-learning ethics, transparency and accountability team. She says it cuts to the core of the questions that ethical AI researchers are attempting to get Big Tech companies to reckon with: “What are we building? What are we doing to make it happen? And who is it impacting?”

But Google’s management was not happy. After the paper was submitted for internal review, Gebru received an email from a vice president raising concerns about it. Gebru says the initial objections included that the paper painted too negative a picture of the technology. Google would later argue that the research failed to account for safeguards its teams had built against bias, or for recent advances in energy efficiency. The company declined to comment for this article.

Google asked Gebru either to retract the paper or to remove her name and those of her Google colleagues from it. Gebru says she replied in an email that she would not retract the paper, and would remove the names only if the company came clean about its objections and who exactly had raised them—otherwise she would resign after tying up loose ends with her team. She then separately emailed a group of women colleagues in Google’s AI division, accusing the company of “silencing marginalized voices.” On Dec. 2, 2020, Google’s response came: it could not agree to her conditions, and would accept her resignation. In fact, the email said, Gebru would be leaving Google immediately because her message to colleagues showed “behavior that is inconsistent with the expectations of a Google manager.” Gebru says she was fired; Google says she resigned.

In an email to staff after Gebru’s departure, Jeff Dean, the head of Google AI, attempted to reassure concerned colleagues that the company was not turning its back on ethical AI. “We are deeply committed to continuing our research on topics that are of particular importance to individual and intellectual diversity,” he wrote. “That work is critical and I want our research programs to deliver more work on these topics—not less.”


Today, the idea that AI can encode human biases is no longer controversial; it is accepted by AI practitioners across the industry, including at Big Tech companies. But to those who share Gebru’s outlook, it is only the first epiphany in a much broader—and more critical—worldview. In this school of thought, the problems with AI go beyond any individual program; they are rooted in the power dynamics of the tech sector itself, whose founders and CEOs, at companies like Amazon and Google, have amassed more wealth than almost anyone in human history. These critics see AI as the latest in a long line of tools that capitalist elites use to consolidate wealth, create new markets and mine the private lives of humans for data and profit.

A facial-recognition AI that can pick individuals out of a crowd, on display at CES in Las Vegas, 2019 (David McNew—AFP/Getty Images)

To others in this emerging nexus of resistance, Gebru’s ouster from Google was a telling signal. “Timnit’s work has pretty unflinchingly pulled back the veil on some of these claims, that are fundamental to these companies’ projections, promises to their boards and also to the way they present themselves in the world,” says Meredith Whittaker, a former researcher at Google who resigned in 2019 after helping lead worker resistance to its cooperation with the Pentagon. “You saw how threatening that work was, in the way that Google treated her.”

Whittaker was recently appointed senior adviser on AI at the Federal Trade Commission (FTC). “What I am concerned about is the capacity for social control that [AI] gives to a few profit-driven corporations,” says Whittaker, who was not speaking in her FTC capacity. “Their interests are always aligned with the elite, and their harms will almost necessarily be felt most by the people who are subjected to those decisions.”

It’s a viewpoint that Big Tech could not disagree with more, but one that European regulators are also paying attention to. The E.U. is weighing a wide-ranging draft AI act that, if passed, could restrict forms of AI that lawmakers deem harmful, including real-time facial recognition—though activists say it doesn’t go far enough. In the U.S., San Francisco is one of several cities that have banned government use of facial recognition. Gebru supports regulation that would define which uses of AI are acceptable and set stronger safeguards around the rest. She recently told European lawmakers scrutinizing the new bill: “The No. 1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it.”

Gebru also argues that stronger legal protections for tech workers are essential to ensuring companies don’t build harmful AI, since workers are often the first line of defense in cases like hers. There is progress on that front too. In October 2021, California passed the Silenced No More Act, which bars companies from using nondisclosure agreements to silence employees who speak out about harassment and discrimination. In January 2021, hundreds of Google employees formed a union. And in the fall, Facebook whistle-blower Frances Haugen disclosed thousands of pages of internal documents to the authorities, seeking federal whistle-blower protections.

Read more: Inside Frances Haugen’s Decision to Take on Facebook

Gebru sees DAIR, her new research institute, as one piece of this larger effort to make tech more accountable, one that puts communities’ needs ahead of the profit incentive. At DAIR, she will work with researchers around the world and across disciplines to examine the consequences of AI, with a particular focus on the African continent and the African diaspora in the U.S. One of DAIR’s first projects will use AI to analyze satellite imagery of townships in South Africa to better understand legacies of apartheid. The institute is also working to create an industry standard for data-set quality that could help reduce bias, requiring researchers to document how their data were gathered, what their limitations are and how they can best be used. Gebru says DAIR’s funding model gives it freedom too: it has received $3.7 million from a group of large philanthropies, including the MacArthur, Ford and Open Society foundations. It’s a novel way of funding AI research, with few ties to the system of Silicon Valley money and patronage that often decides which areas of research are worth pursuing, not only within Big Tech companies but also within the academic institutions they fund.

Gebru acknowledges that DAIR can conduct only a limited number of studies; its funding is a fraction of what Big Tech has committed to AI development. But by working with a group of committed collaborators, she has shown the value of pushing for AI that works better for everyone. They are still the underdogs, yet their impact is growing. “When you’re constantly trying to convince people of AI harms, you don’t have the space or time to implement your version of the future,” Gebru says. “So we need alternatives.”

—With reporting by Nik Popli
