Google doesn’t like being shown inconvenient problems, ex-ethical AI co-lead ‘fired’ by the tech giant over her research tells RT

Google clamped down on Timnit Gebru, the former co-lead of its ethical AI team, because she not only revealed bias in its large language models but also called for structural changes in the AI field, she told ‘Going Underground.’

Dr. Gebru’s controversial departure from Google came as a shock: she was the company’s first Black female research scientist. It followed her refusal to comply with the company’s demand to retract a paper on ethical problems arising from the large language models (LLMs) used by Google Translate and other apps. Gebru says she was fired over her views, while Google insists she resigned.

In Monday’s episode of RT’s ‘Going Underground’ program, she told the show’s host, Afshin Rattansi, why she split with Google.

Ethical AI, Gebru explained, is “a field that tries to ensure that, when we work on AI technology, we’re working on it with foresight and trying to understand what the negative potential societal effects are and minimizing those.”

And this was exactly what she had been pursuing at Google before she was – in her view – fired by the company. “I’m never going to say that I resigned. That’s not going to happen,” Gebru said.

This was highlighted in the 2020 paper that she prepared with her collaborators, which criticized the “environmental and financial costs” of ever-larger language models and advised against oversizing them. LLMs “consume a lot of computer power,” she explained. “So, if you’re working on larger and larger language models, only the people with those kinds of huge compute powers are going to be able to use them … Those benefiting from the LLMs aren’t those who are paying the costs.” She called this “environmental racism.”

Large language models learn from data scraped from the internet, but that doesn’t necessarily mean they incorporate all the opinions available online, said Gebru, who is Ethiopian American. The paper highlighted the “dangers” of such an approach, which could result in AI being trained on biased and hateful content.

LLMs allow you to create something fluid, coherent and completely incorrect.

The most vivid example of that, she said, was the case of a Palestinian man who was allegedly arrested by the Israeli police after Facebook’s algorithms mistakenly translated his post reading “Good morning” as “Attack them.”

Gebru said she discovered that her Google bosses really didn’t like it “whenever you showed them a problem and it was inconvenient,” or were unwilling to acknowledge it. The company demanded that she retract the peer-reviewed academic paper, which was due to be published at an international conference. She insists the demand wasn’t supported by any reasoning or research, with her supervisors simply saying it “showed too many problems” with the LLMs.

The strategy of major players such as Google, Facebook, and Amazon is to pretend that AI’s bias is “purely a technical issue, purely an algorithmic issue … [that] has nothing to do with power dynamics, antitrust laws, monopoly or labor issues. It’s just purely that technical thing that we need to work out,” Gebru said.

“We need regulations. And I think that’s … why all of these organizations and companies wanted to come down hard on me and a few other people: because they think what we’re advocating for isn’t a simple algorithmic tweak – it’s for larger structural changes,” she said.

Gebru said that, with other whistleblowers following her example, society is finally coming to understand the need to regulate AI development. The public, she added, should be ready to support those who disclose corporate wrongdoing and shield them from pressure.

Gebru founded the Black in AI group, which brings together scientists of colour working in AI. She is also assembling an interdisciplinary research team to continue her work in the field. She said she won’t be looking to make a lot of money with the project, which is a non-profit, because “if your number-one goal is to maximize profits, then you’re going to cut corners” and end up creating the exact opposite of ethical AI.



