Google accused of ‘creepy’ speech policing — Analysis
Google’s attempts to persuade people to use ‘inclusive language’ are flawed and intrusive, activists say
Google’s new ‘inclusive language’ assistant uses artificial intelligence to detect “discriminatory” words and suggests that users swap them for politically appropriate terminology. Privacy and freedom advocates claim that the feature is harmful to free speech and “freedom of thought.”
Google announced the tool at the beginning of April, as part of a host of “assistive writing features” for Google Docs users. These AI-powered extensions suggest shorter and more precise phrases to writers; others polish grammar.
However, Google said that with its new ‘inclusive language’ assistant, “Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience.”
Users soon noticed its prompts creeping into their work and posted screenshots of Google’s suggestions on Twitter. The term ‘motherboard’ is flagged as potentially insensitive, as is ‘housewife’, which Google suggests should be replaced with ‘stay-at-home-spouse’.
‘Mankind’ should be replaced with ‘humankind’, ‘policeman’ with ‘police officer’, and ‘landlord’ with ‘property owner’ or ‘proprietor’. Other technical phrases flagged, Vice reported last week, include ‘blacklist/whitelist’ and ‘master/slave’.
Despite highlighting these common terms as potentially offensive, Google’s assistant placed no warnings on a transcript of an interview with former Ku Klux Klan leader David Duke, in which Duke repeatedly used the word ‘n****r’ to describe black people, Vice’s reporters discovered.
The feature, which can be turned off and is currently available only to corporate users of Google’s Workspace software, has alarmed privacy and anti-censorship activists.
“With Google’s new assistive writing tool, the company is not only reading every word you type but telling you what to type,” Silkie Carlo, director of Big Brother Watch, told The Telegraph on Saturday. “This speech-policing is profoundly clumsy, creepy and wrong… Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought.”
Others view the assistant as a restriction on creativity. “What if ‘landlord’ is the better choice because it makes more sense, narratively, in a novel?” scholar Lazar Radic told The Telegraph. “What if ‘house owner’ sounds wooden and fails to invoke the same sense of poignancy? Should all written pieces – including written forms of art, such as novels, lyrics, and poetry – follow the same, boring template?”
Google has required its employees to use this inclusive language in code and documentation since 2020, and has published an exhaustive list of forbidden words. Under Google’s rules, ‘black box testing’ becomes ‘opaque box testing’, ‘man hours’ is replaced with ‘person hours’, and the offhand plural ‘guys’ is strictly forbidden.
Google claims that the AI behind the feature is still in development and learning from user input, and says its ultimate goal is to purge English of “all” discrimination and bias.
“Our technology is always improving, and we don’t yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases,” a Google spokesperson told The Telegraph.