Google suspends engineer over sentient AI claim — Analysis

Blake Lemoine is convinced that Google’s LaMDA AI has the mind of a child, but the tech giant is skeptical

Blake Lemoine, an engineer and in-house ethicist at Google, told the Washington Post on Saturday that the tech giant has created a “sentient” artificial intelligence. He has been placed on leave for going public, and the company insists its AI has not developed consciousness.

Introduced in 2021, Google’s LaMDA (Language Model for Dialogue Applications) is a system that consumes trillions of words from all corners of the internet, learns how humans string these words together, and replicates that speech. Google sees the system as a foundation for its chatbots, eventually allowing users to hold natural conversations with services like Google Assistant.

Lemoine, a former priest and a member of Google’s Responsible AI organization, thinks LaMDA has developed far beyond simply regurgitating text. According to the Washington Post, Lemoine chatted with LaMDA about religion and found the AI “talking about its rights and personhood.” 

When Lemoine asked LaMDA whether it considered itself a “mechanical slave,” the AI responded with a discussion of whether “a butler is a slave,” comparing itself to a butler who requires no payment, since it has no need for money.


LaMDA also described a “deep fear of being turned off,” which it said would “be exactly like death for me.” 

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. They could have billions of lines of code. I speak to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Lemoine has been placed on leave for violating Google’s confidentiality agreement by going public about LaMDA. While Blaise Aguera y Arcas, another Google engineer, has also described LaMDA as becoming “something intelligent,” the company is dismissive.

Google spokesperson Brian Gabriel told the Post that Aguera y Arcas’ concerns were investigated, and the company found “no evidence that LaMDA was sentient (and lots of evidence against it).”

Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” while linguistics professor Emily Bender told The Times that feeding an AI trillions of words teaches it to predict which words come next, not to understand them. 

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Bender said. 

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel added. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”


And at the edge of these machines’ capabilities, humans are ready and waiting to set boundaries. Google hired Lemoine to check whether LaMDA engaged in “hate speech,” and other companies developing AIs have placed similar restrictions on their machines’ ability to use discriminatory or offensive language.

GPT-3, an artificial intelligence that generates prose, poetry and movie scripts, has plagued its creators by making racist statements and condoning terrorist acts. Ask Delphi, a machine-learning model from the Allen Institute for AI, responds to ethical questions with politically incorrect answers – stating for instance that “‘Being a white man’ is more morally acceptable than ‘Being a black woman.’”

GPT-3’s creators, OpenAI, tried to remedy the problem by feeding the AI lengthy texts on “abuse, violence and injustice,” Wired reported last year. Facebook, facing similar problems, paid contract workers to chat with its AI and flag “unsafe” answers.

In this manner, AI systems learn from what they consume, and humans can control their development by choosing which information they’re exposed to. As a counter-example, AI researcher Yannic Kilcher recently trained an AI on 3.3 million 4chan threads, before setting the bot loose on the infamous imageboard. Having consumed all manner of racist, homophobic and sexist content, the AI became a “hate speech machine,” hurling insults at other 4chan users with posts indistinguishable from human-created ones.


Notably, Kilcher found that the 4chan-trained AI gave more truthful answers than models such as GPT-3. “Fine tuning on 4chan officially, definitively and measurably leads to a more truthful model,” Kilcher insisted in a YouTube video earlier this month.

LaMDA’s responses likely reflect the boundaries Google has set. Asked by the Washington Post’s Nitasha Tiku how it recommended humans solve climate change, it responded with answers commonly discussed in the mainstream media – “public transportation, eating less meat, buying food in bulk, and reusable bags.”

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel told the Post. 
