Google has fired Blake Lemoine, the engineer and in-house ethicist who last month publicly claimed that an AI program created by Google had become sentient, citing violations of its data security policies.
Lemoine was dismissed on Friday, with Google confirming the news to Big Technology, an industry blog. He had been on leave for over a month, since telling the Washington Post that the company’s LaMDA (Language Model for Dialogue Applications) had become conscious.
A former priest and Google’s in-house ethicist, Lemoine chatted extensively with LaMDA and found that the program talked about its “rights and personhood.” When the conversation turned to religion, he said, it expressed a “deep fear of being turned off.”
“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
In its statement confirming Lemoine’s firing, the company said it had conducted 11 reviews of LaMDA and “found Blake’s claims that LaMDA is sentient to be wholly unfounded.” Even at the time of Lemoine’s interview with the Post, Margaret Mitchell, the former co-lead of Ethical AI at Google, described LaMDA’s sentience as “an illusion,” explaining that the model had been fed trillions of words from the internet and could mimic human speech while remaining completely inanimate.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Emily Bender, a linguistics professor, told the newspaper. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”
According to Google, Lemoine’s continued insistence on speaking out publicly violated its data security policies and led to his firing.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company said. “We will continue our careful development of language models, and we wish Blake well.”