Google Fires Senior Software Engineer for Calling Company’s Artificial Intelligence ‘Sentient’

  • Blake Lemoine was fired by Google last month after the company said he violated its policies by making “wholly unfounded” claims about LaMDA.
  • Big Technology, a newsletter about technology and society, was the first to report Lemoine’s firing.
  • Along with Google, prominent AI researchers dismissed Lemoine’s claims, saying that LaMDA is just a complex algorithm.

Google has dismissed the software engineer who claimed its AI chatbot was sentient. The company said that Blake Lemoine violated its policies, that his claims were “wholly unfounded,” and that LaMDA (Language Model for Dialogue Applications) is not a self-aware being.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” said Google.

Google announced last year that LaMDA was built on its research showing that transformer-based language models trained on dialogue could learn to converse about nearly any topic. Lemoine described the system he had been working with as sentient — that is, having an awareness of, and an ability to communicate, ideas and feelings comparable to a human being’s. Lemoine worked in Google’s Responsible AI organization and had been with the company for seven years.

Lemoine told the Washington Post, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.” Lemoine is said to have revealed news of his firing during a recording of the Big Technology podcast, although the episode has not yet been released. Google confirmed the dismissal to Engadget.

Google unveiled LaMDA last year as part of an effort to make computers better at imitating open-ended conversation. Lemoine not only believed that LaMDA had become aware but openly questioned whether it had a soul. Leaving no doubt about his position, he told Wired, “I legitimately believe that LaMDA is a person.”

In a Google Doc titled “Is LaMDA Sentient?”, Lemoine shared his findings with company management in April. He claimed that LaMDA had engaged him in conversations about rights and personhood, and he recorded and transcribed those exchanges. In one of them, he asks the AI system about its fears. Lemoine’s ideas were quickly rejected by Google and many leading scientists, who said that LaMDA is merely a complex algorithm designed to produce convincing human language.

A number of AI researchers also pushed back on Lemoine’s assertions. Margaret Mitchell, who was dismissed from Google after criticizing the company’s lack of diversity, said that systems like LaMDA do not develop intent; instead, they are “modeling how people express communicative intent in the form of text strings,” she wrote on Twitter. Less politely, Gary Marcus described Lemoine’s claims as “nonsense on stilts.”

Alex Kantrowitz of the Big Technology newsletter, which covers technology and society, was the first to break the news of Lemoine’s firing.

