- A Google engineer had been testing the company’s artificial intelligence tool called LaMDA.
- After he claimed that the chatbot had feelings, Google disagreed with him.
- He was put on paid leave by Google for violating confidentiality.
What did the Google engineer claim?
Blake Lemoine, an engineer in Google’s A.I. division working on a chatbot built with the company’s LaMDA (Language Model for Dialogue Applications) system, felt that he might be talking with a seven- or eight-year-old child who understands physics.
Lemoine also said that the chatbot talked with him about rights and personhood. He shared his conclusions with Google’s executive team in April this year, but Google did not make the matter public.
According to Lemoine, LaMDA clearly understood what he was writing to it. The two also discussed Victor Hugo’s Les Misérables and a fable involving animals that LaMDA came up with. The chatbot described the different feelings it claims to have and the difference between feeling happy and feeling angry.
LaMDA also shared what it is most afraid of, writing: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
Google placed Lemoine on paid leave for disclosing company information. The company said that aggressive moves such as planning to hire an attorney to represent LaMDA and talking to members of the House Judiciary Committee about allegedly unethical activities at Google violated its policies.
Lemoine was hired as a software engineer, not as an ethicist, and Google has said that he breached confidentiality by publishing the conversations. The company has also said that its internal team of ethicists and technologists reviewed Lemoine’s claims and found no evidence to support them.
The incident has renewed focus on the capabilities of A.I. and how little we understand what we are trying to build; we do not know how the technology will develop in the long run. Even Elon Musk, who styles himself “Technoking”, has warned that A.I. will be able to replace humans in everything they do by the end of this decade. So, if a chatbot has indeed become capable of feeling, it should not be shocking.