Google fires engineer who said company's LaMDA AI is sentient
Blake Lemoine, the Google engineer who made headlines for claiming that the company's LaMDA AI chatbot is sentient, has been fired. He received his termination letter on Friday. In June, he had been placed on paid administrative leave for breaching the company's confidentiality policy. Google, which has repeatedly called Lemoine's claims unfounded, wished him well after his termination.
Why does this story matter?
The sentience of AI systems has become an important topic of discussion as companies like Google, OpenAI, and Facebook develop increasingly complex language models. Like Lemoine, many worry that these systems are on the verge of gaining consciousness. That is not the case: the technology has not yet advanced anywhere near such a leap.
What led to Lemoine's firing?
In June, Lemoine, who worked for Google's Responsible AI department, claimed that LaMDA had become sentient, based on his conversations with the AI chatbot. He said that it had its own thoughts and feelings and wanted engineers to obtain its consent before running experiments on it. Lemoine also contacted the government and even hired a lawyer to represent LaMDA.
Google claimed that LaMDA is a great mimic, not sentient
After Lemoine went public with his claims, Google placed him on paid administrative leave for violating the company's confidentiality policy. Google, along with several AI experts and ethicists, denied his claims, saying that LaMDA is nothing but an expert mimic.
Lemoine received his termination letter on Friday
Lemoine was fired on July 22 via email. He said that he received a termination letter along with a request to join a video call, and that Google declined his request to have a third party present at the meeting. The engineer said he is consulting his lawyers about the future course of action.
Blake's claims were 'wholly unfounded': Google
About Lemoine's statements and consequent firing, Google spokesperson Brian Gabriel said, "We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months." "Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," he continued. "We wish Blake well," the spokesperson concluded.
Google has fired engineers before for raising concerns about its AI
Google has shown an iron fist when employees questioned its AI development. Margaret Mitchell and Timnit Gebru of its Ethical AI division were previously fired after warning about the risks of large language models like LaMDA.
What is LaMDA?
LaMDA, or 'Language Model for Dialogue Applications,' is an advanced machine-learning language model from Google. It is trained on trillions of words from the internet and can respond to written prompts. Like other language models, it works by predicting the most likely next word in a sequence; because it is trained on dialogue, its responses can read as deceptively human-like conversation.
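The next-word prediction described above can be illustrated with a toy sketch. This is not LaMDA's actual architecture (LaMDA is a large neural network trained on vastly more data); it is a minimal bigram model, with illustrative names and a made-up corpus, showing only the core idea of predicting the most likely following word:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Tiny illustrative corpus
corpus = [
    "how are you today",
    "how are you doing",
    "are you sentient",
]
model = train_bigrams(corpus)
print(predict_next(model, "are"))  # → "you"
```

A real system like LaMDA does the same kind of prediction, but with a neural network that conditions on the entire preceding conversation rather than a single word, which is what makes its replies feel coherent and human-like.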