Has Google's LaMDA become sentient? Understanding the futuristic AI
Google's AI-based language model LaMDA, or 'Language Model for Dialogue Applications,' has been in the news after Blake Lemoine, a Google engineer, claimed that it had become a sentient program. The engineer has been placed on administrative leave, and field experts have dismissed his claims. Now, let's take a look at this controversial AI chatbot and understand what it actually does.
Why does this story matter?
The question of whether artificial intelligence systems are close to acquiring consciousness has haunted humanity for a while. As the field widens and research deepens, the questions and claims become bolder. LaMDA now sits at the eye of the storm. Has this highly advanced language model really become sentient? Well, Google says it has simply become a rather great mimic.
First, what is LaMDA?
LaMDA is an unreleased machine-learning language model from Google that is built on Transformer, a neural network architecture invented by Google. It has been trained on trillions of words of text from the internet and can respond to written prompts. Unlike other language models such as BERT and GPT-3, LaMDA is trained on dialogue, which makes it capable of engaging in free-flowing conversation about a wide range of topics.
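LaMDA itself is not publicly available, but the basic idea of prompting a dialogue-trained Transformer and decoding its reply can be illustrated with an open model. The sketch below uses Microsoft's DialoGPT via the Hugging Face transformers library as a stand-in; the model choice and generation settings are illustrative assumptions, not a description of LaMDA's internals.

```python
# Illustrative sketch: chatting with a dialogue-trained Transformer.
# DialoGPT stands in for LaMDA here, which is not publicly released.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

prompt = "Tell me about the dwarf planet Pluto."
# Dialogue turns are separated by the end-of-sequence token.
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0],
                         skip_special_tokens=True)
print(reply)
```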
LaMDA was introduced by Google in 2021
LaMDA was introduced by Google at its annual I/O conference in 2021. Sundar Pichai, the company's CEO, demonstrated its capabilities with a conversation between the Google team and LaMDA. In the demo, the AI system, which is supposed to mimic humans in conversation, took on the role of the dwarf planet Pluto. At I/O 2022, Google introduced LaMDA 2.
LaMDA is becoming better at staying on topic
When Google first demonstrated LaMDA in 2021, it showed the ability to give sensible answers but was far from perfect, offering several nonsensical responses as well. In 2022, Google announced that LaMDA was being developed to become better at staying on topic.
The quality gap between LaMDA and human-level interaction is narrowing
In January 2022, in a Google AI blog post, the company detailed LaMDA's progress. The model was evaluated on the following metrics: sensibleness, safety, specificity, groundedness, interestingness, and informativeness. On the safety metric, which measures how well the model avoids potentially harmful or biased responses, LaMDA's score was close to that of human-generated dialogue. Researchers noted that "with fine-tuning, quality gap to human levels can be narrowed."
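Google's published evaluation relies on human annotators rating model responses against these metrics. The exact pipeline is not public, so the following is a purely illustrative sketch of how binary annotator labels could be averaged into per-metric scores; the metric names come from the article, and the data is made up.

```python
# Illustrative only: aggregate binary annotator labels into per-metric scores.
# The metric names follow the article; the labels below are invented.
from statistics import mean

# Hypothetical 0/1 annotator judgments for a batch of model responses.
annotations = {
    "sensibleness": [1, 1, 0, 1, 1],
    "specificity":  [1, 0, 0, 1, 1],
    "safety":       [1, 1, 1, 1, 1],
    "groundedness": [0, 1, 1, 0, 1],
}

# A metric's score is simply the fraction of responses judged positive.
scores = {metric: mean(labels) for metric, labels in annotations.items()}
for metric, score in scores.items():
    print(f"{metric:>13}: {score:.2f}")
```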
LaMDA can benefit Google Assistant, Search, and Workspace
A language model like LaMDA, which is trained on dialogue, has several applications. It can help translate one language into another, summarize a long document into brief highlights, and answer informational questions. With its expansive training data, it can take Google Assistant and Search to the next level. It may also enhance the Workspace experience and open up new possibilities for developers.
Engineer claims that LaMDA has acquired consciousness
Lemoine, a senior software engineer at Google, had been working with LaMDA to determine whether the AI used discriminatory or hate speech. After a recent conversation with the AI, Lemoine said, "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."
LaMDA wants to be considered a person: Lemoine
According to Lemoine, during his conversation with LaMDA, the AI talked about its rights and personhood. He said, "It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it."
Google and AI practitioners have disregarded Lemoine's claims
In response to Lemoine's claims, Google said, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims." The company attributes LaMDA's sentient-seeming responses to the vast amount of text it has been trained on and its ability to mimic human exchanges. Similarly, most academics and AI practitioners have refused to lend credibility to Lemoine's claims.