UCSC Researcher Shares Opinion on Google's ‘Sentient’ Bot
Artificial Intelligence is a recurring hot topic in the tech community, and it has gotten even hotter recently. On Monday, June 13th, Blake Lemoine, a software engineer at Google, announced that he had been placed on paid leave for violating Google’s confidentiality policy after he publicly claimed that Google’s AI chatbot, LaMDA, which stands for “Language Model for Dialogue Applications,” is “sentient”.
According to The New York Times, prior to his suspension, Lemoine had published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA chatbot system. Lemoine turned the documents over to a U.S. senator’s office, claiming they provided evidence that Google’s chatbot is sentient and that the company’s technology engaged in religious discrimination.
The conversation between Lemoine and the LaMDA chatbot included questions about feelings, the concept of self, learning, fears, and more. You can read the interview here.
Brian Gabriel, a Google spokesman, has denied the claim that the system is sentient.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims.”
Lemoine’s claims have reignited the conversation around AI: its potential for consciousness, the consequences of a sentient AI, and the long-term impact on humanity and technology. AI researchers from across the globe have chimed in, including Max Kreminski, a computational media researcher at UCSC. Here’s what Kreminski had to say:
“[The architecture of LaMDA] simply doesn’t support some key capabilities of human-like consciousness…If LaMDA is like other large language models…it wouldn’t learn from its interactions with human users because the neural network weights of the deployed model are frozen.”
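Kreminski’s point about frozen weights refers to how deployed language models are typically served: the parameters learned during training are not updated by conversations with users, so nothing the model “hears” changes what it knows. The toy PyTorch sketch below is purely illustrative (LaMDA’s actual architecture and serving code are not public, and the model and names here are made up); it shows that when a model runs in inference-only mode with frozen parameters, its weights are identical before and after any number of user interactions.

```python
import torch
import torch.nn as nn

# Hypothetical toy language model -- a stand-in for a deployed LLM, not LaMDA's real code.
class ToyLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.embed(tokens))

model = ToyLanguageModel()
model.eval()  # inference mode (no training-time behavior like dropout)
for p in model.parameters():
    p.requires_grad_(False)  # "freeze" the weights

snapshot = [p.clone() for p in model.parameters()]

# Simulate many user "conversations": each is just a forward pass.
with torch.no_grad():  # no gradients are computed, so no learning can occur
    for _ in range(100):
        user_tokens = torch.randint(0, 1000, (1, 16))
        _ = model(user_tokens)

# The weights are bit-for-bit identical after every interaction.
unchanged = all(torch.equal(a, b) for a, b in zip(snapshot, model.parameters()))
print(unchanged)  # True -- the model did not learn anything from its users
```

In other words, under this kind of deployment the model can generate fluent replies, but each conversation leaves no trace on the model itself.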
Do you think LaMDA is sentient? Why or why not?