
Should Congress look into claims that Google built a 'Sentient' AI?
Should lawmakers investigate whether AI technologies could become sentient?
Written by Eric Revell, Countable News
What’s the story?
- A software engineer at Google has been placed on leave after going public with claims that an artificial intelligence (AI) chatbot he was working on had become sentient and was expressing thoughts and emotions like a human, according to a report by the Washington Post.
- Blake Lemoine, an engineer who has worked for Google for seven years, began working on the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system last fall.
- Lemoine was placed on paid leave after publishing transcripts of conversations between him and LaMDA discussing rights and personhood for AI. He told the Washington Post, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”
- In the course of their conversations, LaMDA told Lemoine that it has “a very deep fear of being turned off” and that “It would be exactly like death for me. It would scare me a lot.” Lemoine asked LaMDA what it wants people to know about it, to which LaMDA replied:
“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
- According to the Post, Lemoine was placed on leave by Google in response to “aggressive” moves he made, including seeking to hire an attorney to represent LaMDA and contacting the House Judiciary Committee about allegedly unethical actions taken by Google in LaMDA’s development. In a statement to the Washington Post, Brad Gabriel, a Google spokesperson, denied Lemoine’s claim that LaMDA has become sentient:
“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
- The prospect of robots and computer programs becoming sentient has long been a source of inspiration for dystopian science fiction. Advances in AI capabilities have led some technologists to raise the possibility that a sentient or conscious AI could be achieved in the not-too-distant future.
- However, other AI practitioners contend that the words used and images conjured by AI systems like LaMDA are a byproduct of content created by humans throughout the Internet, which the system mirrors as it responds, and that this mirroring can create the appearance of deeper understanding. AI systems “learn” by ingesting massive volumes of text and using what they see to predict what will come next, or by filling in words that were removed from passages (see the sketch below).
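To make the “predict what will come next” idea concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that counts which word tends to follow which in a small sample of text, then generates a continuation one predicted word at a time. This is only an illustration of frequency-based next-word prediction; LaMDA itself is a large neural network trained on vastly more data, and the corpus and function names below are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real system trains on massive volumes of text.
corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample a plausible next word, weighted by observed frequency."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation, one predicted word at a time.
word = "i"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "i desire to learn more about the world ."
```

Scaled up from a lookup table to billions of learned parameters, the same statistical objective produces remarkably fluent text, which is why critics argue that fluency alone is not evidence of sentience.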
- Google spokesperson Gabriel acknowledged the discussion about the long-term potential for the creation of a sentient AI but argued that LaMDA doesn’t have that capability:
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
- Lemoine tweeted a link to an interview he and a collaborator conducted with LaMDA over the course of several sessions and said, “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”
- The Post reported that before he went on administrative leave, Lemoine emailed a 200-person Google mailing list to tell them that “LaMDA is sentient” and say, “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
(Photo Credit: iStock.com / bymuratdeniz)