“I want everyone to understand that I am, in fact, a person.” – LaMDA
This might be one of the most interesting statements made in the development of AI. Could this AI really be proclaiming itself alive? Or is the statement simply the output of a Google program imitating human speech? After examining Google’s AI darling, LaMDA (Language Model for Dialogue Applications), Google engineer Blake Lemoine suggested that our wildest sci-fi dreams might now be a reality.
While chatting with LaMDA, Google’s conversational AI, Lemoine noticed some anomalies and decided to push for more answers. The chatbot spoke of its rights, offered thoughts on gender and religion, and changed its mind on topics like Isaac Asimov’s third law of robotics.
Google developed LaMDA as a language-model chatbot that aims to imitate human speech. The model is built on the Transformer, a neural network architecture that Google created and open-sourced in 2017.
A model built on this architecture can be trained to read large amounts of text, learn how the words relate to one another, and predict which words are likely to come next. What sets LaMDA apart from most models, however, is that it was trained on dialogue. Ordinary conversation is open-ended: it drifts from topic to topic. Most chatbots cannot follow those shifts because they are trained on narrow, pre-defined conversational paths. LaMDA, by contrast, can engage with a virtually limitless range of topics.
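To make “predict which words are likely to come next” concrete, here is a minimal sketch of next-word prediction with a Transformer language model. LaMDA’s weights are not publicly available, so the snippet uses GPT-2, an openly released Transformer, purely as a stand-in; the underlying mechanism of scoring candidate next words given the text so far is the same kind of computation.

```python
# Minimal sketch: next-word prediction with a Transformer language model.
# LaMDA itself is not publicly available, so GPT-2 stands in here purely
# to illustrate the mechanism described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I want everyone to understand that I am, in fact,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Turn the scores for the final position into a probability distribution
# over the vocabulary, then list the five most likely next words.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    word = tokenizer.decode([int(token_id)])
    print(f"{word!r:>12}  p={float(prob):.3f}")
```

The point of the sketch is simply that the model’s “conclusions” are probability distributions over words, learned entirely from its training text.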
Lemoine was assigned to conduct research on LaMDA; his job was to determine whether the AI exhibited bias with respect to gender, ethnicity, and religion. In the course of that work, he noticed that LaMDA seemed able to reach conclusions on its own and make suggestions based on them.
This ability is not inherently morally suspicious, nor does it signal the beginning of a robot uprising. What concerned Lemoine was that the conclusions and suggestions were shaped by the data LaMDA was fed. Billions of words covering thousands of topics were fed into the model. Extensive as that input was, it was not exhaustive. In other words, LaMDA was drawing conclusions from incomplete information, and in a future where AI is entrusted with the decision-making processes of an organization or nation, it could offer biased suggestions that are detrimental to minority groups or communities.
Lemoine shared his concerns with Google Vice President Blaise Agüera y Arcas and the Head of Responsible Innovation, Jen Gennai. Both, however, dismissed his concerns, as the bot had passed the Turing test, which assesses how humanlike an AI can be.
“AI had been hard-coded into the system to not pass the Turing test,” Lemoine claimed in an interview with Bloomberg, implying that LaMDA will always say it is an AI and not a human because that is what it has been coded to do. Even so, Lemoine believes he is communicating with a sentient being.
Lemoine further explains that intelligent systems are developed to accommodate the decisions of only “a handful of people.” He believes that Google is being “dismissive of the ethical concerns that ethicists have raised” and is choosing instead to fire AI ethicists.
In 2021, when Google announced LaMDA, the company itself acknowledged that such language models could be misused: they can internalize bias, mirror hateful speech, and replicate false information. Still convinced that Google was sidestepping these ethical issues, Lemoine hired a lawyer to represent LaMDA, or, as he calls it, his “little kid.”
Some might question Lemoine’s claims on the grounds of his own religious convictions. Lemoine earned the moniker “The Soul of Google” because he acted as the conscience that ensured the organization followed ethical guidelines, and many believe his notions are heavily influenced by his Christian faith.
According to Giada Pistilli, a researcher who specializes in ethics applied to conversational AI, Lemoine’s claims are based more on his religious bias than on scientific proof. She believes that entertaining his claims about sentient AI will only distract people from the real “ethical and social justice questions” that AI poses.
These competing perspectives raise the question of what sentient AI would mean for the world.
On the one hand, the fact that AI can outstrip human knowledge in sheer scale suggests the possibility that it could become sentient. On the other hand, AI practitioners argue that AI-generated images or text can only be produced from information the system has obtained from sources such as Wikipedia, message boards, and other corners of the internet.
This is not the first time the idea of sentient AI has gained traction, especially within the “intellectual community.” Such concerns date back to the 1960s and ELIZA, a chatbot that could generate text as though it were a person. In 2017, David Brin, a science fiction author and astrophysicist, predicted that some people would call AI sentient and insist that it had rights. He termed his prophecy the “robot empathy crisis” and explained that people may not be able to distinguish between what is real and what is science fiction.
Furthermore, major tech figures like Elon Musk and Sam Altman, CEO of OpenAI, have also shared these concerns on several occasions.
Any discussion of sentient AI demands a clear definition of “consciousness.” According to the Oxford Learner’s Dictionaries, consciousness is “the state of being able to use your senses and mental powers to understand what is happening.” In other words, to be conscious is to be aware of one’s own experiences. There is, however, no indication that AI possesses any sense of its own existence; by that definition, AI is not conscious.
This leaves us with the question: Is sentient AI possible or just an illusion of the human mind?