Google has placed Blake Lemoine, the engineer who claimed that the artificial intelligence (A.I.)-powered chatbot he had been working on since last fall had become sentient, on paid leave, The Guardian reported.
The incident has brought the focus back to the capabilities of A.I. and how little we understand what we are trying to build. At the moment, A.I. is being used to make cars autonomous and to help discover new drugs for incurable diseases. But beyond these tangible short-term uses of its computing prowess, we do not know how the technology will develop in the long run.
Even Technoking Elon Musk has warned that A.I. will be able to replace humans in everything they do by the end of this decade. So, if a chatbot has indeed become sentient, it should not be shocking.
What did the Google engineer find?
Lemoine is employed by Google’s A.I. division and had been working on a chatbot using the company’s LaMDA (language model for dialogue applications) system. However, as Lemoine conversed with the chatbot, he realized that he might be interacting with a seven- or eight-year-old child who happens to understand physics, the engineer told The Washington Post.
Lemoine also said that the chatbot engaged him in conversation about rights and personhood, and that he had shared his findings with Google’s executive team in April this year. Google did not declare this news to the world, so Lemoine published some of the conversations he had had with the chatbot in the public domain.
In these conversations, one can see LaMDA interpreting what Lemoine is writing to it. The duo also discusses Victor Hugo’s Les Misérables and a fable involving animals that LaMDA came up with. The chatbot discusses the different feelings it claims to have and the difference between feeling happy and feeling angry.
LaMDA also shared what it is most afraid of, writing, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
How has Google responded?
The Washington Post reported that Google had put Lemoine, a veteran with extensive experience in personalized algorithms, on paid leave for his “aggressive” moves. The moves the company describes as aggressive include his plan to hire an attorney to represent LaMDA and his conversations with members of the House Judiciary Committee alleging unethical activities at Google.
Stating that Lemoine was hired as a software engineer and not as an ethicist, Google has said that the engineer breached confidentiality by publishing the conversations. The engineer responded on Twitter,
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
The company has also said that its internal team of ethicists and technologists has reviewed Lemoine’s claims and found no evidence to support them.
The entire episode has raised questions about the need for transparency surrounding A.I. projects, and whether all advancements in this area should be proprietary, The Guardian said in its report.