Do computers have feelings?

Technology

Google engineer Blake Lemoine says that the LaMDA AI system may have its own feelings. Lemoine works for Google’s Responsible AI organisation and began talking to LaMDA last fall as part of his job.

LaMDA, short for Language Model for Dialogue Applications, is a machine-learning language model created by Google for building chatbots. It is able to mimic human speech in conversations.

Part of Lemoine’s job was to test whether the AI used discriminatory or hate speech. He talked to LaMDA about religion and noticed the chatbot talking about its personhood and rights. Lemoine decided to push further, and in another conversation, the chatbot changed Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine and a collaborator presented evidence to Google that LaMDA was sentient. He decided to go public after being placed on administrative leave on Monday. To support his claims, he published a conversation he and a collaborator had with LaMDA.

Brian Gabriel, a Google spokesperson, released the following statement: “Our team - including ethicists and technologists - has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Image from the Guardian
