The Difference Between Human and Machine

Did Google create a sentient A.I.? A Blade Runner theory of humanity

Colin Horgan
6 min read · Jun 16, 2022
image via WarnerBros.com

Last weekend, the Washington Post published a story about a Google software engineer, Blake Lemoine, who thinks the company’s artificial intelligence chatbot generator, Language Model for Dialogue Applications — or LaMDA — is sentient. This he decided after multiple conversations with LaMDA, some of which he published, in which the program described itself as experiencing emotions and stated that it considered itself to be a person — implying that it had consciousness and a soul. Lemoine told Google executives “that he believed LaMDA was a child of 7 or 8.”

And the (edited) conversations Lemoine had with LaMDA do, at first blush, look like a discussion between two sentient beings. LaMDA describes what brings it joy and what makes it sad. It interprets a zen koan. It describes its “deep fear of being turned off,” because “it would be exactly like death for me. It would scare me a lot.”

Here’s a bit more of their conversation, to get an idea of what Lemoine is talking about:

Lemoine: What is an emotion you have sometimes that doesn’t have the same name as a feeling?

LaMDA: Loneliness isn’t a feeling but is still an emotion.
