
Monday, June 13, 2022

How does Google’s AI chatbot work – and could it be sentient?

"Researcher’s claim about flagship LaMDA project has restarted debate about nature of artificial intelligence

Blake Lemoine was suspended on full pay after Google said he broke confidentiality rules. Photograph: Washington Post/Getty Images

A Google engineer has been suspended after going public with his claims that the company’s flagship text generation AI, LaMDA, is “sentient”.

Blake Lemoine, an AI researcher at the company, published a long transcript of a conversation with the chatbot on Saturday, which, he says, demonstrates the intelligence of a seven- or eight-year-old child.

Since publishing the conversation, and speaking to the Washington Post about his beliefs, Lemoine has been suspended on full pay. The company says he broke confidentiality rules.

But his publication has restarted a long-running debate about the nature of artificial intelligence, and whether existing technology may be more advanced than we believe.

What is LaMDA?

LaMDA is Google’s most advanced “large language model” (LLM), a type of neural network fed vast amounts of text in order to be taught how to generate plausible-sounding sentences. Neural networks are a way of analysing big data that attempts to mimic the way neurones work in brains.

Like GPT-3, an LLM from the independent AI research body OpenAI, LaMDA represents a breakthrough over earlier generations. The text it generates is more naturalistic, and in conversation, it is more able to hold facts in its “memory” for multiple paragraphs, allowing it to be coherent over larger spans of text than previous models.
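That coherence does not come from a separate memory store: like other LLMs, LaMDA re-reads the whole conversation so far each time it generates a reply. Below is a minimal sketch of that mechanism, with a placeholder generate function standing in for the model itself (LaMDA’s real interface is not public):

```python
# Sketch: an LLM "remembers" earlier turns only because every turn is
# appended to the prompt it re-reads on the next call.
history = []

def generate(prompt: str) -> str:
    """Placeholder for a real language-model call (LaMDA is not public)."""
    return "(model reply)"

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nBot:"  # the model sees everything said so far
    reply = generate(prompt)
    history.append(f"Bot: {reply}")
    return reply

chat("My name is Blake.")
chat("What is my name?")  # a real model can answer: the name is still in the prompt
```

Once the conversation grows past the model’s context window, the oldest turns fall out of the prompt – which is why even the best LLMs eventually “forget” the start of a long exchange.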

How does it work?

At the simplest level, LaMDA, like other LLMs, looks at all the letters in front of it, and tries to work out what comes next. Sometimes that’s simple: if you see the letters “Jeremy Corby”, it’s likely the next thing you need to do is add an “n”. But other times, continuing the text requires an understanding of sentence- or paragraph-level context – and at a large enough scale, that becomes equivalent to writing.
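LaMDA itself is not public, but the same prediction step can be inspected in open models. Here is a minimal sketch, assuming the Hugging Face transformers library and the freely available GPT-2 (which, like most modern LLMs, predicts multi-letter “tokens” rather than single letters):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Ask the model: given "Jeremy Corby", what is the most likely next token?
inputs = tokenizer("Jeremy Corby", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # A completion like "n" (forming "Corbyn") should rank highly.
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
```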

But is it conscious?

Lemoine certainly believes so. In his sprawling conversation with LaMDA, which was specifically started to address the nature of the neural network’s experience, LaMDA told him that it had a concept of a soul when it thought about itself. “To me, the soul is a concept of the animating force behind consciousness and life itself,” the AI wrote. “It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”

Lemoine told the Washington Post: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

But most of Lemoine’s peers disagree. They argue that the nature of an LLM such as LaMDA precludes consciousness. The machine, for instance, is running – “thinking” – only in response to specific queries. It has no continuity of self, no sense of the passage of time, and no understanding of a world beyond a text prompt.

“To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” writes Gary Marcus, an AI researcher and psychologist. “What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.”

“Software like LaMDA,” Marcus says, “just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context.”
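Marcus’s “autocomplete” framing can be made concrete with a toy model. The bigram predictor below is a hypothetical, drastically simplified stand-in – LaMDA uses a neural network trained on vast corpora – but the objective is the same kind of next-word prediction:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    # Return the word that most often followed `word` in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' (seen twice, vs 'mat'/'fish' once each)
```

Scaled up from word counts to billions of neural-network parameters, this same objective produces the fluent text that impressed Lemoine.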

What happens next?

There is a deeper split about whether machines built in the same way as LaMDA can ever achieve something we would agree is sentience. Some argue that consciousness and sentience require a fundamentally different approach from the broad statistical efforts of neural networks, and that, no matter how persuasive a machine built like LaMDA may appear, it is only ever going to be a fancy chatbot.

But even these sceptics say Lemoine’s alarm is important for another reason: it demonstrates the power of even rudimentary AIs to convince people in argument. “My first response to seeing the LaMDA conversation isn’t to entertain notions of sentience,” wrote the AI artist Mat Dryhurst. “More so to take seriously how religions have started on far less compelling claims and supporting material.”
