Press "Enter" to skip to content

Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch

Words can have a powerful effect on people, even when they’re generated by an unthinking machine.

It is easy for people to mistake fluent speech for fluent thought.

When you read a sentence like this one, your past experience leads you to believe that it’s written by a thinking, feeling human. And, in this instance, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by AI systems that have been trained on massive amounts of human text.

People are so accustomed to presuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to comprehend. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an artificial intelligence model can express itself fluently, that means it also thinks and feels just like humans do.

As a result, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking, feeling, and experiencing.

The question of what it would mean for an AI model to be sentient is actually quite complicated (see, for instance, our colleague’s take), and our goal in this article is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious, or intelligent.

Using AI to generate humanlike language

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language.
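To make the idea concrete: LaMDA itself is not publicly available, but the same basic recipe can be tried with openly released models. The short sketch below is an illustration under that assumption, using the open GPT-2 model through the Hugging Face transformers library as a stand-in; the model names and prompt are ours, not anything from LaMDA.

    # Illustrative sketch only: LaMDA is not public, so the openly available
    # GPT-2 model (via the Hugging Face "transformers" library) stands in here.
    # The principle is the same: a model trained on large amounts of human
    # text continues a prompt with fluent-sounding language.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the sampled output reproducible
    generator = pipeline("text-generation", model="gpt2")

    prompt = "When I think about my own feelings, I"
    outputs = generator(prompt, max_new_tokens=25, num_return_sequences=1)
    print(outputs[0]["generated_text"])  # fluent text, with no thinker behind it

The continuation such a model produces often reads smoothly, which is exactly the point: fluency comes cheaply to these systems, while nothing in the mechanism requires thought or feeling.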
