Google’s Advanced AI Highlights A Problem With Human Cognition

  • A former Google engineer recently asserted that LaMDA, the company’s AI system, has a sense of self.
  • Text produced by models like Google’s LaMDA can be hard to distinguish from text written by humans.
  • The human brain is wired to infer the intentions behind words.

More often than not, we humans presume that someone with a good command of a language must be another human, with all the underlying processes of thinking and feeling. Contrary to that intuition, however, today’s artificial intelligence systems can produce remarkably human-like sentences after being trained on vast volumes of human-written text. It is natural to take the further step of assuming that the ability to express oneself clearly means an AI model thinks and feels just as a person does, but that assumption is potentially false.

It is therefore perhaps not surprising that a former Google engineer recently asserted that LaMDA, Google’s AI system, has a sense of self because it can eloquently generate text about its own feelings. The claim was followed by further assertions, in media coverage and from other scholars, that computational models of human language are sentient, that is, able to think, feel, and experience.

Before we explore why it is so easy for people to mistake something that speaks eloquently for something sentient, conscious, or intelligent, let’s first examine how AI is used to produce human-like language.

It can be difficult to tell text produced by models like Google’s LaMDA from text authored by people. This astounding feat is the result of a decades-long effort to build models that produce grammatical, meaningful language. The earliest iterations, known as n-gram models, date back to at least the 1950s; they relied solely on counts of how often a given phrase appeared in order to infer which words were likely to follow in a given context. For example, it’s easy to see that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.”
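For readers who want to see the counting idea in miniature, here is a toy sketch in Python; the tiny “corpus” and the helper function are invented purely for illustration, not taken from any real system.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the text an n-gram model would be trained on.
corpus = (
    "peanut butter and jelly . i love peanut butter and jelly . "
    "she ate peanut butter and jelly . he hates peanut butter and pineapples ."
).split()

# Count how often each word follows each (word1, word2) context: a trigram model.
counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def most_likely_next(w1, w2):
    """Predict the next word purely from observed frequencies."""
    following = counts[(w1, w2)]
    return following.most_common(1)[0][0] if following else None

print(most_likely_next("butter", "and"))  # -> 'jelly' (seen 3 times vs. 1 for 'pineapples')
```

Everything such a model “knows” is just a table of counts; it has no notion of what jelly or pineapples are.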

Modern models, which come close to human-like language, differ significantly from those early attempts in the data and rules they rely on. First, they are trained on essentially the entire internet. Second, they can learn relationships not only between neighboring words but also between words that are far apart. Third, they have so many internal “knobs” that even the engineers who design them struggle to understand why the models produce one sequence of words rather than another. The models’ objective, however, remains the same as it was in the 1950s: predict the likely next word. Today they are so proficient at this task that nearly every sentence they produce sounds natural and grammatically correct.
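To see what “predict the likely next word” looks like in a modern neural model, here is a minimal sketch that uses the openly available GPT-2 through the Hugging Face transformers library; GPT-2 is only a stand-in here for proprietary systems like LaMDA or GPT-3, and the prompt is chosen for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for larger proprietary models; the objective is the same:
# score every word in the vocabulary as a candidate continuation.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "peanut butter and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores for the final position into probabilities for the next word.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12}  {prob.item():.3f}")
```

The model outputs nothing but a probability for every possible next word; the fluent text we read is built one such prediction at a time.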

Humans have an innate tendency to infer the intentions behind words. Every time you have a conversation, your mind automatically builds a mental model of your conversation partner. You then use the words they say to fill in that model with the person’s goals, feelings, and beliefs.

The jump from words to this mental model is triggered automatically whenever you hear a complete sentence. This cognitive shortcut saves you a great deal of time and effort, and it considerably smooths social interactions.

When it comes to AI systems, however, the shortcut misfires, building a mental model out of thin air. To see how, consider what happens with the prompts “peanut butter and pineapples” and “peanut butter and feathers.”

When the large language model GPT-3 was given the unfinished sentence “Peanut butter and pineapples”, it responded: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, we might assume they had combined peanut butter and pineapple, had a positive experience, and were sharing their opinion with the reader.

The model never saw, touched, or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.

Now consider the second prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”

The text in this case is just as fluent as in the pineapple example, but the model’s argument makes far less sense. One begins to suspect that GPT-3 has never, in fact, tried peanut butter and feathers.
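Completions like these are easy to reproduce with an open model. The sketch below again uses GPT-2 via the transformers pipeline as an open stand-in; actual GPT-3 access went through OpenAI’s API, and sampled outputs will differ from the ones quoted above.

```python
from transformers import pipeline

# GPT-2 as an open stand-in for GPT-3; sampling makes each run different.
generator = pipeline("text-generation", model="gpt2")

for prompt in ["Peanut butter and pineapples",
               "Peanut butter and feathers taste great together because"]:
    out = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)
    print(out[0]["generated_text"])
```

The model completes both prompts with equal confidence, because fluency, not truth or experience, is all it was trained for.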

Another problem with the notion that AI systems are sentient is that assuming too tight a link between fluent expression and fluent thinking can foster prejudice against people who speak differently. For example, individuals with foreign accents are frequently perceived as less intelligent and are less likely to be hired for positions for which they are qualified. Similar prejudices exist against speakers of dialects not considered prestigious, such as Southern English in the U.S., against deaf people who communicate through sign languages, and against people who stammer.

These biases have repeatedly been shown to be unjustified and deeply harmful, and they often give rise to racist and sexist preconceptions.

The next question to ask, then, is whether AI will ever become sentient.

This question deserves careful thought because, as researchers have found, you cannot simply take a language model at its word when it describes its own feelings. Words can be deceiving, and it is all too easy to mistake fluent speech for fluent thought.
