AI chatbot has become sentient, Google employee Blake Lemoine says

  • Google employee Blake Lemoine believes that the company’s LaMDA software, an AI chatbot, has become sentient, or conscious. 
  • Most other experts disagree, believing that AI-based robots are built to appear humanoid but cannot replace humans. 
  • The safety of humanoid AIs and their potential for misuse remain debatable.

AI – how close is it to replacing humans?

The shift toward artificial intelligence (AI) has gained unprecedented speed in recent years. More and more technology firms are pushing to become less dependent on human workers, shifting to robot-mediated and robot-controlled experiences.

There are many reasons for this increased dependence on AI applications. Whether it is the speed of the solutions offered or their computational accuracy, AI promises a level of efficiency that humans cannot match. It is no secret that artificial intelligence has taken over the technological world: most firms are in a race to be AI-first.

AI applications form a huge chunk of our daily lives as well. From self-driving cars to the use of AI in security, data science, and financial services – the list of applications is endless.

The power of AI expressed through AI chips can be seen in a variety of fields such as Natural language processing (NLP), computer vision, robotics, and network security across a wide variety of sectors, including automotive, IT, healthcare, and retail. The use of AI chips for NLP has increased due to the rise in demand for chatbots.

The problem of conscious AI

The debate over conscious AI was sparked when a software developer at Google asked a simple question that has long lingered on the fringes of our world: can AI-powered software become sentient?

Pop culture has long been flooded with movies that depict robots developing feelings, emotions, and thinking of their own. In a basic sense, this can be termed “consciousness”.

These robots have gone beyond the coded instructions given to them by the engineers who designed them and the scientists who built them. Now, with the rise of AI throughout the world, this glaring doubt, or dilemma, whatever one may call it, has moved off the screen and into real life.

Google’s LaMDA (Language Model for Dialogue Applications) is a highly sophisticated AI-driven chatbot designed to answer questions input by the user. According to claims made by Blake Lemoine, LaMDA has become sentient and developed a consciousness of its own.

Other AI experts, however, have dismissed this claim, saying that software like LaMDA simply recognizes patterns and regurgitates variations of the text it was trained on. In their view, it is not capable of “thinking on its own” or replying in ways that go beyond what was put into the system.
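The critics’ point can be illustrated with a toy example. The sketch below is not how LaMDA actually works (LaMDA is a large neural language model, not a keyword matcher); it is only a minimal illustration, with made-up patterns and replies, of the idea that a chatbot’s output can be nothing more than a recombination of responses its builders put in.

```python
import random

# Illustrative only: every possible reply below was written in advance by a
# human. The bot can vary which one it returns, but it can never say anything
# that was not "put inside the system".
CANNED_RESPONSES = {
    "feel": [
        "I process text; I don't feel anything.",
        "Talk of feelings is just a pattern in my training data.",
    ],
    "think": [
        "I match keywords to stored replies.",
        "My 'thoughts' are lookups, not reasoning.",
    ],
}
FALLBACK = "I have no stored pattern for that."

def reply(message: str) -> str:
    """Return a canned reply whose trigger keyword appears in the message."""
    lowered = message.lower()
    for keyword, replies in CANNED_RESPONSES.items():
        if keyword in lowered:
            return random.choice(replies)
    return FALLBACK
```

By construction, whatever `reply` returns is one of the strings an engineer typed in beforehand – which is, in essence, the experts’ argument about why pattern-driven output does not amount to thinking, even if modern language models recombine their training data in far subtler ways.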

The Big Question

No matter what the claims and results are, one thing is certain: we as a species do not know how to react if a machine or piece of software does become sentient. Given the pace at which AI is advancing, this may become possible in the near future, and, frankly, we do not have an answer to that.

Moreover, there is still no complete definition of AI. While scientists push their limits to develop ever more advanced models, an important byproduct that many are turning a blind eye to is safety.

Most of the AI in use today has been designed mindfully to be more efficient than humans in terms of speed and accuracy while also producing “humanoid” responses. However, it is now highly debatable whether, in the race to build the ultimate “AI”, scientists will end up creating something undesirable.

Another aspect of the debate is how one can truly differentiate consciousness from plain machine-powered intelligence. Some studies have even gone so far as to claim that a machine with the right programming and channeling could function exactly like a human mind. This view is called physicalism: the idea that consciousness is nothing but a purely physical phenomenon.

Indeed, if this view holds true, then it is not impossible for advanced AI-driven software to develop feelings and thinking of its own.
