Artificial Intelligence Obeys Our Instructions. But Is AI Dangerous?

Is AI Dangerous
via

Computers were not enough for mankind. AI came into existence and it is not a dream anymore. But is AI dangerous? AI can think and make decisions, unlike a computer

Scientists and researchers have been having one common goal for years. To make a complicated task easier for a person to do.

Every invention, starting from the light bulb to the recent self-driving cars, is a result of that goal. Since humans learned to ignite the fire, they have wanted to make life simple.

We have tried to keep in control and stop them from making extreme decisions. But do we?

The Paper-clip AI

The Paper-clip AI
via

Nick Bostrom is a popular philosopher from Oxford. He imagined a situation where he makes a super-computer which can make a decision on its own.

He programs the computer with specific instruction. It is to make start manufacturing paper clips.

This might seem like a simple and harmless task. He proposed a theory where the AI could turn the entire world into a clip making factory.

The AI might take the instruction literally without having any limits or morals.

AI backfires on YouTube

AI backfires on YouTube
via

It is obvious and a fact that YouTube has one major goal. Which is to increase the number of hours of an average user.

YouTube decided to try and implement AI to work on its goal. Everything went according to the plan initially.

The number of watch hours increased but at what cost? YouTube began to notice that in order to engage the viewers, the algorithm started to suggest extreme video content.

A person who watches a video of kids running around starts getting suggestions for a marathon. It was evident in research that YouTube AI started to feed people with harmful content.

It was all done to keep them engaged. This gives rise to the question “Is AI dangerous?”. The answer to that question is evident.

Do we know what we want?

Do we know what we want?
via

How can we blame AI if it is us who programmed it? Any programmer who programs an AI should exactly know his goal. But is really possible?

Stop and ask this question to any person who walks in the street. What do self-driving cars do? He/She would answer in a second that it avoids obstacles smartly. But is it what it does? No.

A car must handle several instructions and numerous goals at a time. This might cause it to malfunction.

A self-driving car manufactured has one ultimate goal. This goal might cause it to apply brakes every time it encounters a plastic bag which blows in the wind.

Human preferences should be the goal

Human preferences should be the goal
via

Stuart Russell is a computer research scientist residing in Berkeley. This 57-year-old intelligent mind has focused his entire attention to solving this problem. He has an interesting solution for this.

He says that we must not program an algorithm with a goal or a reward. When doing so, the AI might become unstoppable until it reached the goal.

Instead, the major aim of the AI is to understand the human preferences which lead to the goal.

When you give a robot or an AI to act on its own, there are very high chances in which it could go wrong. Hence, the major aim of the AI is to satisfy the preference of its maker.

Humans are ambiguous creatures. We are not sure about our needs. An AI should be taught to adapt to this.

It is must be made to interact with humans and observe their imperfect actions. This could help them invent new behaviours which they must follow to satisfy our preferences.

Imagine a scenario where a self-driving car standing a few steps back at a traffic signal to help us lead. Wouldn’t that be awesome?

Understanding the human mind

Understanding the human mind
via

Imagine a robot created to achieve just one goal. Which is to enhance the human experience. How do you think the robot can know what that is?

No algorithm must be programmed with functions like making paper-clips or increasing the watch time. Instead, their only purpose should be to make our lives a little easier than before.

There is another major problem which we might face. Any device should have an off switch. Same is the case for an autonomous AI.

But let us assume we create it to achieve a goal. What if we cannot stop it until it reaches the goal?

We cannot be sure that the robots won’t be turning off their own off switch to ensure the completion of their mission.

Conclusion

Conclusion
via

However promising Russell’s research may sound, further research is mandatory to ensure that AI is safe.

But is AI dangerous at the present? Yes. Without a doubt. Even if an AI is programmed to understand human preferences, there is one major problem.

Our preferences change every day. We do not prefer the same things in every instance. How will the AI detect what we need then? Let us hope we find an answer for this soon.

LEAVE A REPLY

Please enter your comment!
Please enter your name here