“Deepfakes” Make You Believe Something That Is Not Even Real

Imagine someone accusing you of something you never did.

What if you saw a picture of yourself on the Internet that you know is fake? How would you convince the world of what is true and what is not?

Welcome to a world where these are no longer just scary tales. Have you heard the word “deepfake”?

As the name suggests, a deepfake is a photo, video, or other piece of media generated by AI. It can portray you doing something you never did in the first place.

Many of us have watched deepfake videos on the Internet without ever realizing it, because they felt so real.

There is a widely shared video of Barack Obama verbally abusing Trump on stage. There are videos of popular personalities acting strangely on social media. Most of these circulating videos are fakes.

Curious about how a Deepfake is made?

A computer science student might think of machine learning as an advanced area of study, but ML has since evolved into something far more fascinating.

Deep learning has proved to have several promising applications. It is loosely inspired by the human brain: layers of artificial neurons adjust themselves during training, somewhat like the way we learn.

Just as humans pick up new languages, a neural network can be trained to learn new patterns — and that is exactly how deepfakes come into existence.

Thousands of high-quality images of a person are fed to a neural network to train it. The learned face is then merged onto a video of someone else. These algorithms have become so effective that the result can be nearly impossible to distinguish from reality.

Similarly, high-quality recordings of a person’s voice can be used to train a neural network, which teaches itself to talk like that person, imitating the voice almost exactly.
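The face-swapping idea described above can be sketched with a toy model: one shared encoder learns a compact representation of faces, and a separate decoder per person learns to reconstruct that person from it. Swapping means encoding person A and decoding with person B’s decoder. Everything below — the sizes, the random stand-in “photos,” the frozen linear encoder — is a simplified illustration, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
PIXELS, LATENT = 64, 8  # toy "image" of 64 values, 8-dimensional latent code

def init_layer(n_in, n_out):
    return rng.normal(0.0, 0.1, (n_in, n_out))

W_enc = init_layer(PIXELS, LATENT)    # shared encoder (kept fixed for simplicity)
W_dec_a = init_layer(LATENT, PIXELS)  # decoder that learns person A's face
W_dec_b = init_layer(LATENT, PIXELS)  # decoder that learns person B's face

def encode(x):
    return np.tanh(x @ W_enc)

def train_step(x, W_dec, lr=0.05):
    """One gradient step of reconstruction loss for one decoder."""
    z = encode(x)
    err = z @ W_dec - x         # reconstruction error
    grad = z.T @ err / len(x)   # gradient w.r.t. the decoder weights
    return W_dec - lr * grad, float((err ** 2).mean())

faces_a = rng.normal(0, 1, (32, PIXELS))  # stand-in for person A's photos
faces_b = rng.normal(0, 1, (32, PIXELS))  # stand-in for person B's photos

losses_a, losses_b = [], []
for _ in range(300):
    W_dec_a, la = train_step(faces_a, W_dec_a)
    W_dec_b, lb = train_step(faces_b, W_dec_b)
    losses_a.append(la)
    losses_b.append(lb)

# The "swap": encode a frame of person A, decode it with person B's decoder.
fake = encode(faces_a[:1]) @ W_dec_b
print(fake.shape)  # one swapped "frame" of 64 values
```

Real systems use deep convolutional networks and enormous photo collections, but the shared-encoder/per-person-decoder structure is the core trick.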

It is unfair to blame the technology for human misuse. A neural network does not train itself overnight; people chasing attention are the ones behind these fakes.

Some videos need further editing to look convincing — slowing them down or speeding them up, for example. These simpler manipulations have a name of their own: shallow fakes.
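Unlike deepfakes, this kind of retiming needs no AI at all — it is just arithmetic on frame timestamps. A toy sketch (the function name and frame rate here are invented for illustration):

```python
def retime(timestamps, speed):
    """Rescale frame timestamps: speed < 1 slows footage down, speed > 1 speeds it up."""
    return [t / speed for t in timestamps]

# Four frames of 25 fps footage, played back at half speed:
frames = [0.0, 0.04, 0.08, 0.12]
slowed = retime(frames, 0.5)
print(slowed)  # the same frames now span twice the original duration
```

Stretching timestamps this way (often with a few frames interpolated or dropped) is enough to make speech sound slurred or movements look erratic — no neural network required.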

Where are they used?

Several companies around the world work on detecting deepfakes; Deeptrace is one of them.

According to Deeptrace, 96% of the deepfakes circulating on the Internet are pornographic. High-quality images of actresses are easy to find online, which makes them prime targets.

The result is defamation of the actresses involved — and that is just one example among many. Think deepfakes can’t rob a bank? In one reported case, a group of criminals used a deepfake of an executive’s voice to phone the company and have money transferred to them.

In another case, a compromising video of a politician surfaced showing an act that, while harmless in much of the world, is illegal in his country. His supporters insisted the video was a deepfake, yet even Deeptrace could not confirm it either way. That should give you an idea of how difficult it is to identify a fake.

Should you be worried?

The answer to this is pretty obvious. Of course, you should be. Imagine what this technology is capable of doing.

Imagine a fabricated video of a nation’s president committing a terrible crime being released.

What would it do to his reputation? Even if the video is later proven fake, the damage is already done. The people most affected are public figures.

Actors and actresses with countless high-quality images circulating on the Internet are at risk. So are politicians, whose voices are recorded everywhere.

It is not dangerous only for famous people. What about ordinary women bullied on social media over a fabricated post? Cases like these have driven several people to suicide over the years.

Given the rate at which social media is growing, it is almost impossible to remove a video once it has gone viral.

Can you detect and stop?

While it may be possible to detect deepfakes, it is almost impossible to stop them. Governments around the world have placed restrictions on producing them.

One well-known example is California, which bans deceptive deepfakes of political candidates within 60 days of an election, with exceptions for obvious satire and parody.

Detecting them once they are published on social media is extremely difficult. They come into existence through AI — and that AI has a single objective: to look real.

Positive Applications

As the saying goes, there is some good in every bad, and deepfakes are no exception. Even this harmful technology has applications that are genuinely useful and valuable.

What about people who have lost their voice in an accident? The same technology can synthesize artificial voices that sound exactly like their own.

It has even been used to synthesize the final addresses of presidents who were killed before they could deliver them.

To conclude

Technology never ceases to amaze us. It is sobering to see how a technology with such potential can become destructive once it ends up in the wrong hands.

Several public figures around the globe have already lost their good name to a convincing deepfake, and there is no regaining it. But it is not the technology that deserves the blame.

The blame lies with those who take such powerful technology for granted and misuse it.
