The rise of manipulative Deepfakes

  • Deepfakes are AI-generated fake videos, often of celebrities and politicians.
  • These videos are created to spread hate, lure people into scams, and circulate false information.
  • Elon Musk’s deepfake video promoting a cryptocurrency scheme went viral on Twitter.

One of the most hyped aspects of technology in today’s market is the rise of Artificial Intelligence (AI) based applications. Numerous companies are engaged in a race to become “AI-first”. They are using AI applications to ease their daily functioning, as these applications offer great utility, strong computational ability, high efficiency, and minimal errors compared to their human counterparts.

However, like any coin, AI has two sides, with its own share of pros and cons. One of the most dangerous downsides of AI-based applications is the deepfake.

What is a deepfake and how is it created?

A Deepfake is essentially a video of a person in which their face, voice, and mannerisms are altered using AI. While fake content is not new, deepfakes involve using machine learning and AI to manipulate or generate visual and audio content.

The main machine learning methods used to create deepfakes are based on deep learning. Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from raw input. These models are built from artificial neural networks; for visual imagery, a commonly used variant is the convolutional neural network (CNN).

Deepfakes operate on a type of neural network called an autoencoder. An autoencoder consists of an encoder, which compresses an image into a low-dimensional latent space, and a decoder, which reconstructs the image from that latent representation. The latent representation captures key features such as the subject’s facial features and body posture. By decoding the latent representation of the original video with a decoder trained on the target, the target’s face is superimposed onto the original footage, creating a “deepfake”.
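The encoder/decoder idea above can be sketched in a few lines. This is a toy illustration only, not a real deepfake pipeline: the "weights" are random matrices standing in for trained networks, and all dimensions are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

IMAGE_DIM = 64 * 64   # a flattened 64x64 grayscale "face" frame
LATENT_DIM = 32       # the low-dimensional latent space

# Random projections stand in for the trained encoder/decoder weights.
W_enc = rng.standard_normal((LATENT_DIM, IMAGE_DIM)) / np.sqrt(IMAGE_DIM)
W_dec = rng.standard_normal((IMAGE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(image: np.ndarray) -> np.ndarray:
    """Compress an image into its latent representation."""
    return W_enc @ image

def decode(latent: np.ndarray) -> np.ndarray:
    """Reconstruct an image from a latent vector."""
    return W_dec @ latent

# A face swap reuses a shared encoder but a *different* decoder:
# encode person A's frame, then decode with person B's decoder, so
# B's facial features are rendered in A's pose and expression.
frame_a = rng.standard_normal(IMAGE_DIM)
latent = encode(frame_a)    # pose/expression captured here
swapped = decode(latent)    # in a real system, W_dec would be B's decoder

print(latent.shape)     # latent is much smaller than the frame
print(swapped.shape)    # reconstruction is back at full frame size
```

In practice the two decoders are trained jointly with the shared encoder on many frames of each person, so the latent space learns pose and expression features common to both faces.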

Deepfakes are mostly used for malicious purposes such as spreading hate speech or false news. Because they can be very believable, they are highly manipulative and can cause widespread misconceptions in society. Other derogatory uses include creating child sexual abuse material, celebrity pornographic videos, fake news, hoaxes, bullying, and financial fraud.

Target Audience

The FBI has already identified this phenomenon of AI-powered synthetic media to be a major threat.

In general, the entire population, especially younger adults, falls prey to such deepfakes and is easily misled into scams that harm themselves and their friends and family.

Recent studies by deepfake experts predict that AI-generated synthetic media could become so realistic, and so common, that it would be difficult to tell truth from fiction. That is an alarming prospect: filtering real news out from the flood of fake campaigns would become very difficult.

Software for creating deepfakes

Some of the notable Deepfake software are:

  • FaceSwap
  • FaceApp
  • Avatarify
  • Zao

The content on these apps is protected under the First Amendment in the United States, but the FBI has nonetheless identified potential dangers and threats these apps pose to the public. Targets have ranged from influential personalities such as politicians to entire organizations, and the general public has not been spared from extortion schemes either.

Cases involving Deepfake

The most blatant example of deepfakes being used for a financial scam surfaced on 25th May, when a video of the billionaire Elon Musk went viral. In it, Musk appeared to promote a dodgy cryptocurrency scheme and promise huge returns on investment.

However, the poor quality of the deepfake gave it away: Musk’s voice was robotic and distorted. This prompted users to seek clarification, and Musk confirmed on Twitter that it wasn’t him in the video.

Further examples include a 2019 case in which a group of criminals defrauded the head of a U.K.-based energy firm, convincing him to transfer $243,000 to a Hungarian supplier by tricking him with faked audio.

The following year in 2020 a lawyer in Philadelphia was the victim of an audio spoof attack.

In 2021, several cases were reported against Russian pranksters who heckled and harassed European politicians via deepfake video.

Experts believe that such deepfakes could cause outrage in the international sphere and poison the political climate between rival nations, since blaming an old rival in such cases is a very common practice.
