Why Are Deepfakes for Remote Job Applications the New Black?

  • Deepfakes are being used to impersonate candidates in remote job interviews in the post-COVID world.
  • People using deepfakes may not realize that coughing, sneezing, and similar actions visible on video do not match the audio.
  • Deepfake profiles pose a threat to society, as the technology can be misused to swap faces, create absurd videos, and fabricate false political statements.

In the post-COVID world, AI has helped create many remote job opportunities. But the same technology, applied to interviews, has also given rise to the use of deepfakes.

What are deepfakes?

Deepfakes, a portmanteau of “deep learning” and “fake,” are synthetic media in which one person’s likeness is substituted for another’s in an image or video. While fabricating media is nothing new, deepfakes rely on advanced machine learning and artificial intelligence techniques to edit or generate visual and audio content that is far more convincing. The primary methods for producing deepfakes are deep learning-based and involve training generative neural network architectures such as autoencoders or generative adversarial networks (GANs).
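
For readers curious about the mechanics, here is a minimal sketch of the adversarial training loop that GAN-based deepfakes build on, assuming PyTorch. The tiny networks and random tensors are illustrative placeholders, not a production face-swapping pipeline.

```python
# Minimal sketch of the adversarial training loop behind GAN-based deepfakes.
# Illustrative only: toy networks and random "images" stand in for real face
# data and production-scale architectures.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, image_dim)   # placeholder for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: learn to separate real images from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce images the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks compete, the generator's output becomes progressively harder to distinguish from real data, which is what makes convincing fakes possible.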

Use of deepfakes for remote jobs

In a public announcement, the FBI Internet Crime Complaint Center (IC3) revealed that people are using deepfakes to impersonate others when applying for remote jobs in the post-COVID world. The center has also detected an increase in complaints involving the use of deepfakes and stolen Personally Identifiable Information (PII).

Deepfakes use a video, an image, or edited recordings to make someone appear to be doing something they never did. The reported positions include jobs connected to databases, software, and information technology. Some of these positions would grant access to customer PII, financial data, corporate IT databases, and proprietary information, which could lead to serious harm for the people or businesses concerned. Applicants who used deepfakes during interviews were presumably unaware that the actions and lip movements captured on video do not exactly match the accompanying audio. According to the FBI, coughing, sneezing, and similar actions are not synchronized with the video shown during the interviews.
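
The mismatch the FBI describes can be thought of as a loss of correlation between audible events and visible lip motion. The toy sketch below illustrates the idea using NumPy; the per-frame audio-energy and mouth-openness signals are assumed to be already extracted, which in practice would require speech processing and facial landmark tracking.

```python
# Toy illustration of audio-visual desynchronization: if audible events
# (speech, coughs, sneezes) don't line up with visible mouth motion, the
# correlation between the two per-frame signals drops toward zero.
import numpy as np

def sync_score(audio_energy: np.ndarray, mouth_openness: np.ndarray) -> float:
    """Pearson correlation between per-frame audio energy and lip motion."""
    a = (audio_energy - audio_energy.mean()) / audio_energy.std()
    v = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    return float(np.mean(a * v))

rng = np.random.default_rng(0)
speech = np.clip(rng.normal(0.5, 0.3, 300), 0, None)  # per-frame audio energy
synced = speech + rng.normal(0, 0.05, 300)            # lips follow the audio
spoofed = rng.permutation(synced)                     # lips unrelated to audio

print(f"genuine clip:   {sync_score(speech, synced):.2f}")   # near 1.0
print(f"deepfaked clip: {sync_score(speech, spoofed):.2f}")  # near 0.0
```

Real detection systems are far more sophisticated, but the underlying signal is the same one interviewers can notice with the naked eye.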

Deepfakes: a dangerous AI crime threat

Deepfakes, fake audio and video content, were rated the most serious AI-enabled crime threat in a study published in Crime Science in 2020. According to the study, people have a strong inclination to believe what they see and hear for themselves, which lends such convincing images and sound their credibility. Over time, it could become very easy to discredit public figures and extract money by impersonating others. This could, in turn, make people distrustful of all such content and cause broader harm to society.

According to Dr. Matthew Caldwell, digital crimes are easy to share, repeat, and even sell, which makes it possible to market criminal techniques and offer crime as a service. Criminals may therefore outsource the more challenging parts of their AI-based crimes.
