What is a deepfake and how can it pose a threat?

A deepfake aims to create audio or video that can fool human viewers, but should this be seen as a positive or a negative development?

Technology has developed rapidly, giving people access to increasingly advanced apps and software for altering images and videos. Users can change the way they look, and some can now create false videos that look strikingly real.

Deepfakes are created when artificial intelligence (AI) is used to alter or replace something within a video. The term "deepfake" comes from the underlying technology "deep learning," which is a form of AI. Deep learning algorithms, which teach themselves how to solve problems when given large sets of data, are used to swap faces in video and digital content to make realistic-looking fake media. 
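To make the face-swapping idea concrete, the sketch below outlines the classic deepfake architecture: a shared encoder learns a common representation of faces, and a separate decoder is trained for each identity, so encoding person A's face and decoding it with person B's decoder produces the swap. This is a minimal, illustrative sketch assuming PyTorch, toy fully connected layers and made-up dimensions, not any particular deepfake tool.

  import torch
  import torch.nn as nn

  LATENT = 256          # size of the shared latent code (illustrative choice)
  IMG = 64 * 64 * 3     # a flattened 64x64 RGB face crop (illustrative choice)

  class Encoder(nn.Module):
      # Shared encoder: learns identity-agnostic face features.
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(),
                                   nn.Linear(1024, LATENT))
      def forward(self, x):
          return self.net(x)

  class Decoder(nn.Module):
      # Per-identity decoder: reconstructs one person's face from the latent code.
      def __init__(self):
          super().__init__()
          self.net = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(),
                                   nn.Linear(1024, IMG), nn.Sigmoid())
      def forward(self, z):
          return self.net(z)

  encoder = Encoder()     # shared between both identities
  decoder_a = Decoder()   # would be trained only on faces of person A
  decoder_b = Decoder()   # would be trained only on faces of person B

  # After training, the "swap" is simply: encode A's face, decode with B's decoder.
  face_a = torch.rand(1, IMG)              # placeholder for a real face crop
  swapped_face = decoder_b(encoder(face_a))

Real tools build on this basic encoder/decoder idea with convolutional networks, face alignment and blending steps, but the swapping principle is the same.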

The emergence of a new generation of digitally manipulated media has given rise to concerns about possible misuse.

 

Do deepfakes pose a threat to organisations?

According to startup Deeptrace, the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching over 50,000 at their peak. 

Attestiv, a data authentication startup, surveyed US-based professionals about threats to their employers related to altered or manipulated digital media. 

Over 80% of respondents said that manipulated media poses a potential risk to their organisation, yet fewer than 30% said they have taken steps to mitigate the fallout from a deepfake attack. A further 25% said they are planning to take action, but 46% said that their organisation lacks a plan or that they personally lack knowledge of the plan.

Attestiv also asked respondents to consider possible solutions to the deepfake problem. When asked, “What’s the best defense organisations can take against altered digital media?”, 48% of survey takers felt the best defense was automated detection and filtering solutions, while 38% believed that training employees to detect deepfakes was the better course of action.

“Training employees to detect deepfakes may not be a viable solution given the likelihood that they are rapidly becoming undetectable to human inspection,” the Attestiv report’s authors wrote. “It appears there may be a need for further education regarding the deepfake threat and the trajectory the technology is taking.”

 

How can you tell if something is a deepfake?

According to security company Kaspersky, these are some signs of a deepfake video:

  • Jerky movement
  • Shifts in lighting from one frame to the next
  • Shifts in skin tone
  • Strange blinking or no blinking at all
  • Lips poorly synched with speech
  • Digital artifacts in the image

Emerging technologies are helping video makers authenticate their content. A cryptographic algorithm can be used to insert hashes at set intervals throughout a video; if the video is altered, the hashes will change. AI and blockchain can also be used to register a tamper-proof digital fingerprint for videos.
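As a rough illustration of the hashing idea, the sketch below fingerprints a video file by hashing it in fixed-size segments and later recomputes the digests to check whether any part has changed. It is a simplified, assumed design, not a specific product: a production system would hash decoded frames at set time intervals and sign or anchor the digests (for example, on a blockchain).

  import hashlib

  SEGMENT_BYTES = 1_000_000  # segment size chosen arbitrarily for illustration

  def fingerprint(path: str) -> list[str]:
      # Return one SHA-256 digest per fixed-size segment of the file.
      digests = []
      with open(path, "rb") as f:
          while chunk := f.read(SEGMENT_BYTES):
              digests.append(hashlib.sha256(chunk).hexdigest())
      return digests

  def is_unaltered(path: str, recorded: list[str]) -> bool:
      # True only if every segment hash still matches the recorded fingerprint.
      return fingerprint(path) == recorded

  # Record the fingerprint when the video is published...
  # recorded = fingerprint("original.mp4")
  # ...then verify any copy of it later:
  # print(is_unaltered("copy_of_video.mp4", recorded))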

Another way to repel deepfake attempts is to use a program that inserts specially designed digital ‘artifacts’ into videos to conceal the patterns of pixels that face detection software relies on.
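The snippet below is a toy sketch of that idea, not the program referred to above: it adds a small FGSM-style perturbation that nudges a detector’s ‘face’ score downwards while staying nearly invisible. The tiny untrained network stands in for a real face detector, and the function name and epsilon value are assumptions made for illustration.

  import torch
  import torch.nn as nn

  # Untrained stand-in for a real face detector (logits: [no_face, face]).
  detector = nn.Sequential(
      nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
      nn.Linear(8, 2),
  )

  def cloak(frame: torch.Tensor, eps: float = 2 / 255) -> torch.Tensor:
      # Return a copy of the frame perturbed to lower the detector's face score.
      frame = frame.clone().requires_grad_(True)
      face_score = detector(frame)[:, 1].sum()
      face_score.backward()
      # Step against the gradient of the face score; keep pixel values in [0, 1].
      return (frame - eps * frame.grad.sign()).clamp(0, 1).detach()

  video_frame = torch.rand(1, 3, 64, 64)   # placeholder RGB frame in [0, 1]
  protected_frame = cloak(video_frame)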

Ultimately, deepfakes are becoming more advanced, which makes them harder to spot and to stop. Yet deepfakes can also have a positive impact on our lives: AI-generated synthetic media can be empowering and a great enabler, giving people a voice, a purpose, and the ability to make an impact at scale and speed.

However, as access to synthetic media technology increases, so does the risk of exploitation. Deepfakes can be used to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions.

 
