Deepfake Trends and Threats: VPNRanks Predicts 8 in 10 Will Encounter Deepfakes by 2025

  • Last updated May 16, 2024
  • Written by Editor
Deepfakes, fake videos created to show events that never happened, are quickly spreading online and pose a significant threat to online safety and security. Their deceptive nature can erode public trust, damage reputations, and even be used to facilitate criminal activity.


“VPNRanks’ forecasts predict a substantial rise in deepfakes in 2024, highlighting the urgency of robust cybersecurity measures. By 2025, an estimated 8 out of 10 people are likely to encounter a deepfake.”

[Figure: Deepfake incident count forecast for 2024]


According to Home Security Heroes, the total number of deepfake videos online in 2023 reached 95,820, a 550% increase since 2019. Video is the dominant format, accounting for more than 90% of deepfake encounters; AI-generated images follow at 5-10%, while audio deepfakes are an emerging concern.

[Figure: Deepfake statistics, 2024]

In this forecasting report by VPNRanks, you will find comprehensive projections for 2024 covering deepfake incidents (specifically non-consensual explicit content and identity fraud), user engagement with that content, and the state of deepfake detection globally.

Furthermore, the report will provide insights into recent notable deepfake cases, current legal frameworks addressing the issue, and available resources for victims of deepfakes.


Deepfake Forecast: Findings by VPNRanks

To reach these key findings, VPNRanks analyzed five years of deepfake statistics and trends.

  1. A 50-60% rise in deepfake incidents is expected for 2024, reaching 140,000-150,000 cases globally.
  2. Deepfake explicit content is projected to hit 4,100 videos, attract 40.25 million monthly visitors, and see a 20% increase in video views.
  3. Global deepfake-related identity fraud attempts are forecasted to reach 50,000 by 2024.
  4. Around 20,000 deepfake crime attempts are expected to be detected globally in 2024 due to advancements in detection models.

*Disclaimer: These findings are based on analyses of cybersecurity industry reports from the past five years, primarily by Statista, Home Security Heroes, Sumsub, and DeepTrace.


What Is a Deepfake?

A deepfake is a form of synthetic media created with artificial intelligence (AI), specifically deep learning algorithms and Generative Adversarial Networks (GANs), to manipulate or fabricate believable images, videos, or audio depicting things that never happened.

The technology can convincingly alter a person’s appearance, voice, or facial expressions, creating highly realistic but fabricated media that can deceive viewers.
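At the core of most deepfake systems is the adversarial training loop between a generator, which produces fake media, and a discriminator, which tries to tell fake from real. The toy sketch below (written in PyTorch, with layer sizes and a tiny flattened "image" chosen purely for illustration) shows that loop in its simplest form; it is an assumed, minimal example, not the pipeline used by any real deepfake tool, which would involve far larger models and specialized face-swapping or reenactment components.

```python
# Minimal conceptual sketch of the GAN idea behind deepfakes (illustrative only).
import torch
from torch import nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes, not realistic for video

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (0 = fake, 1 = real).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial update: the discriminator learns to spot fakes,
    then the generator learns to fool it."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Update the discriminator on real and generated samples.
    noise = torch.randn(batch_size, latent_dim)
    fakes = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Update the generator so the discriminator calls its fakes "real".
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example: one step on a random stand-in batch (a placeholder for real face images).
train_step(torch.randn(16, image_dim))
```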


VPNRanks’ 2023 research on the global state of deepfakes analyzes deepfake technology and cybercrime statistics from the past five years to predict trends for 2024. The findings are crucial for understanding potential future threats and for preparing effective strategies to mitigate the risks associated with deepfake technology and cyber fraud.

These statistics underscore the urgent need for ongoing advancements in cybersecurity measures to combat the increasingly sophisticated use of deepfake technology in various forms of cybercrime.

Expected Rate of Deepfake Incidents in 2024

🛡️A 50-60% rise in deepfake incidents for 2024 is expected, potentially increasing the total to between 140,000 and 150,000 cases globally.

From 2019 to 2023, the trajectory of deepfake incidents has shown a significant rise from early indications of growth to widespread use, particularly in fraud and misinformation.

  1. 2019: The emergence and early growth of deepfake technology were first widely noted around this time.
  2. 2020: There was significant discussion around the growth of deepfakes, with cybercriminals starting to use AI for scams and fraud. An increase in hacker interest in deepfake technology was noted, suggesting a rise in incidents, although exact numbers were not specified. (Home of Cybersecurity News)
  3. 2021: The number of deepfake videos online was reported to be 14,678. (DeepTrace)
  4. 2022: Estimates suggested a continuing increase, but specific figures for 2022 have not been disclosed.
  5. 2023: The number of deepfake videos online surged to 95,820, marking a 550% increase compared to 2019. (Home Security Heroes)

The addition of roughly 81,000 deepfake videos between 2021 and 2023 (from 14,678 to 95,820) represents a significant and concerning trend in the digital content landscape. This surge is largely attributed to the accessibility of cheap, easy-to-use online tools that enable the creation of convincing fake identities and fraudulent activity.

According to Sumsub, deepfake incidents rose by a reported 3,000% in 2023 alone, highlighting the rapid advancement and widespread adoption of artificial intelligence technologies capable of generating convincing fake videos.

Given the exponential growth trajectory of deepfake videos, VPNRanks expects this trend to continue into 2024. Assuming the growth rate is even partially sustained, a further 50-60% increase over the 2023 figures is expected.

A 50-60% rise would push the count to approximately 143,730 to 153,312 deepfake videos by 2024. We should prepare for a significant escalation in deepfake incidents, potentially surpassing 150,000 cases globally in 2024.
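As a quick illustration of the arithmetic behind this range, the snippet below applies the assumed 50-60% growth rates to the 2023 baseline cited above; it is a worked example only, not VPNRanks’ forecasting model.

```python
# Worked example of the 2024 projection from the 2023 baseline.
# The baseline and growth rates are the figures cited above; the code
# itself is only an illustrative sketch.

videos_2023 = 95_820                   # deepfake videos online in 2023 (Home Security Heroes)
low_growth, high_growth = 0.50, 0.60   # assumed 50-60% year-on-year rise

low_2024 = round(videos_2023 * (1 + low_growth))
high_2024 = round(videos_2023 * (1 + high_growth))

print(f"Projected 2024 range: {low_2024:,} to {high_2024:,} videos")
# Projected 2024 range: 143,730 to 153,312 videos
```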

[Figure: Deepfake incident count forecast for 2024]

*This forecast accounts for both increasing technological capabilities and the growing interest in using these tools for benign and malicious purposes alike.


High Engagement with Deepfake Explicit Content in 2024

🛡️The engagement with deepfake explicit content is expected to continue rising, reaching around 4,100 explicit videos available online, 40.25 million monthly visitors, and a 20% increase in total video views.

DeepTrace reports that 96% of all deepfake videos online contain explicit content, an eye-opening figure that underscores the use of deepfakes for non-consensual explicit material.

  1. 2019: Deepfake explicit content was already a significant concern, with AI firm DeepTrace reporting that 96% of all deepfake videos were x-rated, predominantly featuring images of women manipulated without their consent. (DeepTrace)
  2. 2020: 100% of the subjects in deepfake explicit content were women (female celebrities and individuals from the entertainment industry), notably including a large proportion of South Korean K-pop singers. (SkyNet)
  3. 2021: The number of deepfake videos on the internet had doubled since 2018, reaching a total of 14,678 videos. (DeepTrace)
  4. 2022: 3,725 deepfake explicit videos were available on the internet, demonstrating the overwhelming preference for this type of content within the deepfake genre. (Home Security Heroes)
  5. 2023: The top ten dedicated deepfake adult websites amassed over 303 million video views, with monthly traffic reaching nearly 35 million. This level of engagement underscores the widespread consumption and potential societal impact of deepfake technology. (Home Security Heroes)

Over the last five years, engagement with deepfake explicit content has seen notable increases and substantial viewership, reflecting the growing issue and impact of this technology.

  • Between 2019 and 2021, the explicit deepfake video count surged to over 14,000 due to technological advancements.
  • In 2022, 3,725 explicit videos garnered about 35 million monthly visitors, giving 9,391 views per video per month.

According to PCMag, nearly half (48%) of surveyed US men have viewed deepfake explicit content at least once, and 74% reported feeling no guilt about it.

The trends from the past five years, analyzed using a linear regression model, suggest that the explicit deepfake video count might stabilize around 4,000-4,500 in 2024. This stabilization is attributed to ongoing regulatory efforts combined with technological advancements.

Monthly traffic to deepfake adult websites is projected to reach around 40.25 million visitors, representing a 15% increase. Total video views on the top ten deepfake adult websites are expected to surpass 363.6 million, a 20% increase from 2023, indicating sustained and increasing engagement with deepfake explicit content.
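The engagement projections follow the same pattern: the stated percentage increases are applied to the 2023 figures quoted above. A minimal sketch of that arithmetic, illustrative only:

```python
# Illustrative arithmetic behind the 2024 engagement projections.
# Baselines are the 2023 figures cited above; the 15% and 20% increases
# are the growth assumptions stated in this report.

monthly_visitors_2023 = 35_000_000   # ~35 million monthly visitors in 2023
total_views_2023 = 303_000_000       # 303 million views across the top ten sites in 2023

visitors_2024 = monthly_visitors_2023 * 1.15   # +15% traffic
views_2024 = total_views_2023 * 1.20           # +20% views

print(f"Projected monthly visitors: {visitors_2024 / 1e6:.2f} million")  # 40.25 million
print(f"Projected total video views: {views_2024 / 1e6:.1f} million")    # 363.6 million
```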

 

[Figure: Deepfake explicit content views and engagement projections]


Deepfake-Related Identity Fraud

🛡️Global deepfake-related identity fraud attempts are projected to reach 50,000 by 2024.

The top AI-enabled identity fraud scenarios include deepfake-based impersonation attempts targeting facial verification systems and account takeovers.

  1. 2019: Deepfake explicit content was already a concern, with 96% of all deepfake videos being x-rated, mainly featuring non-consensual images of women. (DeepTrace)
  2. 2020: Deepfake technology saw an 84% increase in creation models, leading to more sophisticated fraud attempts. (CSO Online)
  3. 2021: Deepfake-related identity fraud became a growing concern as the number of attempts increased by 330% from 2020 to 2021. (Statista)
  4. 2022: Identity fraud in North America grew significantly, and deepfake use for fraud increased from 0.2% to 2.6% in the U.S. and from 0.1% to 4.6% in Canada. (Sensity AI)
  5. 2023: Globally, deepfake incidents surged tenfold across industries between 2022 and 2023, making AI-powered fraud one of the top five identity fraud types. The top affected sectors include online media, professional services, and healthcare. (Sumsub)

[Figure: Top five industries affected by deepfake identity fraud]

*The data in the image is sourced from Sumsub and reflects the top five industries affected by deepfake identity fraud.

Deepfake technology has significantly impacted identity fraud, with the global rate of identity fraud nearly doubling from 2021 to 2023. This underscores the role of advanced technologies in evolving fraud schemes, making them more sophisticated and increasingly difficult to detect year-on-year.

According to Statista, Sumsub, and Onfido, deepfake-specific fraud cases skyrocketed globally, with North America leading the surge (a 1,740% increase), followed by the Asia-Pacific region (Philippines: +4,500%), Europe (UK: +300%; Europe overall: +780%), Latin America (+410%), and the Middle East & Africa (+450%).

[Figure: Deepfake fraud status globally]

Based on historical trends and an exponential growth model for fraud attempts, deepfake-related identity fraud attempts are expected to reach 50,000 globally in 2024, assuming a fivefold increase over 2023 driven by the growing sophistication and accessibility of deepfake creation tools.

This forecast assumes continued advancements in deepfake creation tools and generative AI, along with a similar regulatory environment with no significant global policies to curb deepfake misuse.
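To make the assumption concrete, a fivefold increase implies a 2023 baseline of roughly 10,000 attempts, which the snippet below back-calculates from the 50,000 target. The baseline is inferred from the stated multiplier, not a figure reported directly by the cited sources.

```python
# Illustrative sketch of the identity-fraud projection.
# The 2023 baseline is back-calculated from the assumed fivefold increase
# and the 50,000 target; it is not a directly reported figure.

target_2024 = 50_000
growth_multiplier = 5   # assumed fivefold increase over 2023

implied_baseline_2023 = target_2024 / growth_multiplier
print(f"Implied 2023 baseline: {implied_baseline_2023:,.0f} attempts")               # 10,000
print(f"Projected 2024 attempts: {implied_baseline_2023 * growth_multiplier:,.0f}")  # 50,000
```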


Rate of Deepfake Crime Detection

🛡️Approximately 20,000 deepfake crime attempts are predicted to be detected globally due to advancements in detection models in 2024.

The rising number of deepfake frauds, which saw a tenfold increase globally between 2022 and 2023 (Sumsub), highlights the urgent need for advanced cybersecurity measures.

  1. 2019: The majority of detection efforts focused on identifying facial manipulation and voice cloning. (DeepTrace)
  2. 2020: Detection models increased by 84%, driven by improved AI-based detection algorithms. (Sensity AI)
  3. 2021: 57% of global consumers believed they could detect a deepfake video, while 43% admitted they might not be able to distinguish manipulated footage. Adoption of deepfake detection tools such as Amber, Reality Defender, and Microsoft’s Video Authenticator increased. (Statista)
  4. 2022: Cybersecurity companies focused on integrating detection measures into existing anti-fraud systems as deepfake identity fraud increased from 0.2% to 2.6% in the U.S. and from 0.1% to 4.6% in Canada. (Sensity AI)
  5. 2023: Cybersecurity measures included liveness biometric checks, non-document verification, and synthetic identity detection to combat deepfake-related fraud. (Sumsub)

Analysis of the previous five years’ data shows exponential year-on-year growth in detection models built to combat deepfake fraud. As confirmed by Sumsub, cybersecurity teams have embraced deep learning models for more accurate detection.

Deepfake creators are becoming more sophisticated, making detection challenging, but improved tools like machine learning and behavioral analysis have helped identify deepfakes.

According to TNW, advancements in biometric verification technologies are proving to be effective deterrents, and continuous innovation in security measures can help counter sophisticated deepfake attacks.

VPNRanks’ analysis of the growth trajectory of deepfake incidents, divided into the categories of non-consensual explicit content and identity fraud, indicates that around 20,000 deepfake crime attempts are expected to be detected globally in 2024. This trend is driven by advances in detection models, enhanced cybersecurity measures, and increased awareness.
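Set against the incident forecast above, roughly 20,000 detections would correspond to only about 13-14% of the projected 140,000-150,000 cases. The quick check below derives that share from the report's own forecasts; the percentage is an illustrative, derived figure, not one reported by the cited sources.

```python
# Illustrative check: detected attempts as a share of projected incidents.
# Both inputs are forecasts stated in this report; the resulting share is
# a derived, illustrative figure.

detected_2024 = 20_000
incidents_low, incidents_high = 140_000, 150_000

share_vs_low = detected_2024 / incidents_low     # against the low incident estimate
share_vs_high = detected_2024 / incidents_high   # against the high incident estimate

print(f"Detection share of projected incidents: {share_vs_high:.1%} to {share_vs_low:.1%}")
# Roughly 13.3% to 14.3%
```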

This rapid rise underscores the urgent need for continuous development of deepfake detection tools to keep pace with evolving fraud tactics.


Recent Deepfake Incidents

Recent deepfake incidents have raised significant concerns about the potential for misinformation, fraud, and privacy violations. Victims of deepfakes often find their images and voices manipulated without consent, leading to reputational damage, emotional distress, and a profound sense of violation. Microsoft has named deepfakes its biggest AI concern because of how realistic the fabricated content can appear.

This technology has increasingly targeted high-profile celebrities and political figures, including:

[Figure: Recent deepfake incidents involving public figures]


Other famous deepfake victims from the entertainment and tech industries include:

  • The Conversation: Taylor Swift, tech weaponization against women.
  • Cinema Blend: Tom Cruise reacts to viral deepfakes.
  • CNET: Deepfake of Mark Zuckerberg hits Instagram.

Beyond celebrities, generative AI has also been misused by criminal and state-backed groups in recent incidents:

OpenAI and Microsoft have joined forces to counter five state-backed threat groups that sought to exploit GPT-4 for phishing campaigns, cybersecurity research, and scripting.

Microsoft reported that hacking groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea sought to improve their hacking strategies using large language models (LLMs).

OpenAI identified two Chinese groups (Charcoal Typhoon and Salmon Typhoon), Iran’s Crimson Sandstorm, North Korea’s Emerald Sleet, and Russia’s Forest Blizzard as the sources of these attacks.

OpenAI quickly deactivated the attackers’ accounts and banned state-backed hacking groups from using its AI products. Despite preventing these attacks, OpenAI acknowledges the challenge of eliminating misuse entirely. In June 2023, it launched a $1 million cybersecurity grant program to enhance AI-driven cybersecurity technologies.

Over 200 entities, including OpenAI, Microsoft, Anthropic, and Google, have since partnered with the Biden Administration to form the AI Safety Institute and the U.S. AI Safety Institute Consortium (AISIC). This initiative addresses AI-generated deepfakes and cybersecurity issues, building on the U.S. AI Safety Institute (USAISI) established following President Biden’s October 2023 executive order on AI safety.


What Does the Law Say?

Deepfakes aren’t inherently illegal, but creators and distributors can easily violate the law. Depending on the content, a deepfake could infringe on copyright, breach data protection laws, or be defamatory if it subjects the victim to ridicule. Additionally, sharing private images without consent is a specific criminal offense that carries a prison sentence of up to two years.

In Britain, the law varies: in Scotland, the legislation covers deepfakes, making it an offense to disclose or threaten to disclose a photo or video that depicts or appears to depict someone in an intimate situation. In England, however, the law explicitly excludes images created solely by altering an existing image. (The Guardian)


Resources Available for Deepfake Victims

If victims lack credible evidence, they may find it challenging to report a deepfake crime. Even if a malicious actor is identified, civil remedies may be unattainable if the individual is outside the U.S. or in a jurisdiction where local legal action proves ineffective.

Nevertheless, victims of online abuse can find support through various resources and organizations dedicated to assisting them, such as:

  • Cyber Civil Rights Initiative (CCRI): Offers a crisis helpline, legal support, and educational resources for victims of online abuse, including deepfake exploitation.
  • Without My Consent: Provides resources, legal information, and a toolkit specifically aimed at combating online harassment and deepfake issues.
  • Electronic Frontier Foundation (EFF): Advocates for digital privacy and offers legal guidance to individuals facing online abuse, including deepfakes.
  • National Network to End Domestic Violence (NNEDV): Offers safety tips and resources for those experiencing technology-based abuse, including the use of deepfakes.
  • The Online SOS Network: Provides crisis support, counseling, and legal guidance for individuals facing online abuse and harassment.
  • CyberSmile Foundation: A global organization offering support, education, and advocacy for victims of cyberbullying and online abuse, including deepfake technology misuse.
  • StopNCII.org (Stop Non-Consensual Intimate Images): Helps victims prevent the sharing of non-consensual intimate images through partnerships with platforms to rapidly remove such content.
  • National Center for Victims of Crime (NCVC): Offers a resource directory for various types of abuse, including deepfake exploitation.
  • Identity Theft Resource Center (ITRC): Assists victims in managing identity theft issues stemming from deepfake exploitation.

Reporting Deepfakes

According to a report by Homeland Security, victims of deepfakes can take several steps to report these attacks:

  • Report incidents to the Federal Bureau of Investigation (FBI) by contacting local FBI offices or the FBI’s 24/7 Cyber Watch at CyWatch@fbi.gov.
  • Contact law enforcement officials who can assist victims by conducting forensic investigations using police reports and evidence gathered.
  • Use the Securities and Exchange Commission’s services to investigate financial crimes.
  • Report inappropriate content and abuse on social media platforms (such as Facebook, Twitter, or Instagram) using each platform’s specific reporting procedures.
  • If the victim is under 18 years of age, report incidents to the National Center for Missing and Exploited Children through their cyber tip line at https://report.cybertip.org.

Methodology

To forecast the trends and implications of deepfake incidents in 2024, I conducted a comprehensive analysis of deepfake statistics and trends over the past five years, drawing on data from multiple reputable cybersecurity industry reports and organizations.

VPNRanks’ methodology for this forecasting report involved the following steps:

Data Collection:

  • Sources: I sourced data from key reports and studies by Statista, Sumsub, DeepTrace, EDsmart, and Home Security Heroes.
  • Reference Years: Data spanning five years, from 2019 to 2023, was collected to identify patterns and trends.

Data Analysis:

  • Trend Analysis: I analyzed the growth trajectory of deepfake incidents and categorized them into non-consensual explicit content and identity fraud.
  • Key Indicators: I identified key indicators such as deepfake video counts, engagement rates, identity fraud attempts, and detection rates.
  • Extrapolation: Based on historical trends, I projected potential deepfake incidents, explicit content engagement, and identity fraud attempts for 2024.

Forecasting Model:

  • Linear Growth Assumption: I applied a linear growth assumption to forecast the potential rise in deepfake incidents (a minimal illustrative sketch follows this list).
  • Assumption Factors: The forecasting model considered advancements in deepfake creation and detection technology, regulatory efforts and enforcement measures, and the growing interest of cybercriminals in using deepfakes for malicious purposes.
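As an illustration of what a linear growth projection of this kind looks like in practice, the sketch below fits a straight-line trend to the two yearly video counts cited earlier in this report (14,678 in 2021 and 95,820 in 2023) and extends it to 2024. It is a simplified, assumed reconstruction, not the exact model used for the forecasts above.

```python
# Minimal sketch of a linear-trend projection, assuming only the two
# yearly counts cited in this report; not the exact forecasting model.
import numpy as np

years = np.array([2021, 2023])
counts = np.array([14_678, 95_820])  # deepfake videos online (DeepTrace; Home Security Heroes)

# Least-squares straight-line fit (exactly determined with two points).
slope, intercept = np.polyfit(years, counts, deg=1)
projected_2024 = slope * 2024 + intercept

print(f"Linear-trend projection for 2024: {projected_2024:,.0f} videos")
# Prints roughly 136,000; the report's 140,000-150,000 range additionally
# reflects the assumed 50-60% year-on-year growth discussed earlier.
```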

Validation and Refinement:

  • Cross-Validation: The projections were cross-validated with the findings from recent cybersecurity industry reports.
  • Refinement: Adjustments were made based on recent developments, including regulatory efforts, technology advancements, and criminal trends.


Conclusion

VPNRanks’ analysis and forecast reveal that deepfake incidents are expected to surge by 50-60% in 2024, potentially reaching 140,000 to 150,000 cases globally.

This significant rise, coupled with an estimated increase in deepfake explicit content to around 4,100 videos and a projected 50,000 deepfake-related identity fraud attempts, underscores the urgent need for robust cybersecurity measures and legal frameworks.

While advancements in detection models are predicted to identify around 20,000 deepfake crime attempts in 2024, this only scratches the surface of the problem. To tackle the rise in deepfakes, we need stronger legislation, advanced detection technology, and comprehensive public awareness campaigns.

