How can threat actors use generative artificial intelligence?

Pierluigi Paganini December 02, 2024

Generative Artificial Intelligence (GAI) is rapidly revolutionizing various industries, including cybersecurity, allowing the creation of realistic and personalized content.

The capabilities that make Generative Artificial Intelligence a powerful tool for progress also make it a significant threat in the cyber domain. The use of GAI by malicious actors is becoming increasingly common, enabling them to conduct a wide range of cyberattacks. From generating deepfakes to enhancing phishing campaigns, GAI is evolving into a tool for large-scale cyber offenses.

GAI has captured the attention of researchers and investors for its transformative potential across industries. Unfortunately, its misuse by malicious actors is altering the cyber threat landscape. Among the most concerning applications of Generative Artificial Intelligence are the creation of deepfakes and disinformation campaigns, which are already proving to be effective and dangerous.

Deepfakes are media content—such as videos, images, or audio—created using GAI to realistically manipulate faces, voices, or even entire events. The increasing sophistication of these technologies has made it harder than ever to distinguish real content from fake. This makes deepfakes a potent weapon for attackers engaged in disinformation campaigns, fraud, or privacy violations.
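On the defensive side, one line of published research looks for frequency-domain artifacts that generative models tend to leave in synthetic images. The following toy sketch illustrates only the general idea; the radial cutoff and the scoring are illustrative assumptions, not a calibrated detector:

```python
import numpy as np

def high_freq_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of an image's spectral energy above a radial frequency cutoff.

    Toy heuristic: generative models can leave characteristic artifacts in
    the frequency spectrum of their outputs. The cutoff here is an
    illustrative choice, not a tuned detection threshold.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the center of the shifted spectrum, normalized per axis
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    return float(power[r > cutoff].sum() / power.sum())
```

Scores near 0 indicate energy concentrated at low frequencies, typical of smooth natural content; a real detector would be trained on labeled genuine and synthetic media rather than relying on a fixed cutoff.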

A study by the Massachusetts Institute of Technology (MIT) presented in 2019 revealed that deepfakes generated by AI could deceive humans up to 60% of the time. Given the advancements in AI since then, it is likely that this percentage has increased, making deepfakes an even more significant threat. Attackers can use them to fabricate events, impersonate influential figures, or create scenarios that manipulate public opinion.

The use of Generative Artificial Intelligence in disinformation campaigns is no longer hypothetical. According to a report by the Microsoft Threat Analysis Center (MTAC), Chinese threat actors are using GAI to conduct influence operations targeting foreign countries, including the United States and Taiwan. By generating AI-driven content, such as provocative memes, videos, and audio, these actors aim to exacerbate social divisions and influence voter behavior.

For example, these campaigns leverage fake social media accounts to post questions and comments about divisive internal issues in the U.S. The data collected through these operations can provide insights into voter demographics, potentially influencing election outcomes. Microsoft experts believe that China’s use of AI-generated content will expand to influence elections in countries like India, South Korea, and the U.S.


GAI is also a boon for attackers seeking financial gain. By automating the creation of phishing emails, malicious actors can scale their campaigns, producing highly personalized and convincing messages that are more likely to deceive victims.

An example of this misuse is the creation of fraudulent social media profiles using GAI. In 2022, the Federal Bureau of Investigation (FBI) warned of an uptick in fake profiles designed to exploit victims financially. GAI allows attackers to generate not only realistic text but also photos, videos, and audio that make these profiles appear genuine.

Additionally, platforms like FraudGPT and WormGPT, launched in mid-2023, provide tools specifically designed for phishing and business email compromise (BEC) attacks. For a monthly fee, attackers can access sophisticated services that automate the creation of fraudulent emails, increasing the efficiency of their scams.

Another area of concern is the use of GAI to develop malicious code. By automating the generation of malware variants, attackers can evade detection mechanisms employed by major anti-malware engines. This makes it easier for them to carry out large-scale attacks with minimal effort.

One of the most alarming aspects of GAI is its potential for automating complex attack processes. This includes creating tools for offensive purposes, such as malware or scripts designed to exploit vulnerabilities. GAI models can refine these tools to bypass security defenses, making attacks more sophisticated and harder to detect.

While the malicious use of GAI is still in its early stages, it is gaining traction among cybercriminals and state-sponsored actors. The increasing accessibility of GAI through “as-a-service” models will only accelerate its adoption. These services allow attackers with minimal technical expertise to execute advanced attacks, democratizing cybercrime.

In disinformation campaigns, the impact of GAI is already visible; in phishing and financial fraud, tools like FraudGPT show how attackers can scale their operations. The automation of malware development is another worrying trend, as it lowers the barrier to entry for cybercrime.

Leading security companies, as well as major GAI providers like OpenAI, Google, and Microsoft, are actively working on solutions to mitigate these emerging threats. Efforts include developing robust detection mechanisms for deepfakes, enhancing anti-phishing tools, and creating safeguards to prevent the misuse of GAI platforms.
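Anti-phishing tools of the kind these vendors build typically layer simple heuristics beneath machine-learning classifiers. A minimal, purely illustrative sketch of such a heuristic layer follows; the phrase list, weights, and red flags are invented for this example and are not taken from any real product:

```python
import re

# Illustrative rule weights only -- invented for this sketch.
SUSPICIOUS_PHRASES = {
    "verify your account": 2.0,
    "urgent action required": 2.0,
    "click the link below": 1.5,
    "wire transfer": 1.5,
    "password": 1.0,
}

# Links pointing at a bare IP address instead of a domain name
IP_URL_RE = re.compile(r"https?://(\d{1,3}\.){3}\d{1,3}")

def phishing_score(subject: str, body: str) -> float:
    """Return a heuristic suspicion score for an email (higher = more suspect)."""
    text = f"{subject}\n{body}".lower()
    score = sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)
    if IP_URL_RE.search(text):
        score += 3.0  # raw-IP links are a classic phishing red flag
    if subject and subject.isupper():
        score += 1.0  # ALL-CAPS subject lines
    return score
```

A production filter would combine many such signals with a trained classifier and sender-reputation data; the point here is only that heuristic layers remain part of the defense even as GAI makes the text itself more convincing.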

However, the rapid pace of technological advancement means that attackers are always a step ahead. As GAI becomes more sophisticated and accessible, the challenges for defenders will grow exponentially.

Generative Artificial Intelligence is a double-edged sword. While it offers immense opportunities for innovation and progress, it also presents significant risks when weaponized by malicious actors. The ability to create realistic and personalized content has already transformed the cyber threat landscape, enabling a new era of attacks ranging from deepfakes to large-scale phishing campaigns.

As the technology evolves, so will its misuse. It is imperative for governments, businesses, and individuals to recognize the potential dangers of GAI and take proactive measures to address them. Through collaboration and innovation, we can harness the benefits of GAI while mitigating its risks, ensuring that this powerful tool serves humanity rather than harming it.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, generative artificial intelligence)

