• Home
  • Cyber Crime
  • Cyber warfare
  • APT
  • Data Breach
  • Deep Web
  • Digital ID
  • Hacking
  • Hacktivism
  • Intelligence
  • Internet of Things
  • Laws and regulations
  • Malware
  • Mobile
  • Reports
  • Security
  • Social Networks
  • Terrorism
  • ICS-SCADA
  • POLICIES
  • Contact me

Cybercriminals Are Targeting AI Conversational Platforms

Pierluigi Paganini October 09, 2024

Resecurity reports a rise in attacks on AI Conversational platforms, targeting chatbots that use NLP and ML to enable automated, human-like interactions with consumers.

Resecurity has observed a spike in malicious campaigns targeting AI agents and Conversational AI platforms that leverage chatbots to provide automated, human-like interactions for consumers. Conversational AI platforms are designed to facilitate natural interactions between humans and machines using technologies like Natural Language Processing (NLP) and Machine Learning (ML). These platforms enable applications such as chatbots and virtual agents to engage in meaningful conversations, making them valuable tools across various industries.

Chatbots are a fundamental part of conversational AI platforms, designed to simulate human conversations and enhance user experiences. Such components can be viewed as a subclass of AI agents responsible for orchestrating the communication workflow between the end user (consumer) and the AI. Financial institutions (FIs) are widely implementing such technologies to accelerate customer support and internal workflows, which may also introduce compliance and supply chain risks. Many such services are not fully transparent about the data protection and data retention measures in place; operating as a ‘black box’, their associated risks are not immediately visible. That may explain why major tech companies restrict employee access to similar AI tools, particularly those provided by external sources, out of concern that these services could exploit potentially proprietary data submitted to them.

Unlike traditional chatbots, conversational AI chatbots can offer personalized tips and recommendations based on user interactions. This capability enhances the user experience by providing tailored responses that meet individual needs. Bots can collect valuable data from user interactions, which can be analyzed to gain insights into customer preferences and behaviors. This information can inform business strategies and improve service offerings. At the same time, it creates a major risk in terms of data protection, as the data collected from users may reveal sensitive information due to personalized interactions. Another important aspect is whether the collected user input will be retained for further training and whether such data will later be sanitized to minimize the disclosure of PII (Personally Identifiable Information) and other data that may impact user privacy in the event of a breach.
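The sanitization step described above can be sketched in a few lines. This is a minimal illustration, not a production PII scrubber: the regex patterns below are assumptions for the sketch (real detection needs locale-aware rules or a trained NER model), and the placeholder labels are hypothetical.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not a
# complete PII taxonomy. Order matters: the ID pattern runs before any
# broader numeric matching would.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ID_NUMBER": re.compile(r"\b\d{9,12}\b"),  # national-ID-like digit runs
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def sanitize_transcript(text: str) -> str:
    """Replace likely PII with typed placeholders before retention."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(sanitize_transcript("My email is jane.doe@example.com and my ID is 784123456789."))
# → My email is [EMAIL] and my ID is [ID_NUMBER].
```

Running transcripts through a step like this before they enter a training or retention pipeline limits what a later breach of that pipeline can expose.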

One of the key categories of Conversational AI platforms is AI-powered Call Center Software and Customer Experience Suites. Such solutions utilize purpose-built chatbots to interact with consumers by processing their input and generating meaningful insights. The implementation of AI-powered solutions like these is especially significant in fintech, e-commerce, and e-government, where the number of end consumers is substantial and the volume of information to be processed makes manual human interaction nearly impossible or, at least, commercially and practically ineffective. Trained AI models optimize feedback to consumers and assist with follow-up requests, reducing response times and offloading human-intensive procedures to AI.

At some point, conversational AI platforms begin to replace traditional communication channels. Instead of “old-school” email messaging, these platforms enable interaction via AI agents that deliver fast responses and provide multi-level navigation across services of interest in near real-time. The evolution of technology has also led adversaries to adjust their tactics, looking to exploit the latest trends and dynamics in the global ICT market for their own benefit. Resecurity detected notable interest from both the cybercriminal community and state actors in conversational AI platforms, due to the large number of consumers and the massive volumes of information processed during interactions and personalized sessions supported by AI agents.

On October 8, 2024, Resecurity identified a posting on the Dark Web related to the monetization of data stolen from one of the major AI-powered cloud call center solutions in the Middle East. The threat actor gained unauthorized access to the platform’s management dashboard, which contained over 10,210,800 conversations between consumers and AI agents (bots). The stolen data could be used to orchestrate advanced fraudulent activities as well as for other cybercriminal purposes involving AI. The breached communications between AI agents and consumers also revealed personally identifiable information (PII), including national ID documents and other sensitive details provided to address specific requests. An adversary could apply data mining and extraction techniques to acquire records of interest and use them in advanced phishing scenarios and for other offensive cyber purposes.

As a result of the compromise, adversaries could access specific customer sessions, steal data, and learn the context of the interaction with the AI agent, which could later enable session hijacking. This vector may be especially effective in fraud and social engineering campaigns in which the adversary focuses on extracting payment information from the victim under the pretext of KYC verification or technical support from a specific financial institution or payment network. Many conversational AI platforms allow users to switch between an AI-assisted operator and a human; a bad actor could intercept the session at that point and control the dialogue from there. Exploiting user trust, bad actors could ask victims to provide sensitive information or perform certain actions (for example, confirming an OTP) that could be used in fraudulent schemes. Resecurity forecasts a variety of social engineering schemes that could be orchestrated by abusing and gaining access to trusted conversational AI platforms.
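One defensive control that follows from this scenario: since a legitimate support agent should never solicit OTPs, PINs, or passwords, a platform could flag outbound agent-side messages that do. The sketch below is a hypothetical keyword-based guardrail, an assumption for illustration; a production system would combine it with an intent classifier and session-integrity checks.

```python
import re

# Hypothetical rules: legitimate agents should never ask for these.
# The patterns are illustrative assumptions, not a complete rule set.
SOLICITATION_RULES = [
    re.compile(r"\b(one[- ]?time (pass)?code|OTP|verification code)\b", re.I),
    re.compile(r"\b(card number|CVV|PIN)\b", re.I),
    re.compile(r"\b(password|passphrase)\b", re.I),
]

def flag_outbound_message(message: str) -> bool:
    """Return True if an agent-side message solicits credentials or OTPs."""
    return any(rule.search(message) for rule in SOLICITATION_RULES)

print(flag_outbound_message("Please read me the OTP you just received"))   # → True
print(flag_outbound_message("Your ticket has been escalated to a specialist"))  # → False
```

Flagged messages could be blocked or routed for review before reaching the consumer, narrowing the window in which a hijacked session can be abused.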

The end victim (consumer) remains completely unaware when the session is intercepted by an adversary and continues to interact with the AI agent, believing the session is secure and that the suggested course of action is legitimate. The adversary may exploit the trust the victim places in the AI platform to obtain sensitive information, which could later be used for payment fraud and identity theft.

Besides the issue of retained PII stored in communications between the AI agent and end users, bad actors were also able to target access tokens, which enterprises use to integrate the service with the APIs of external services and applications. According to Resecurity, given how deeply external AI systems now penetrate enterprise infrastructure and the massive volumes of data they process, implementing them without a proper risk assessment should be considered an emerging IT supply chain cybersecurity risk.

The experts from Resecurity outlined the need for AI trust, risk, and security management (TRiSM) and for Privacy Impact Assessments (PIAs) to identify and mitigate the potential or known impacts an AI system may have on privacy, along with increased attention to supply chain cybersecurity. Conversational AI platforms have already become a critical element of the modern IT supply chain for major enterprises and government agencies; protecting them will require a balance between traditional cybersecurity measures relevant to SaaS (Software-as-a-Service) and measures specialized for and tailored to AI, highlighted the threat research team at Resecurity.

The EU AI Act and other regulatory frameworks in North America, China, and India are already establishing rules to manage the risks of AI applications, and enterprises should be aware of these developments, added the cybersecurity company. For example, the recent PDPC AI Guidelines in Singapore already encourage businesses to be more transparent in seeking consent for personal data use through disclosure and notifications. Businesses must ensure that AI systems are trustworthy, giving consumers confidence in how their personal data is used.
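On the access-token point specifically, a basic hygiene step is to keep integration tokens out of the chatbot's code and exported configuration, so a compromised dashboard or repository does not yield working credentials. The sketch below loads the token from the environment and fails fast if it is absent; the variable name is a hypothetical placeholder, and a secrets manager with rotation would be the fuller answer.

```python
import os

def load_integration_token(env_var: str = "CRM_API_TOKEN") -> str:
    """Fetch an API integration token from the environment at startup.

    Hardcoded tokens in chatbot configs are exactly what became
    harvestable in the breach scenario described above; environment
    variables (or a secrets manager) keep them out of exported
    dashboards and source repositories. The variable name here is a
    hypothetical placeholder.
    """
    token = os.environ.get(env_var)
    if not token:
        # Fail fast rather than running with a missing credential.
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return token
```

Failing at startup makes a missing or revoked token an operational error instead of a silent fallback to an embedded secret.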

Follow me on Twitter: @securityaffairs and Facebook

Pierluigi Paganini

(SecurityAffairs – hacking, AI Conversational Platforms)

