LLM

Pierluigi Paganini February 15, 2024
Nation-state actors are using AI services and LLMs for cyberattacks

Microsoft and OpenAI warn that nation-state actors are using ChatGPT to automate some phases of their attack chains, including target reconnaissance and social engineering attacks. Multiple nation-state actors are exploiting artificial intelligence (AI) and large language models (LLMs), including OpenAI ChatGPT, to automate their attacks and increase their sophistication. According to a study conducted by […]

Pierluigi Paganini December 02, 2023
Researchers devised an attack technique to extract ChatGPT training data

Researchers devised an attack technique that could have been used to trick ChatGPT into disclosing training data. A team of researchers from several universities and Google have demonstrated an attack technique against ChetGPT that allowed them to extract several megabytes of ChatGPT’s training data. The researchers were able to query the model at a cost […]

Pierluigi Paganini August 03, 2023
OWASP Top 10 for LLM (Large Language Model) applications is out!

The OWASP Top 10 for LLM (Large Language Model) Applications version 1.0 is out, it focuses on the potential security risks when using LLMs. OWASP released the OWASP Top 10 for LLM (Large Language Model) Applications project, which provides a list of the top 10 most critical vulnerabilities impacting LLM applications. The project aims to educate […]

Pierluigi Paganini June 14, 2023
LLM meets Malware: Starting the Era of Autonomous Threat

Malware researchers analyzed the application of Large Language Models (LLM) to malware automation investigating future abuse in autonomous threats. Executive Summary In this report we shared some insight that emerged during our exploratory research, and proof of concept, on the application of Large Language Models to malware automation, investigating how a potential new kind of […]