Security

The risk of pasting confidential company data into ChatGPT

Experts warn that employees are feeding sensitive corporate data to ChatGPT, the popular artificial intelligence chatbot.

Researchers from Cyberhaven Labs analyzed ChatGPT usage by 1.6 million workers at companies across industries. They reported that 5.6% of them have used it in the workplace and 4.9% have pasted company data into the chatbot since it launched. ChatGPT uses this data to build its knowledge base, which means information submitted by users could surface in responses shown to others.

The use of ChatGPT is becoming a serious problem in the workplace: it can potentially cause the leak of sensitive and confidential data. For this reason, companies like JP Morgan and Verizon are blocking access to the chatbot over concerns about confidential data.

The experts reported that less than one percent (0.9%) of employees are responsible for 80% of the leaks caused by pasting company data into the chatbot. This percentage could rapidly increase in the coming months as the technology is integrated into multiple services, for example via the ChatGPT API.

Employees believe that the artificial intelligence chatbot developed by OpenAI can significantly improve their productivity.

The experts also warn that enterprise security software cannot monitor employees' use of ChatGPT or prevent the leak of sensitive and confidential company data.

“Since ChatGPT launched publicly, 5.6% of knowledge workers have tried using it at least once in the workplace. Furthermore, 2.3% of employees have put confidential company data into ChatGPT.” reads the report published by Cyberhaven Labs. “Despite a growing number of companies outright blocking access to ChatGPT, usage continues to grow exponentially. On March 1, our product detected a record 3,381 attempts to paste corporate data into ChatGPT per 100,000 employees, defined as ‘data egress’ events in the chart below.”

The researchers also monitored workers copying data out of the popular chatbot and pasting it elsewhere, such as into a company email, a Google Doc, or a source code editor. They noticed that employees copy data out of the chatbot more often than they paste company data into it, at a ratio of nearly 2-to-1.

The researchers pointed out that the average company leaks sensitive data to ChatGPT hundreds of times each week. For example, during the week of February 26 – March 4, workers at the average company with 100,000 employees pasted confidential documents into the AI-based chatbot 199 times, client data 173 times, and source code 159 times.
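To put those weekly figures in perspective, a minimal sketch of the arithmetic (using only the three per-week counts reported by Cyberhaven Labs; the normalization per 1,000 employees is my own illustration, not a figure from the report):

```python
# Figures reported by Cyberhaven Labs for the week of Feb 26 - Mar 4,
# at the average company with 100,000 employees.
weekly_egress = {
    "confidential documents": 199,
    "client data": 173,
    "source code": 159,
}

# Total data-egress events for that week across the three categories.
total_events = sum(weekly_egress.values())

# Illustrative normalization: events per 1,000 employees per week.
per_1k_employees = total_events / 100_000 * 1_000

print(total_events)       # 531
print(per_1k_employees)   # 5.31
```

Even at roughly five events per 1,000 employees per week, a single leaked document or code snippet can be enough to cause serious damage.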

We cannot demonize the use of the popular platform in the workplace; however, it is essential to spread awareness about the risks of improper use of such technology.

Let me close with this question I asked the chatbot:

How risky is it to provide company data to ChatGPT?

“As a large language model, ChatGPT doesn’t have the ability to store or acquire company or personal data. This means that there is no inherent risk in using ChatGPT to discuss general issues or ask knowledge-related questions.

However, if you provide confidential or sensitive information about a company or person, that information could be vulnerable to potential security or privacy threats associated with online communication. In general, it’s always important to exercise due diligence when discussing sensitive information online.

In summary, ChatGPT doesn’t represent a risk to the security of company data in itself, but it’s always important to be mindful of online privacy and security and carefully consider what kind of information you share on a chatbot platform.”


Pierluigi Paganini

(SecurityAffairs – hacking, AI)

Pierluigi Paganini is a member of the ENISA (European Union Agency for Network and Information Security) Threat Landscape Stakeholder Group and the Cyber G7 Group. He is also a Security Evangelist, Security Analyst, and Freelance Writer. Editor-in-Chief of "Cyber Defense Magazine", Pierluigi is a cybersecurity expert with over 20 years of experience in the field and a Certified Ethical Hacker (EC-Council, London). A passion for writing and a strong belief that security is founded on sharing and awareness led Pierluigi to found the security blog "Security Affairs", recently named a Top National Security Resource for the US. Pierluigi is a member of "The Hacker News" team and writes for major publications in the field such as Cyber War Zone, ICTTF, Infosec Island, Infosec Institute, The Hacker News Magazine, and many other security magazines. He is the author of the books "The Deep Dark Web" and "Digital Virtual Currency and Bitcoin".
