OpenAI addressed multiple severe vulnerabilities in ChatGPT that could have allowed attackers to take over user accounts and view chat histories.
One of the issues was a “Web Cache Deception” vulnerability reported by bug bounty hunter and Shockwave founder Gal Nagli; it could lead to an account takeover.
The expert discovered the vulnerability while analyzing the requests that handle ChatGPT’s authentication flow. The following GET request caught his attention:
https://chat.openai[.]com/api/auth/session
“Basically, whenever we login to our ChatGPT instance, the application will fetch our account context, as in our Email, Name, Image and accessToken from the server, it looks like the attached image below” Nagli wrote on Twitter detailing the bug.
— Nagli (@naglinagli) March 24, 2023
The expert explained that to exploit the flaw, a threat actor can craft a dedicated .css path to the session endpoint (/api/auth/session) and send the link to the victim. When the victim visits the link, the response is cached, and the attacker can then harvest the victim’s JWT credentials and take full control of their account.
Attack Flow:
1. The attacker crafts a dedicated .css path of the /api/auth/session endpoint.
2. The attacker distributes the link (either directly to a victim or publicly).
3. The victim visits the legitimate link.
4. The response is cached.
5. The attacker harvests the victim’s JWT credentials.
Access granted.
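The flow above can be sketched as a toy simulation. Everything here is hypothetical (the session payload, the origin routing, and the “cache anything ending in .css” rule are illustrative assumptions, not OpenAI’s actual stack), but it captures the mechanics of web cache deception:

```python
# Hypothetical web-cache-deception simulation -- NOT OpenAI's real infrastructure.
SESSION = {"email": "victim@example.com", "accessToken": "jwt-abc123"}  # fake data

def origin(path, user):
    # The origin routes anything under /api/auth/session to the session
    # handler, ignoring the bogus trailing segment the attacker appended.
    if path.startswith("/api/auth/session"):
        return SESSION if user == "victim" else None
    return "404"

cache = {}

def cdn_fetch(path, user):
    # Flawed cache rule: cache any response whose path *looks* static
    # (ends in ".css"), keyed only by path -- this is what enables the attack.
    if path in cache:
        return cache[path]
    resp = origin(path, user)
    if path.endswith(".css"):
        cache[path] = resp
    return resp

evil = "/api/auth/session/attacker.css"
victim_view = cdn_fetch(evil, "victim")      # victim clicks the link; response is cached
attacker_view = cdn_fetch(evil, "attacker")  # attacker replays the URL, reads the cached session
```

The attacker never authenticates as the victim; the cache simply serves the victim’s session response to anyone who requests the same crafted URL.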
Nagli praised the OpenAI security team, which quickly addressed the issue by instructing the caching server, via a regex, not to cache the endpoint.
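A cache-exclusion rule of this kind might look like the following sketch. The actual regex OpenAI deployed is not public; the pattern below is an assumption used only to illustrate the approach:

```python
import re

# Hypothetical no-cache rule in the spirit of the regex-based fix:
# refuse to cache any path that hits the session endpoint.
NO_CACHE = re.compile(r"^/api/auth/session")

def cacheable(path):
    # Genuinely static assets remain cacheable; session paths do not.
    return NO_CACHE.match(path) is None
```

With this rule in place, the crafted /api/auth/session/attacker.css path is no longer cached, while ordinary static assets like /static/app.css still are.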
Vulnerability Disclosure Process from @OpenAI:
1. Email sent at 19:54 to disclosure@openai.com
2. First response 20:02
3. First fix attempt 20:40
4. Production fix 21:31
The bad news is that the mitigation implemented by the company only partially addressed the issue. The researcher Ayoub Fathi discovered that it was possible to bypass authentication by targeting another ChatGPT API. An attacker could exploit this bypass technique to access a user’s conversation titles.
Update:
Couple of hours after my tweet I was made aware by a fellow researcher @_ayoubfathi_ and others that there were a number of bypasses to the regex based fix implemented by @OpenAI (which didn't surprise me). I notified the team ASAP once again and https://t.co/g1GJMtxG4E…
— Nagli (@naglinagli) March 25, 2023
How could I have Hacked into any #ChatGPT account, including saved conversations, account status, chat history and more!
A tale of 4 ChatGPT vulnerabilities.
We can discuss it now that the #OpenAI team has confirmed it's completely fixed.
Let me explain:
— Ayoub FATHI 阿尤布 (@_ayoubfathi_) March 25, 2023
here I thought all I could find was a bypass to read someone's conversation titles – which is still bad but not as bad as taking over accounts, correct?
— Ayoub FATHI 阿尤布 (@_ayoubfathi_) March 25, 2023
“GET /backend-api/conversations%0A%0D-testtest.css?offset=0&limit=20 Send it to a victim, and upon accessing it – his own “API” response will be cached, and if you recheck the same URL (i.e. fetching the cached response of the victim), you will be able to see the victim’s HTTP response, which contains the conversations’ titles.” explained the expert Ayoub Fathi on Twitter.
The expert pointed out that all ChatGPT APIs were vulnerable to the bypass, which means that an attacker could exploit the issue to read conversation titles, full chats, and account status.
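The %0A%0D sequence in the crafted path decodes to a newline and carriage return, and a plausible reason such bypasses work is that regex filters often mishandle newlines: by default, `.` in a regex does not match a newline. The rules below are assumptions for illustration (the real filter logic used by OpenAI’s CDN is not public), showing how a fullmatch-style deny rule can fail on a newline-containing path while an “ends in .css” cache rule still fires:

```python
import re
from urllib.parse import unquote

# Hypothetical rules, for illustration only -- not OpenAI's actual filters.
# Deny rule meant to keep API responses out of the cache:
deny_cache = re.compile(r"/backend-api/conversations.*")  # "." never matches "\n"
# Cache rule that treats anything ending in ".css" as a static asset:
looks_static = re.compile(r"\.css$")

# The crafted path from the bypass, with %0A%0D decoded to "\n\r":
path = unquote("/backend-api/conversations%0A%0D-testtest.css")

denied = deny_cache.fullmatch(path) is not None  # False: ".*" stops at the "\n"
cached = looks_static.search(path) is not None   # True: the path still ends in ".css"
```

Under these assumed rules, the deny filter never matches the full path, so the API response slips past it and gets cached as if it were a stylesheet, which is exactly the condition web cache deception needs.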
Fathi reported the issue to OpenAI which quickly addressed it.
Unfortunately, OpenAI does not yet run a bug bounty program to reward researchers who report vulnerabilities in its chatbot.
On Friday, OpenAI revealed that the recent exposure of users’ personal information and chat titles in its chatbot service was caused by a bug in the Redis open-source library.
The company identified the bug and quickly addressed it.
“We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.” reads an update published by the company.
(SecurityAffairs – hacking, ChatGPT)