ChatGPT 0-Click Plugin Exploit Risked Leak Of Private GitHub Repos

Vulnerabilities in ChatGPT and several of its third-party plugins risked leakage of user conversations and other account contents, including a zero-click exploit that could give an attacker access to a victim’s private GitHub repositories.

The ChatGPT plugin vulnerabilities were discovered by Salt Labs, a part of Salt Security, which published its research in a blog post Wednesday. The problems were reported to OpenAI and plugin developers in July and September 2023, respectively, and have since been resolved, according to Salt Labs.

ChatGPT plugins, which are now known as “Actions” available through custom GPTs, represent the growing “generative AI ecosystem” that allows large language models (LLMs) like ChatGPT to access information outside of their training data.

These plugins allow ChatGPT to send and receive potentially sensitive data to and from other websites, including users’ private third-party accounts, creating a new potential attack vector.

“Our recent vulnerability discoveries within ChatGPT illustrate the importance of protecting the plugins within such technology to ensure that attackers cannot access critical business assets or execute account takeovers,” Salt Security Vice President of Research Yaniv Balmas said in a statement.

OAuth implementation flaws found in ChatGPT, multiple plugins

The three main vulnerabilities discovered by the Salt Labs team all involved faulty implementation of the Open Authorization (OAuth) standard. OAuth allows an application to access information from accounts on other websites without the user needing to directly provide their login credentials to the app.

Many ChatGPT plugins, or custom GPTs, use OAuth to allow ChatGPT to access data from the user’s accounts on third-party sites. Attempting to perform certain plugin actions will prompt the request and exchange of OAuth tokens between ChatGPT, the user and the third-party site.

When improperly implemented, OAuth tokens can be intercepted, redirected or otherwise misused to allow an attacker to access a victim’s account or connect a victim’s account to a malicious application.
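
For illustration, the minimal sketch below shows one standard safeguard against that kind of misuse: binding a single-use “state” value to the user’s session and rejecting any callback that does not return it. It assumes a Python/Flask client application; the route names, client_id and the auth.example.com authorization server are hypothetical, not details from Salt Labs’ research.

```python
# Minimal sketch of an OAuth client that validates the "state" parameter,
# assuming a Flask app; endpoints and client_id are illustrative only.
import secrets
from flask import Flask, abort, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

@app.route("/login")
def login():
    # Bind a single-use, unguessable state value to this user's session
    # before redirecting to the third-party authorization server.
    session["oauth_state"] = secrets.token_urlsafe(32)
    return redirect(
        "https://auth.example.com/authorize"
        f"?client_id=demo&state={session['oauth_state']}"
    )

@app.route("/callback")
def callback():
    # Reject any callback whose state does not match the one we issued;
    # this prevents an attacker from splicing their own authorization
    # code or token into a victim's flow.
    expected = session.pop("oauth_state", None)
    if expected is None or request.args.get("state") != expected:
        abort(403)
    code = request.args.get("code")
    # ...exchange `code` for a token server-side (omitted)...
    return "authenticated"
```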

“Increasingly, employees are entering proprietary data into AI tools — including intellectual property, financial data, business strategies and more — and unauthorized access by a malicious actor could be crippling for an organization,” Darren Guccione, CEO and co-founder of Keeper Security, told SC Media in an email.

One of the OAuth flaws discovered by Salt Labs was found in PluginLab, a framework used by many developers to create plugins for ChatGPT. The researchers found that several plugins developed using PluginLab were vulnerable to a zero-click exploit that would allow an attacker to access a victim’s account on sites used by the plugins, including GitHub.

The researchers demonstrated how the exploit could be used on the “AskTheCode” plugin, which allows the user to query their GitHub repositories. They discovered that an OAuth request sent from AskTheCode to PluginLab’s authorization page asked for a token based on the user’s “memberID”; because PluginLab did not authenticate these requests, an attacker could alter the request to insert any user’s memberID.

The memberID was found to be readily available to any attacker who already knew their target’s email address, as the ID was the SHA-1 hash of the user’s email address; a PluginLab API endpoint was also found to leak memberIDs when called with a request containing a user’s email.
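
To show how little protection that scheme offered, here is a minimal sketch of recomputing a memberID from a known email address. The lowercase-hex encoding shown is an assumption for illustration, and the function name is hypothetical.

```python
# Sketch of deriving a PluginLab memberID from a target's email address,
# per Salt Labs' finding that the ID was the SHA-1 hash of the email.
# The lowercasing and hex encoding here are illustrative assumptions.
import hashlib

def member_id_for(email: str) -> str:
    return hashlib.sha1(email.lower().encode("utf-8")).hexdigest()

print(member_id_for("victim@example.com"))
# Because PluginLab did not authenticate token requests, this value could
# be substituted for the attacker's own memberID in the OAuth request.
```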

Once the attacker obtained an OAuth token from a request containing their target’s memberID, they could forward this token to ChatGPT and use AskTheCode to access the victim’s GitHub repositories from the ChatGPT interface. This would include the ability to ask for a list of all private repositories and read specific files, potentially exposing proprietary code, private keys and other confidential information.
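
As a rough illustration of that blast radius, the snippet below lists a user’s private repositories through GitHub’s documented REST API using a bearer token. The token is a placeholder; this is not Salt Labs’ exploit code, simply what any holder of a valid OAuth token could do.

```python
# Sketch of what a leaked GitHub OAuth token exposes: the standard
# GET /user/repos endpoint returns the token owner's repositories,
# including private ones. TOKEN is a placeholder, not a real credential.
import requests

TOKEN = "<oauth-token>"  # placeholder for a leaked token

resp = requests.get(
    "https://api.github.com/user/repos",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"visibility": "private"},
)
resp.raise_for_status()
for repo in resp.json():
    print(repo["full_name"])
```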

The other two exploits described by Salt Labs would require a victim to click a link to expose their personal data. One, discovered in ChatGPT itself, involved an attacker creating their own plugin and having ChatGPT request an OAuth token from an attacker-controlled domain. When the OAuth token is generated, the user is redirected to an OpenAI link containing the OAuth code for authentication.

If an attacker sends this link to the victim, and the victim is logged into their OpenAI account, clicking the link would automatically install the attacker’s plugin to their account without any confirmation. The plugin could then potentially gather sensitive information from the victim’s ChatGPT conversations.

The third vulnerability was found in several ChatGPT plugins, including “Charts by Kesem AI,” which failed to validate the “redirect_uri” link an OAuth token is sent to. This allows the attacker to insert their own domain as the redirect_uri and send the altered authentication link to the target.

When the target clicks the link, an OAuth token for their own account (e.g., their Kesem AI account) is sent to the attacker, who can then use it to access the victim’s account contents through ChatGPT.
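
A minimal sketch of the missing check follows: the authorization endpoint should compare the requested redirect_uri against an exact-match allowlist registered for the client, rather than accepting arbitrary values. The URLs and function name here are hypothetical.

```python
# Sketch of strict redirect_uri validation: exact-match against the URIs
# pre-registered for a client. Prefix or substring checks are bypassable,
# which is the class of flaw Salt Labs found in the vulnerable plugins.
REGISTERED_REDIRECTS = {
    # Hypothetical callback pre-registered for a ChatGPT plugin.
    "https://chat.openai.com/aip/plugin-example/oauth/callback",
}

def is_allowed_redirect(redirect_uri: str) -> bool:
    return redirect_uri in REGISTERED_REDIRECTS

assert is_allowed_redirect(
    "https://chat.openai.com/aip/plugin-example/oauth/callback"
)
assert not is_allowed_redirect("https://attacker.example/steal")
```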

“These vulnerabilities underline the importance of scrutinizing third-party integrations, even within trusted platforms like ChatGPT. IT leaders should establish internal protocols for vetting plugins before allowing employee use,” Sarah Jones, cyber threat intelligence research analyst at Critical Start, told SC Media in an email.

Generative AI ecosystem creates new attack surface for cyber threats

While the ChatGPT plugin vulnerabilities were fixed shortly after their discovery, the rapid evolution and adoption of generative AI tools continues to present many emerging risks. Salt Labs says it plans to publish additional research about cyber risks its researchers have discovered in ChatGPT’s custom GPT marketplace, which replaces the older plugin system.

Threats leveraging generative AI are varied, ranging from the theft of sensitive LLM inputs, to prompt manipulations, to the mass generation of convincing phishing emails. Vulnerabilities in Google’s Gemini AI, also disclosed this week, could allow an attacker to obtain hidden system prompts or abuse Gemini Advanced extensions to manipulate users into inputting sensitive information, according to HiddenLayer.

Microsoft and OpenAI also revealed last month that ChatGPT was used by state-sponsored hackers from Russia, North Korea, Iran and China for tasks ranging from scripting help to target reconnaissance and vulnerability research.

“As organizations rush to leverage AI to gain a competitive edge and enhance operational efficiency, the pressure to quickly implement these solutions should not take precedence over security evaluations and employee training,” Guccione said.
