OpenAI traced a security problem to a compromised Axios developer library. The company says no user data or passwords were accessed.
What OpenAI found
OpenAI discovered that a widely used third-party JavaScript library called Axios was tampered with on March 31, and that the compromise triggered an automated workflow used by the company to build and sign macOS apps.
That workflow ran on GitHub Actions and, for a window of time, pulled in a malicious version of Axios, OpenAI said. The automated job had access to a signing certificate and notarization material used in the macOS app verification process — the credentials Apple checks to confirm an app is legitimate.
OpenAI emphasized that it found no evidence of user data being accessed, system integrity being compromised, or software being altered. The company added that passwords and OpenAI API keys were not affected, and that the underlying cause was a misconfiguration in the GitHub Actions workflow, which it has since repaired.
Immediate impact on apps and updates
The GitHub Actions workflow held signing material tied to several OpenAI macOS packages: ChatGPT Desktop, Codex, Codex-cli and Atlas. OpenAI's investigation concluded that the signing certificate present in the workflow was likely not exfiltrated by the malicious payload.
Still, the company said it's updating how it issues and checks certificates — and it will require macOS users to update to the latest OpenAI app releases. Effective May 8, older macOS versions of OpenAI's desktop apps will stop getting updates or support and may stop working.
OpenAI wants to prevent anyone from distributing fake apps that claim to be its signed software. Requiring fresh installs and tighter certificate controls is part of that effort.
How the attack worked
According to OpenAI's postmortem, the Axios compromise was part of a broader software supply chain attack that security researchers and the company say is linked to actors believed to be connected to North Korea. The malicious Axios release poisoned build processes that relied on the library.
The attacker code ran inside a GitHub Actions workflow. That workflow had permission to use signing material, so the risk wasn't just broken builds — it was the possibility that an attacker could produce binaries that looked legitimate to macOS and to users who check notarization.
OpenAI's analysis focused on whether the payload managed to exfiltrate the signing certificate. The company concluded it's likely the certificate wasn't taken. But the presence of the certificate inside a workflow that executed untrusted code was the root problem, OpenAI said.
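The mitigation implied by that root cause is to keep signing material out of any job that executes third-party code: the build step hands over only an artifact digest, and a separate, locked-down signing step touches the key. A minimal Python sketch of that separation — the function names are illustrative, and HMAC-SHA256 stands in for certificate-based macOS code signing, which this is not:

```python
import hashlib
import hmac

# Hypothetical stand-in for a code-signing key. In a real pipeline this would
# live in an HSM or secrets manager, never in the build job's environment.
SIGNING_KEY = b"example-key-held-only-by-the-signing-service"

def build_artifact(source: bytes) -> tuple[bytes, str]:
    """Untrusted build step: packages the artifact and computes its SHA-256
    digest. It never sees SIGNING_KEY."""
    artifact = source  # placeholder for a real compile/package step
    digest = hashlib.sha256(artifact).hexdigest()
    return artifact, digest

def sign_digest(digest: str) -> str:
    """Trusted signing step: signs only the digest handed over by the build."""
    return hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str) -> bool:
    """Verifier recomputes the digest and checks the signature against it."""
    digest = hashlib.sha256(artifact).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

artifact, digest = build_artifact(b"app binary bytes")
signature = sign_digest(digest)
print(verify(artifact, signature))               # True for the untampered artifact
print(verify(artifact + b"malware", signature))  # False once the artifact changes
```

Under this split, a compromised dependency running inside the build step can corrupt the artifact, but it cannot mint a valid signature for it — the failure shows up at verification instead of shipping as a signed binary.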
Remediation and user guidance
OpenAI is rotating signing certificates and tightening its certificate issuance process so that build systems no longer hold long-lived access to notarization materials. It is also forcing app updates for macOS users to minimize the chance of someone distributing a counterfeit app.
Users should watch for app update prompts and install the newest versions before May 8, OpenAI said. After that date, older builds might stop receiving updates or may not function correctly.
Those steps follow a basic rule: don't run code in build or CI jobs that has unfettered access to signing keys. OpenAI said it has fixed the GitHub Actions misconfiguration that exposed that access.
Where this sits in a broader trend
Software supply chain attacks have been on the rise: attackers target libraries, package managers or build systems because breaking one shared tool touches many downstream projects. The Axios compromise is another example of that method.
Companies that rely on public libraries have to balance speed with safety. Automated workflows speed development, but they can give attackers a place to hide. OpenAI's incident follows other high-profile cases where malicious code slid into commonly used packages and then spread into multiple projects.
The bigger issue isn't just one compromised package. It's how many teams treat CI/CD and third-party dependencies as routine and then forget to lock down the parts that sign or publish releases. When signing keys sit in build systems that also fetch external code, the door opens.
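One standard way to close that door is integrity pinning: record a cryptographic hash for each dependency — as npm's `package-lock.json` does — and refuse to build if a fetched package no longer matches. A hedged Python sketch of the idea; the pinned hash and package bytes here are invented for illustration, not real Axios releases:

```python
import hashlib

# Hypothetical lockfile: package name -> pinned SHA-256 of its known-good contents.
PINNED = {
    "axios": hashlib.sha256(b"known-good axios release").hexdigest(),
}

def check_integrity(name: str, fetched: bytes) -> bool:
    """Reject a fetched dependency whose hash differs from the pinned value."""
    return hashlib.sha256(fetched).hexdigest() == PINNED[name]

print(check_integrity("axios", b"known-good axios release"))  # True
print(check_integrity("axios", b"tampered axios release"))    # False
```

Pinning doesn't prevent a poisoned release from entering the registry, but it does stop a build from silently absorbing one: the swap fails the hash check instead of running inside CI.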
Attribution and unanswered questions
OpenAI said the Axios compromise appears to be part of a broader campaign thought to be run by actors linked to North Korea. The firm didn't name who made that attribution; it relied on details uncovered during internal analysis and on the pattern seen in related incidents industrywide.
Security researchers will likely probe whether the malicious Axios payload attempted any lateral moves beyond the GitHub Actions environment. OpenAI's public statement stops short of saying whether the payload tried to contact external command-and-control infrastructure, or whether any other internal developer workflows were touched.
That leaves some practical questions for other companies: how many teams keep signing material in workflows; how many use ephemeral secrets versus long-lived keys; and how quickly organizations rotate keys after a supply-chain alert.
Reporting and context
The initial reporting on the incident was compiled from OpenAI's disclosure and industry sources. Reuters' coverage was credited to Rhea Rose Abraham in Bengaluru, with editing by William Mallard. CNBC and other outlets published summaries of OpenAI's technical findings and the action plan for macOS users.
OpenAI framed the incident as contained: it said internal systems, intellectual property and software weren't compromised. But it also signaled that it is treating the episode as a close call, not an incident to be shrugged off.
Supply chain security has become more than just an operational issue. It's a product and risk issue. Any company shipping signed apps should assume someone will try to exploit build systems and plan accordingly.
What to watch next
Security teams will be watching whether other projects that used Axios saw similar breaches and whether GitHub Actions workflows across firms had the same misconfiguration risk. OpenAI's choice to force macOS updates and rotate certificates sets a timeline others might copy.
For users, the concrete step is simple: update macOS apps from OpenAI to their latest versions before May 8, and prefer app downloads from official sources.
That said, the broader fix will take time: developers must audit build systems, reduce secrets in shared workflows, and adopt ephemeral signing processes. The industry will be debating trade-offs between convenience and security long after this single Axios incident fades from the headlines.