OpenAI follows Anthropic’s lead in limited release of GPT-5.4‑Cyber

OpenAI has introduced GPT-5.4-Cyber, a new AI model willing to accept seemingly malicious commands in the name of cybersecurity. Fortunately, ChatGPT's developer isn't letting just anyone play with its less restricted, free-wheeling AI.
Announced in a blog post on Tuesday, GPT-5.4-Cyber is a variant of OpenAI's publicly available GPT-5.4 large language model. According to OpenAI, its frontier AI models such as GPT-5.4 have safeguards against obvious malicious use, causing them to reject dangerous user requests such as stealing information or finding exploitable vulnerabilities in code. In contrast, the company's new GPT-5.4-Cyber model is trained to be more permissive and may comply with such requests instead.
Describing GPT-5.4-Cyber as "cyber-permissive," OpenAI says the change is meant to allow AI to be used for cybersecurity work, such as helping researchers identify vulnerabilities that need to be addressed.
"We want to empower defenders by providing broad access to frontier capabilities, including models designed for cybersecurity," OpenAI wrote. "This is the version of GPT-5.4 that lowers the threshold for refusing legitimate cybersecurity work and enables new capabilities for enhanced defense workflows."
Given the potential risk posed by GPT-5.4-Cyber's lowered defenses, not everyone will be able to jump in and push the limits of the more flexible AI. OpenAI says it's starting with "limited, iterative deployments to vetted security vendors, organizations, and researchers." As such, only members of its Trusted Cyber Access (TAC) program will be granted access to GPT-5.4-Cyber at this time, and only those at its highest tier.
Launched in February, TAC is a network of users who have passed OpenAI's automated identity verification process, including a government ID check. Once approved, users in OpenAI's TAC program are granted access to versions of its AI models with fewer safeguards, such as GPT-5.4-Cyber. OpenAI says this is intended to enable cybersecurity research, education, and programming.
However, not all TAC-verified users will get their hands on GPT-5.4-Cyber right away. OpenAI says users who aren't already in TAC's higher tiers can request access, which will require them to go through further verification to confirm they are "legitimate defenders of the Internet."
The unveiling of GPT-5.4-Cyber comes just one week after OpenAI competitor Anthropic announced Project Glasswing. Like TAC, Project Glasswing is an initiative that limits access to Anthropic's cybersecurity-focused Claude Mythos Preview AI model to select authorized organizations. Claiming that Claude Mythos Preview "has already discovered thousands of very powerful vulnerabilities," Anthropic said Project Glasswing is an effort to ensure that its AI model is used only for cybersecurity purposes.
“Given the rate of AI development, it won’t be long before these capabilities expand, possibly beyond actors who are committed to their safe use,” Anthropic wrote.
Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.