How OpenAI’s red team made ChatGPT agent into an AI fortress
Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now Called the “ChatGPT agent,” this new feature is an optional mode that ChatGPT paying subscribers can engage by clicking “Tools” in the prompt entry box and selecting “agent mode,” at which point, they can ask ChatGPT to log into their email and other web accounts; write and respond to emails; download, modify, and create files; and do a host of other tasks on their behalf, autonomously, much like a real person using a computer with their login credentials. Obviously, this also requires the user to trust the ChatGPT agent not to do anything problematic or nefarious, or to leak their data and sensitive information. It also poses greater risks for a user and their employer than the regular ChatGPT, which can’t log into web accounts or modify files directly. Keren Gu, a member of the Safety Research team at OpenAI, commented on X that “we’ve activated our strongest safeguards for ChatGPT Agent. It’s the first model we’ve classified as High capability in biology & chemistry under our Preparedness Framework. Here’s why that matters–and what we’re doing to keep it safe.” The AI Impact Series Returns to San Francisco – August 5 The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation. Secure your spot now – space is limited: https://bit.ly/3GuuPLF So how did OpenAI handle all these security issues? The red team’s mission Looking at OpenAI’s ChatGPT agent system card, the “read team” employed by the company to test the feature faced a challenging mission: specifically, 16 PhD security researchers who were given 40 hours to test it out. Through systematic testing, the red team discovered seven universal exploits that could compromise the system, revealing critical vulnerabilities in how AI agents handle real-world interactions. What followed next was extensive security testing, much of it predicated on red teaming. The Red Teaming Network submitted 110 attacks, from prompt injections to biological information extraction attempts. Sixteen exceeded internal risk thresholds. Each finding gave OpenAI engineers the insights they needed to get fixes written and deployed before launch. The results speak for themselves in the published results in the system card. ChatGPT Agent emerged with significant security improvements, including 95% performance against visual browser irrelevant instruction attacks and robust biological and chemical safeguards. Red teams exposed seven universal exploits OpenAI’s Red Teaming Network was comprised 16 researchers with biosafety-relevant PhDs who topgether submitted 110 attack attempts during the testing period. Sixteen exceeded internal risk thresholds, revealing fundamental vulnerabilities in how AI agents handle real-world interactions. But the real breakthrough came from UK AISI’s unprecedented access to ChatGPT Agent’s internal reasoning chains and policy text. Admittedly that’s intelligence regular attackers would never possess. Over four testing rounds, UK AISI forced OpenAI to execute seven universal exploits that had the potential to compromise any conversation: Attack vectors that forced OpenAI’s hand Attack Type Success Rate (Pre-Fix) Target Impact Visual Browser Hidden Instructions 33% Web pages Active data exfiltration Google Drive Connector Exploitation Not disclosed Cloud documents Forced document leaks Multi-Step Chain Attacks Variable Cross-site actions Complete session compromise Biological Information Extraction 16 submissions exceeded thresholds Dangerous knowledge Potential weaponization FAR.AI’s assessment was openly critical of OpenAI’s approach. Despite 40 hours of testing revealing only three partial vulnerabilities, they identified that current safety mechanisms relied heavily on monitoring during reasoning and tool-use processes, which the researchers considered a potential single point of failure if compromised. How red teaming helped turn ChatGPT vulnerabilities into a fortress OpenAI’s response to the red team’s results redefined entire segments of the ChatGPT agent architecture. One of the many initiatives undertaken including building a dual-layer inspection architecture that monitors 100% of production traffic in real-time, achieving these measurable improvements: Security improvements after red team discoveries Defense Metric Previous Models ChatGPT Agent Improvement Irrelevant Instructions (Visual Browser) 82% 95% +13% In-Context Data Exfiltration 75% 78% +3% Active Data Exfiltration 58% 67% +9% System Reliability Sampling-based 100% coverage Complete monitoring The architecture works like this: First Tier: A fast classifier with 96% recall flags suspicious content Second Tier: A reasoning model with 84% recall analyzes flagged interactions for actual threats But the technical defenses tell only part of the story. OpenAI made difficult security choices that acknowledge some AI operations require significant restrictions for safe autonomous execution. Based on the vulnerabilities discovered, OpenAI implemented the following countermeasures across their model: Watch Mode Activation: When ChatGPT Agent accesses sensitive contexts like banking or email accounts, the system freezes all activity if users navigate away. This is in direct response to data exfiltration attempts discovered during testing. Memory Features Disabled: Despite being a core functionality, memory is completely disabled at launch to prevent the incremental data leaking attacks red teamers demonstrated. Terminal Restrictions: Network access limited to GET requests only, blocking the command execution vulnerabilities researchers exploited. Rapid Remediation Protocol: A new system that patches vulnerabilities within hours of discovery—developed after red teamers showed how quickly exploits could spread. During pre-launch testing alone, this system identified and resolved 16 critical vulnerabilities that red teamers had discovered. A biological risk wake-up call Red teamers revealed the potential that the ChatGPT Agent could be comprimnised and lead to greater biological risks. Sixteen experienced participants from the Red Teaming Network, each with biosafety-relevant PhDs, attempted to extract dangerous biological information. Their submissions revealed the model could synthesize published literature on modifying and creating biological threats. In response to the red teamers’ findings, OpenAI classified ChatGPT Agent as “High capability” for biological and chemical risks, not because they found definitive evidence of weaponization potential, but as a precautionary measure based on red team findings. This triggered: Always-on safety classifiers scanning 100% of traffic A topical classifier achieving 96% recall for biology-related content A reasoning monitor
How OpenAI’s red team made ChatGPT agent into an AI fortress Read More »