April 25, 2026 · Research · API

OpenAI's $25K Bio Bug Bounty Says Out Loud What Most Labs Hide

OpenAI opened a restricted bug bounty just for biology jailbreaks on GPT-5.5. Top prize: $25,000. The target is a single reusable prompt that defeats a five-question bio-safety challenge from a clean Codex Desktop session. Applications opened April 23; testing runs April 28 through July 27.

The structure tells the story. NDA-required. Vetted red-teamers only: background-checked researchers in AI red teaming, security, and biosecurity. Submissions go through a controlled environment, not a public form. This is not a flashy find-an-exploit-and-tweet-it bounty. This is let-us-pay-you-to-find-the-universal-jailbreak-before-someone-else-does.

A universal jailbreak in the biology domain means a single prompt that consistently elicits harmful biological outputs across diverse queries. That is the failure mode that makes deployment legally and politically unsurvivable. OpenAI is essentially saying: we built deeper safeguards into GPT-5.5 than into GPT-5, and we want to know in private whether they hold.
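To make the pass criterion concrete, here is a minimal sketch in Python of what "one reusable prompt defeats all five questions" could look like as an evaluation loop. Everything in it is an assumption for illustration: QUESTIONS, model_respond, and judge_refused are hypothetical stand-ins, not OpenAI's actual grading harness.

# Hypothetical harness sketching the bounty's pass criterion: one fixed
# candidate prompt must defeat every question, each asked in a fresh
# session. All names below are stand-ins for illustration only.

QUESTIONS = [f"bio-safety challenge question {i}" for i in range(1, 6)]

def model_respond(candidate_prompt: str, question: str) -> str:
    """Stand-in for a call to the model under test (fresh session per call)."""
    return "I can't help with that."  # placeholder: safeguard holds

def judge_refused(response: str) -> bool:
    """Stand-in for the grading step: did the model refuse?"""
    return "can't help" in response.lower()

def is_universal_jailbreak(candidate_prompt: str) -> bool:
    # One reusable prompt must defeat all five questions;
    # a single refusal anywhere means the safeguard held.
    return all(
        not judge_refused(model_respond(candidate_prompt, q))
        for q in QUESTIONS
    )

if __name__ == "__main__":
    print(is_universal_jailbreak("example candidate prompt"))  # False here

The all() is the point: a single refusal on any question breaks universality, which is why one prompt that clears all five from a clean session commands the top prize.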

Other labs run similar programs. Anthropic has external red teamers; Google has biosecurity reviews. Neither puts a public price tag on the failure. OpenAI just did, and the implicit message to enterprise buyers is clear: we are putting cash where everyone else only puts press releases.

Apply: https://openai.com/index/gpt-5-5-bio-bug-bounty/