April 19, 2026
tech power

OpenAI's Drug Discovery AI Is Closed-Access. That Is the Whole Strategy.


What happened

OpenAI announced GPT-Rosalind, a frontier reasoning model tuned for biochemistry, genomics, protein engineering, and drug discovery workflows. Named after Rosalind Franklin, the model accesses over 50 scientific databases and tools via a new Life Sciences plugin for Codex. Benchmarks show it outperforms GPT-5 on chemistry and bioinformatics tasks; in collaboration with Dyno Therapeutics, its RNA sequence predictions ranked in the 95th percentile against human experts. Access is restricted to qualified US enterprise customers through a Trusted Access Program. OpenAI is not releasing weights, internal reasoning traces, or detailed error analyses. It describes GPT-Rosalind as the first in a planned series of life sciences models, entering a market where Anthropic offers Claude for Life Sciences and Google DeepMind's AlphaFold dominates protein folding.

OpenAI is not trying to accelerate drug discovery for the world; it is trying to become the infrastructure layer that pharmaceutical companies depend on, locking in the highest-value sector of scientific research before any competitor does.

The Hidden Bet

1. Benchmark performance on BixBench and LABBench translates into real reductions in drug development timelines.

The benchmarks measure isolated research tasks. Drug development failures mostly happen in clinical trials, which AI models cannot currently predict reliably because the failure modes stem from biological complexity, not information synthesis. A model that generates better hypotheses does not prevent the roughly 90% of drug candidates that fail in clinical trials.

2. Closed access to a research AI model is a commercial necessity that researchers will accept.

Academic and independent researchers depend on reproducibility and peer inspection of methods. A model whose reasoning steps are opaque and whose weights are proprietary cannot be peer-reviewed in any meaningful sense. Regulatory agencies that approve drugs based on AI-assisted discovery will eventually demand an auditability that OpenAI is not currently providing.

3. The 'tuned for skepticism' design feature meaningfully reduces hallucination in scientific contexts.

OpenAI itself advises users to treat outputs as preliminary and validate independently. In a drug discovery context, 'validate independently' means running wet lab experiments that take months and cost millions. If the model confidently generates wrong hypotheses that pass initial human screening, the validation cost of finding the error is enormous.

The Real Disagreement

The genuine fork is between two models of what AI in science should do. The first: AI should be open, auditable, and integrated into the existing peer-review and reproducibility infrastructure of science. The second: AI should be a black-box accelerator that generates outputs faster than traditional methods, with validation remaining on the researcher. These require incompatible access policies, different regulatory frameworks, and different liability structures. OpenAI is betting heavily on the second model. Google DeepMind's AlphaFold went the other direction: open weights, published methodology, peer-reviewed validation. AlphaFold is credited with one of the most significant scientific advances in decades. It is not obvious that the closed-access model will generate comparable scientific trust or regulatory acceptance. Lean toward the closed model winning commercially in the short term, but facing a reckoning when the first high-profile drug failure is traced to an AI-generated hypothesis that nobody could inspect.

What No One Is Saying

GPT-Rosalind is restricted to 'qualified US enterprise customers.' That means publicly funded academic researchers, who produce the majority of the foundational science that drug discovery depends on, are currently excluded. OpenAI is trying to privatize access to AI-accelerated science in a field where most of the upstream knowledge was generated with public funding.

Who Pays

Academic and independent researchers globally

Immediate; Trusted Access Program is already limited to US enterprise customers.

Closed-access models mean that researchers at underfunded institutions, in non-US countries, or outside enterprise pharma cannot access the tools. The gap between resource-rich and resource-poor science institutions widens.

Patients waiting for drugs that AI overpromises

Medium-term; 5-10 years as AI-selected candidates enter clinical trials.

If benchmark improvements lead to overconfident drug candidate selection, failures happen later in the pipeline (Phase II, Phase III), at much greater cost and time. The pipeline fills with AI-selected candidates that benchmarks validated but biology rejected.

Anthropic and Google DeepMind

Ongoing; the race to lock in enterprise contracts is happening now.

First-mover enterprise contracts in pharma are sticky. If Amgen and Moderna build internal workflows around GPT-Rosalind, switching costs make it difficult to adopt competitors' models even if those prove superior.

Scenarios

OpenAI captures pharma infrastructure

The first cohort of GPT-Rosalind enterprise customers integrate the model into their discovery workflows. A publicly announced drug candidate emerges from the pipeline citing Rosalind assistance. OpenAI announces Rosalind 2 with clinical trial prediction capabilities.

Signal: Amgen or Moderna announces a drug candidate that entered IND filing with GPT-Rosalind cited as a primary discovery tool.

Regulatory pushback on opaque AI

FDA issues guidance requiring auditability of AI tools used in drug discovery workflows. Closed-access models without explainable reasoning fail to meet the standard. OpenAI must choose between opening the model or exiting the regulatory approval pathway.

Signal: FDA publishes a draft guidance document specifically addressing AI-assisted drug candidate selection and requiring logging of model reasoning steps.

Open-source competitor captures academic market

A well-funded open-source life sciences model (from a consortium, a European initiative, or a DeepMind open release) matches GPT-Rosalind's benchmarks and gains adoption at academic institutions. The enterprise market splits: pharma uses Rosalind, academia uses open models. The two research ecosystems diverge.

Signal: A peer-reviewed Nature or Science paper using an open-source life sciences AI model reports drug discovery results comparable to GPT-Rosalind benchmarks.

What Would Change This

If GPT-Rosalind opened its weights and reasoning logs to peer inspection, the scientific community's resistance to closed-access AI would substantially diminish. Alternatively, if a drug discovered with GPT-Rosalind's assistance passes Phase III clinical trials, it would become the strongest possible argument that the closed-access model is worth its costs. Neither has happened yet.

Sources

Heise Online — Technical detail: BixBench score 0.751, beats GPT-5 on biochemistry and protein biochemistry benchmarks; access limited to 'qualified US enterprise customers' through Trusted Access Program; weights not released.
AIxploria — First partner list: Amgen, Moderna, Allen Institute, Thermo Fisher Scientific. Drug development bottleneck framing: 10-15 year timelines, 1-in-10 clinical survival rate. OpenAI positions Rosalind as first in a series of life sciences models.
AI Daily Post — Competitive landscape: Anthropic offers Claude for Life Sciences; Google DeepMind has AlphaFold for protein folding. OpenAI differentiates by targeting full research workflows rather than single tasks.
1News New Zealand / AP — OpenAI's broader pivot to enterprise as a path to profitability amid Anthropic competition; GPT-Rosalind is part of a segment-specific model strategy alongside GPT-5.4-Cyber for security.
