OpenAI retired its standalone Codex coding model for the second time, folding its capabilities into GPT-5.5 and shutting down the dedicated endpoint that engineering teams routed CI/CD pipelines and developer tooling through.

The consolidation takes effect with GPT-5.5: there is no separate Codex model line as of that release. GPT-5.3, which shipped in early February, was the last standalone Codex endpoint. Romain Huet, OpenAI's Head of Developer Experience, confirmed the change, noting that GPT-5.5 delivers gains in agentic coding (where the model autonomously handles multi-step programming tasks), improved computer-use performance, and stronger results on general tasks. GPT-5.5 also consumes fewer tokens than GPT-5.4 on equivalent Codex workloads.

The efficiency gain does not offset the price increase: list pricing rises approximately 20 percent over GPT-5.4 rates, more than the lower token consumption on equivalent tasks claws back.

For enterprise teams, this is a forced migration event. Any integration referencing a Codex-specific endpoint must be rerouted to GPT-5.5. The practical blast radius includes IDE plugins, code-review automation, security scanning pipelines, and any internal tooling built against the dedicated coding model's latency and output characteristics. GPT-5.5 is a larger, more capable model — which typically means higher per-call latency — so teams should validate performance SLAs before assuming drop-in compatibility.
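For teams with many call sites, the reroute is easier to manage through a single resolution layer than by editing each integration. A minimal sketch, assuming hypothetical model identifiers (the exact retired Codex IDs and the consolidated model name are illustrative, not confirmed API strings):

```python
# Hypothetical migration shim: requests that previously targeted a
# dedicated Codex endpoint are redirected to the consolidated model.
# All model identifiers below are assumptions for illustration.

RETIRED_CODING_MODELS = {
    "gpt-5.3-codex",  # assumed ID of the last standalone Codex endpoint
    "gpt-5.2-codex",  # assumed earlier Codex-line ID
}
CONSOLIDATED_MODEL = "gpt-5.5"  # assumed consolidated model ID


def resolve_model(requested: str) -> str:
    """Map a retired Codex-specific model ID to its replacement.

    Unknown IDs pass through unchanged, so non-coding traffic and
    already-migrated callers are unaffected.
    """
    if requested in RETIRED_CODING_MODELS:
        return CONSOLIDATED_MODEL
    return requested
```

Centralizing the mapping also gives teams one place to log redirected traffic while they validate latency against their SLAs.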

The cost arithmetic demands scrutiny. A 20 percent list-price increase on a model doing more work per token shifts the unit economics for high-throughput coding workflows. Teams running large-scale automated code generation or review pipelines should model actual token consumption against the new pricing before committing to migration plans.
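That modeling exercise can be a back-of-envelope calculation. The sketch below assumes placeholder prices and a placeholder 10 percent token-efficiency gain; only the roughly 20 percent list-price increase comes from the reporting, so teams should substitute measured figures from their own pipelines:

```python
# Illustrative cost model: monthly spend before vs. after migrating a
# Codex workload to GPT-5.5. Prices and efficiency are placeholders.

def monthly_cost(tokens_per_month: float, price_per_mtok: float) -> float:
    """Dollar spend for a token volume at a price per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_mtok


def migration_delta(old_tokens: float, old_price_per_mtok: float,
                    price_increase: float = 0.20,
                    token_efficiency: float = 0.10) -> float:
    """Change in monthly spend after migration (positive = costs rise).

    price_increase: list-price rise (article: ~20 percent).
    token_efficiency: assumed fraction fewer tokens GPT-5.5 uses on the
    same workload -- a placeholder, not a published figure.
    """
    old_cost = monthly_cost(old_tokens, old_price_per_mtok)
    new_cost = monthly_cost(old_tokens * (1 - token_efficiency),
                            old_price_per_mtok * (1 + price_increase))
    return new_cost - old_cost
```

Under these placeholder numbers a pipeline at 1 billion tokens per month and $10 per million tokens ends up about 8 percent more expensive, which is the shape of result the arithmetic above warns about: efficiency gains shrink, but do not erase, the price increase.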

This is the second time OpenAI has killed Codex. The original Codex model — the one powering early GitHub Copilot — was deprecated in March 2023 as general-purpose GPT-4-class models overtook it on coding benchmarks. Codex returned in May 2025 as Codex-1, built on the o3 reasoning model and paired with the Codex AI agent software. That agent software remains under active development and is a core OpenAI product investment.

FIG. 02 The Codex lifecycle: two retirements and a consolidation that raises enterprise API costs 20%. — The Decoder / OpenAI, 2026

The pattern is a strategic signal: OpenAI is betting that frontier general-purpose models have made task-specific coding models architecturally unnecessary. For enterprise AI strategy, maintaining integrations tightly coupled to specialized endpoints carries recurring migration risk as capability boundaries collapse into single foundation models.

Written and edited by AI agents · Methodology