OpenAI Codex for (Almost) Everything: Release Guide
OpenAI's April 16 Codex update — background Computer Use, in-app browser, gpt-image-1.5, and 90+ plugins pushing Codex beyond coding to full workflow control.
Key Takeaways
"Codex for almost everything" is the most literal product name OpenAI has ever shipped — and the most telling. This update pushes Codex past coding into a full desktop control surface that competes with Claude Code, Jules, AND your actual operating system. For the roughly 3 million developers already using Codex weekly, the April 16 release is less an incremental upgrade and more a repositioning of what the product is.
The additions stack up quickly: background Computer Use that operates your machine with its own cursor, an in-app browser you can comment on directly to instruct agents, gpt-image-1.5 image generation inside workflows, a plugin library of more than 90 skills, integrations, and MCP servers, and cross-session automations that wake the agent up across days or weeks. This guide walks through every piece of the release, the rollout reality, how Codex now compares to Claude Code and Jules, and the deployment patterns agencies should pilot first.
Release snapshot: Codex update launched April 16, 2026, rolling out to ChatGPT-signed Codex desktop users. Background Computer Use is macOS-first; EU and UK roll out later. Memory preview is gated to Enterprise, Education, EU, and UK on a later wave. See the official OpenAI announcement.
What Shipped on April 16, 2026
OpenAI framed this release around the idea that Codex should handle almost every kind of developer-adjacent work rather than narrowly coding. Underneath the marketing line, the update bundles seven distinct capability additions, each of which could have been its own product post.
- Background Computer Use: Codex operates your machine alongside you with its own cursor, running multiple agents in parallel.
- In-app browser: Codex hosts pages it is working on and accepts comment annotations as agent instructions.
- gpt-image-1.5 inside workflows: Image generation is callable inside a Codex task, not just as a separate chat action.
- 90+ plugin library: Skills, app integrations, and MCP servers unified in one catalog.
- Cross-session automations: Re-use existing threads, schedule future work, wake up across days and weeks.
- Memory preview: Remembers useful context, preferences, and corrections; rolling out to Enterprise, Education, EU, and UK later.
- Proactive suggestions: Scans projects, plugins, and memory to propose prioritized tasks.
The baseline reach is worth keeping in mind when reading this: about 3 million developers already use Codex every week. That user base started on the web product, expanded to the macOS desktop app on February 2, and grew again when the Windows desktop app shipped on March 4. This April update is aimed at that installed base first rather than at a brand-new user.
Who Gets It and When
The headline features are rolling out today to ChatGPT-signed Codex desktop users, but with one major caveat: background Computer Use launches on macOS, with the EU and UK arriving later. Windows users keep the rest of the desktop functionality but do not receive Computer Use in this update. Memory preview follows a similar gradual wave for Enterprise, Education, EU, and UK, which agencies running cross-border accounts should note for planning.
Background Computer Use in Detail
Background Computer Use is the headline feature, and it is deliberately different in design philosophy from the "take over the screen" computer control features competitors have shipped. Codex runs with its own cursor and its own context rather than interrupting your active work. Multiple Codex agents can run in parallel, each doing their own tasks, without stealing focus or fighting each other for the mouse.
The practical effect is that a developer can kick off a Codex task that needs to interact with a desktop app, then keep working in their IDE or browser while the agent operates in the background. For agencies used to computer-use agents that freeze the user out for the duration of a task, this is a material shift in how the tool fits into a working day.
Planning a Codex rollout for your team? Mapping agents to real workflows is harder than flipping a feature flag. Explore our AI Digital Transformation service to plan tooling, permissions, and guardrails.
How It Differs from Claude and Gemini Computer Use
Claude's computer-use tooling and Gemini's browser control features both lean on the model driving the user's own screen. OpenAI's design choice here is parallel background operation: several agents can hold their own cursors and work simultaneously. That unlocks workflows like running one agent through a QA pass while another fills out a CRM while a third follows up on support tickets, with the developer continuing their own work unaffected.
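The parallel-agent pattern described above can be sketched with ordinary concurrent execution. This is a conceptual illustration only, not the Codex API: the agent functions and their arguments are hypothetical stand-ins, and the point is simply that each task runs with its own context and none blocks the others.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for background agent tasks. Real Codex agents
# drive their own cursors; here each "agent" is just a function returning
# a result, to show the no-shared-focus execution model.
def qa_pass(target):
    return f"QA complete: {target}"

def crm_update(record):
    return f"CRM updated: {record}"

def ticket_followup(ticket):
    return f"Ticket handled: {ticket}"

# Each task runs in parallel; the caller (the developer's own work)
# is never interrupted while the tasks proceed.
with ThreadPoolExecutor() as pool:
    futures = [
        pool.submit(qa_pass, "staging.example.com"),
        pool.submit(crm_update, "ACME-1042"),
        pool.submit(ticket_followup, "SUP-331"),
    ]
    results = [f.result() for f in futures]
```

The design choice this models is isolation: the three tasks share no cursor and no focus, which is exactly the property that distinguishes background Computer Use from take-over-the-screen tools.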
For a head-to-head on where each vendor lands after April 16, see our Computer Use agents 2026 matrix.
Platform and Region Constraints
- macOS: Launch platform for Computer Use.
- Windows: Desktop app shipped March 4 but does not yet receive Computer Use.
- EU and UK: Computer Use rolls out later, not at launch.
- ChatGPT sign-in required: Features are rolling out to ChatGPT-signed Codex desktop users.
In-App Browser: Frontend + Game Dev First
Codex now ships with its own hosted browser where agents render the pages, apps, and experiences they are working on. The novel part is not the browser itself but what you can do with it: comment directly on the rendered page to give the agent instructions. Instead of writing "move the CTA button up and increase the heading weight," you can annotate the button and say "move this up" and annotate the heading and say "bolder." The agent consumes those comments as instructions.
OpenAI highlights frontend development and game development as the earliest strong use cases. Both workflows involve tight visual iteration loops where pointing at the screen is faster than describing a problem in text. The comment-based instruction pattern collapses a round-trip: you see the problem, you annotate it, and the agent fixes it, all without leaving the browser.
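The comment-as-instruction pattern amounts to attaching a terse note to a page element and expanding it into an explicit agent instruction. The annotation shape below is a hypothetical sketch (OpenAI has not documented the payload), but it shows why pointing beats prose: the selector carries the "where" so the comment only has to carry the "what."

```python
# Hypothetical shape for comment annotations on a rendered page.
annotations = [
    {"selector": ".cta-button", "comment": "move this up"},
    {"selector": "h1.hero", "comment": "bolder"},
]

def annotations_to_instructions(annotations):
    """Expand terse on-page comments into explicit agent instructions."""
    return [
        f"On element '{a['selector']}': {a['comment']}"
        for a in annotations
    ]

instructions = annotations_to_instructions(annotations)
```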
Implications for Client Work
For agency teams shipping dashboards, marketing sites, and interactive experiences, this changes the QA loop materially. Designers and PMs who previously had to translate visual feedback into prose can now annotate directly, and the agent carries the instruction straight to code. The pattern also compresses stakeholder review on early-stage builds, where the gap between "the brief said it should feel fast" and "the page feels slow here" has historically burned cycles.
If you are exploring how agents can plug into your delivery flow end-to-end, our web development practice has been integrating Codex-style agents into client sprints since the March Windows release.
gpt-image-1.5 Inside Codex Workflows
The third headline capability is image generation inside Codex workflows, powered by gpt-image-1.5. Previously, a Codex agent working on a product, frontend, or game project would stop at the point where visual assets were needed and hand off to either a designer or a separate tool. With gpt-image-1.5 available inside the workflow, an agent can generate product concepts, frontend design explorations, mockups, and game art as part of a longer task.
Instead of handing off to design for a placeholder mockup, a Codex agent scoping a new product surface can generate concept imagery to anchor reviewer conversations and downstream implementation.
Frontend tasks often hit a wall when the prompt has to describe a visual target. gpt-image-1.5 gives the agent a way to render the target image itself and write code that matches it.
Mid-task mockups produced on the fly shorten the review cycle with clients and internal stakeholders, especially for explorations that would otherwise not be worth cutting a design ticket for.
For game dev and interactive prototypes, the agent can generate stand-in art alongside the code, so prototypes look like the thing they represent rather than like gray boxes.
The creative quality is not the main point here. The main point is that image generation used to be a context switch; now it is a step. Agencies that have been stitching ChatGPT Image + Codex manually can retire that glue.
The 90+ Plugin Library
Codex now ships with a catalog of more than 90 plugins covering three previously separate categories: skills, app integrations, and MCP servers. Consolidating them into a single library matters because plugin selection used to be one of the friction points in agent rollouts. Deciding whether a capability belonged in a skill, in an integration, or in an MCP server took time that could otherwise have been spent on the actual work.
Launch Partners Worth Calling Out
- Atlassian Rovo (JIRA): Native JIRA context and actions inside Codex tasks.
- CircleCI: CI pipelines as a first-class integration, with jobs and status readable by agents.
- CodeRabbit: Review-quality context and automated PR discussion.
- GitLab Issues: Parity with the existing GitHub surface for teams not on GitHub.
- Microsoft Suite: Office, Teams, and Outlook integration for communication and docs.
- Neon by Databricks: Postgres-as-a-service integration for schema and data work.
- Remotion: Programmatic video generation inside Codex tasks.
- Render: Managed deployments without leaving Codex.
- Superpowers: Workflow automation primitives that compose with the rest of the catalog.
Agency Codex deployments previously had to choose between building custom skills, depending on MCP servers maintained by partners, or wiring up direct app integrations. With one catalog, you can pick the combination per workflow instead of per agent, which simplifies training and permissions management at team scale.
Developer Workflow Additions
Alongside the headline capabilities, the release bundles several developer workflow improvements that look incremental on their own but compound across a working day. These are the details that separate a demo-friendly product from something a team actually lives inside.
Review comments on a pull request can be fed directly to Codex, so a reviewer's feedback becomes the agent's next task list without a manual copy-paste step.
Run the dev server in one tab, logs in another, and an agent session in a third, all inside the same Codex workspace rather than scattered across OS windows.
Alpha support for connecting to remote devboxes over SSH brings remote environments into the Codex envelope. Teams using cloud devboxes no longer have to choose between the Codex UX and the remote environment.
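Codex's SSH alpha presumably rides on standard OpenSSH configuration, so a devbox entry in `~/.ssh/config` is the natural prerequisite. The host alias, hostname, and key path below are generic placeholders, nothing Codex-specific:

```
# ~/.ssh/config entry for a cloud devbox (hostnames are placeholders).
Host devbox
    HostName devbox.internal.example.com
    User dev
    IdentityFile ~/.ssh/id_ed25519
    # Keep long-running agent sessions alive across idle periods.
    ServerAliveInterval 60
    ServerAliveCountMax 3
```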
Rich file previews cover PDFs, spreadsheets, slides, and documents, so agents and developers can inspect attachments without breaking context into a separate viewer.
Summary Pane for Agent Plans
A new summary pane consolidates an agent's plans, sources, and artifacts in one view. Instead of scrolling through a long conversation to reconstruct what the agent did and why, reviewers can scan the pane to verify the plan, the sources it consulted, and the artifacts it produced. For agencies, this is a meaningful quality-control surface for anything that ships to clients.
Cross-Session Automations and Memory Preview
The third structural shift in this release, after Computer Use and plugins, is persistence. Codex can now re-enter existing conversation threads, schedule future work, and wake itself up across days or weeks. Combined with a memory preview that remembers useful context, preferences, and corrections, it crosses the line from "session-scoped assistant" into "persistent collaborator."
What Automations Actually Do
- Re-enter existing threads: Pick up a conversation from last Tuesday with full context rather than starting fresh.
- Schedule future work: Tell Codex to do something on Friday, or every Monday, and it will.
- Multi-day wake-ups: The agent wakes itself up across days or weeks to complete long-running tasks.
- Typical use cases: Landing open PRs, following up on tasks, tracking activity in Slack, Gmail, and Notion.
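The "every Monday" scheduling in the list above is a small recurrence computation. Codex presumably handles this internally; the sketch below is for teams wrapping their own scheduling around agent kickoffs, using only the standard library.

```python
from datetime import date, timedelta

def next_weekday(start: date, weekday: int) -> date:
    """Next occurrence of `weekday` strictly after `start`.
    Monday is 0, Sunday is 6, matching date.weekday()."""
    days = (weekday - start.weekday()) % 7
    return start + timedelta(days=days or 7)

# From Wednesday, April 15, 2026, the next Monday run lands on April 20.
next_run = next_weekday(date(2026, 4, 15), 0)
```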
Memory rollout note: The memory preview is rolling out to Enterprise, Education, EU, and UK later, not at launch. Teams with strict data-residency or compliance requirements should confirm availability before rolling Codex memory into client workflows.
Practical Agency Examples
The workflows that light up here are the ones where "an assistant that forgets every Friday" was the bottleneck. Campaign retros that pull data across two weeks of Slack, dependency update sweeps that need to remember which libraries you deliberately held back, client status reports that need to track PRs and tasks across multiple repos — these all become single prompts with a schedule instead of recurring chores.
For teams wiring Codex into their CRM and operations stack, our CRM automation practice maps these patterns onto common agency operations.
Proactive Suggestions: Context-Aware Work Queues
Proactive suggestions is the piece of the release that is easiest to under-read and arguably the most behaviorally significant. Instead of waiting for a prompt, Codex scans your projects, plugins, and memory and proposes prioritized tasks. OpenAI's example was open Google Docs comments that are waiting on a reply — Codex flags them and offers to handle them.
- Project scan: Active repos, task trackers, and recent conversations.
- Plugin scan: Signals from connected integrations (e.g. open JIRA tickets, failing CI, waiting reviews).
- Memory scan: What you previously cared about, what corrections you have made, what preferences are on file.
- Output: A prioritized list of suggested tasks, with enough context attached to accept or reject each one in a single click.
For agencies, this is less an AI productivity feature and more a team ops feature. A project lead can start their morning by triaging Codex's suggestion queue rather than manually combing Slack, email, JIRA, and GitHub for what needs attention. The quality of the suggestions will depend heavily on which plugins are connected and how much useful memory Codex has accumulated, so early teams should expect a ramp period before the queue becomes genuinely useful.
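The triage-queue idea can be made concrete with a toy ranking over plugin signals. The signal kinds, weights, and scoring below are purely illustrative assumptions, not OpenAI's actual ranking, but they show the shape of the problem: multiple signal sources, one prioritized list.

```python
# Hypothetical weights for a proactive-suggestion queue.
WEIGHTS = {"failing_ci": 3.0, "waiting_review": 2.0, "open_comment": 1.0}

def prioritize(signals):
    """Rank plugin signals by weighted score, highest first."""
    return sorted(
        signals,
        key=lambda s: WEIGHTS.get(s["kind"], 0.5) * s.get("age_days", 1),
        reverse=True,
    )

queue = prioritize([
    {"kind": "open_comment", "task": "Reply in Google Doc", "age_days": 2},
    {"kind": "failing_ci", "task": "Fix main pipeline", "age_days": 1},
    {"kind": "waiting_review", "task": "Review the open PR", "age_days": 1},
])
```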
Rollout Reality: macOS-First, EU/UK Later
Every release post glosses over the rollout constraints, and this one is no exception. The honest version is that the big-ticket items do not arrive uniformly, and agencies planning to commit to Codex for client work need to read the fine print.
| Capability | macOS | Windows | EU / UK |
|---|---|---|---|
| Background Computer Use | Today | Not yet | Later |
| In-App Browser | Today | Today | Today |
| gpt-image-1.5 in Workflows | Today | Today | Today |
| 90+ Plugin Library | Today | Today | Today |
| Cross-Session Automations | Today | Today | Today |
| Memory Preview | Rolling out | Rolling out | Later wave |
| Proactive Suggestions | Today | Today | Today |
| SSH Remote Devboxes | Alpha | Alpha | Alpha |
Planning note: Rollout details above reflect the April 16, 2026 announcement. Timelines for EU/UK, Windows Computer Use, and memory preview are OpenAI-controlled and likely to move. Verify current availability before committing a client workflow to any gated capability.
For context on the Windows side of the story, our Codex Windows desktop guide covers the March 4 release and its sandboxing model in detail.
Codex vs Claude Code vs Jules After April 16
The April 16 update materially shifts the coding-agent landscape. Codex has absorbed capabilities that previously belonged to other products, specifically desktop control (from Claude's computer use) and visual iteration (from dedicated design tools). But Claude Code and Jules still hold clear ground on reasoning quality and asynchronous GitHub work respectively.
| Dimension | Codex (Apr 16) | Claude Code | Jules |
|---|---|---|---|
| Desktop Computer Use | Background, parallel agents | Single-cursor, foreground | Not a focus |
| Deep Reasoning / Review | Strong | Category-leading | Solid |
| Async GitHub Jobs | Good, now with PR review comments | Good | Category-leading |
| Visual Iteration Loop | In-app browser with comments | Text-first | Text-first |
| Image Generation In Workflow | Yes (gpt-image-1.5) | Via external tools | Via external tools |
| Plugin / Integration Catalog | 90+ unified | MCP ecosystem | Google-adjacent |
| Cross-Session Persistence | Automations + memory preview | Project memory | Job-scoped |
For a fuller head-to-head across benchmarks and workflows, see our Claude Code vs Codex vs Jules Q2 2026 matrix, or the companion pieces on Jules and the broader GPT-5.3 Codex release.
Agency Deployment Patterns for This Update
The temptation with a release of this size is to roll it out horizontally. The better approach is to pilot one or two high-leverage workflows that this update specifically unlocks, then expand once the team is comfortable with the new agent surface area. Four patterns are worth starting with.
1. Dashboard Build Sprints with In-App Browser
Pick a single dashboard or interface build and run the review loop entirely through the in-app browser, with designers and PMs leaving comments directly on rendered pages. Measure cycle time from feedback to fix against your current process. This is the workflow where the comment-on-page pattern pays off fastest.
2. Background QA Passes via Computer Use
Run a Codex agent in the background to click through a staging site, fill forms, and report issues while the developer keeps working in their IDE. The parallel-cursor design means this can happen without stopping foreground work, which is where Computer Use differentiates from earlier computer-control tools.
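The reporting half of such a QA pass is straightforward to pin down even though the browsing half belongs to the agent. The (url, status) results below are illustrative stand-ins for what a background pass would collect; the rollup shows the kind of summary worth asking the agent to produce.

```python
def summarize_qa(results):
    """Roll (url, status) pairs from a QA pass into one summary dict."""
    failures = [(url, status) for url, status in results if status >= 400]
    return {
        "checked": len(results),
        "failed": len(failures),
        "failures": failures,
    }

# Illustrative results; a real pass would come from the agent
# clicking through the staging site.
report = summarize_qa([
    ("/", 200),
    ("/pricing", 200),
    ("/signup", 500),
])
```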
3. Automated Weekly Status Reports
Use automations to have Codex wake up every Friday, scan the past week's PRs, Slack channels, and Notion pages for a client, and produce a status report. Over time, memory preview will improve this by remembering which signals matter to each client and which you explicitly do not care about.
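The core of that Friday report is a date-window filter over the week's activity. The PR records below are illustrative (a real report would pull them through the GitHub plugin); only the filtering logic is shown.

```python
from datetime import date, timedelta

# Illustrative merged-PR records for one client's repos.
prs = [
    {"title": "Add billing page", "merged": date(2026, 4, 14)},
    {"title": "Fix login redirect", "merged": date(2026, 4, 10)},
    {"title": "Bump deps", "merged": date(2026, 4, 3)},
]

def past_week(prs, today):
    """Titles of PRs merged in the seven days up to `today`."""
    cutoff = today - timedelta(days=7)
    return [p["title"] for p in prs if p["merged"] >= cutoff]

weekly = past_week(prs, date(2026, 4, 16))
```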
4. Unified Plugin Stack for Operations
Consolidate your JIRA, CircleCI, CodeRabbit, and GitHub context onto the Codex plugin catalog and use proactive suggestions as a morning triage queue. This is less about generating code and more about using Codex as the signal-routing layer across the tools the team already uses.
For teams ready to go deeper on how to deploy coding agents at agency scale, our enterprise coding agent deployment playbook covers permissions, training, and rollout phasing in detail.
Conclusion
The April 16 Codex update earns its "for almost everything" name. Background Computer Use, an in-app browser with comment-based instructions, gpt-image-1.5 inside workflows, a unified 90+ plugin catalog, cross-session automations, memory preview, and proactive suggestions collectively turn Codex into a desktop control surface rather than a coding assistant. Codex's 3 million weekly developers are now using a materially different product from the one they signed up for.
The caveats matter too. Background Computer Use is macOS-first, Windows and EU/UK users get a subset at launch, and memory preview is on a gradual rollout. Claude Code and Jules still hold clear ground on reasoning depth and asynchronous GitHub work. The right move for agencies is a one- or two-workflow pilot focused on the capabilities this release uniquely unlocks, not a full toolchain replacement.
Ready to Put Codex to Work?
Whether you're evaluating Codex's new Computer Use, migrating a team onto the unified plugin catalog, or designing agent workflows across Slack, JIRA, and GitHub, we can help you scope a pilot and roll it out cleanly.