OpenAI's Codex CLI was rewritten from TypeScript to Rust over 2025–2026 (the Rust build now ships as the maintained Codex CLI at versions 0.128–0.131), and the team conventions around it have shifted with the rewrite. This playbook walks the configuration changes a team typically needs to plan and verify — config.toml layout, authentication, profiles, and sandbox modes — when moving off the legacy TypeScript CLI onto today's build.
Two notes on framing up front. OpenAI does not version this as a formal "v1 → v2" cut-over: the Rust implementation simply became "the maintained Codex CLI" in the open-source repo and the prior client is referred to as "the legacy TypeScript CLI." And the official Migrate to Codex flow is for moving from a different agent (Claude Code, Cursor) into Codex, not for in-place CLI upgrades. The "v1 → v2" shorthand in this post is editorial: it's how teams talk about the practical config delta, not a published OpenAI release line.
With that framing settled, this playbook walks the conventional changes a team handles in practice: how the config.toml layout has settled out, how authentication works (ChatGPT OAuth and API keys, plus the access-token pattern for trusted CI), how profiles compose under [profiles.NAME], the three sandbox modes, a phased rollout pattern, and the four common failures with their diagnostic signals.
- 01 — config.toml is where most of the configuration work lives. Codex loads ~/.codex/config.toml plus per-project .codex/config.toml layers (closest wins, trusted projects only). Most teams need a one-time sweep across every checked-in config to consolidate keys and remove anything specific to the legacy TypeScript CLI.
- 02 — Pick the right auth path for each environment. Codex supports two sign-in paths — ChatGPT OAuth (recommended for developer laptops) and OpenAI API keys (recommended for CI/CD and programmatic workflows). Trusted CI runners can also keep auth.json refreshed across jobs; API keys are still the recommended automation default.
- 03 — Per-profile settings unlock dev / CI / prod separation. The profile API revamp lets one config file describe many environments. Use named profiles deliberately rather than overloading environment variables — fewer foot-guns and the activation is explicit.
- 04 — Sandbox modes are read-only, workspace-write, danger-full-access. Codex exposes three documented sandbox modes plus matching CLI flags (--sandbox workspace-write, --ask-for-approval on-request). Set sandbox_mode and approval_policy at the top level of config.toml; per-profile overrides go under [profiles.NAME].
- 05 — Keep a last-known-good config branch through cut-over. OpenAI does not publish a fixed rollback window. Hold a last-known-good config branch and pin the legacy TypeScript binary through each wave so a problem mid-migration is a one-command revert rather than a forward-only firefight.
01 — What's New. v2 ships in four axes — config, auth, profiles, sandbox.
None of these are framed by OpenAI as breaking-change axes — the Rust CLI is the maintained Codex CLI and the documentation simply describes its configuration surface (see Config basics and Advanced config). What teams experience as a migration is the practical work of settling on a single config.toml shape, picking sign-in paths per environment, designing named profiles, and choosing sandbox defaults deliberately.
The non-mechanical parts are the auth path (where CI workflows need their own pattern — typically OpenAI API keys, sometimes the cached-credential pattern on trusted runners) and the profile design (where teams need to think about which environments deserve their own profile versus sharing one).
config.toml layout
flat keys · [profiles.NAME] tables. Codex reads ~/.codex/config.toml plus trusted project .codex/config.toml layers. Top-level keys like model, sandbox_mode, and approval_policy live flat; profiles compose under [profiles.NAME]; MCP servers under [mcp_servers.NAME]. Most teams need a one-time sweep to consolidate.
Where most teams spend their time
Auth paths
ChatGPT OAuth · OpenAI API key · cached CI auth. Codex supports Sign in with ChatGPT (OAuth) and Sign in with an API key. API keys are the recommended default for CI/CD; trusted private runners can also keep auth.json refreshed across jobs (the advanced CI/CD-auth pattern). One auth path per environment, not three modes in one config.
Two sign-in paths, one cached login
Profile setup
per-profile model · sandbox_mode · approval_policy. Named profiles under [profiles.NAME] each carry their own model, sandbox_mode, approval_policy, and other overrides. Activation is explicit via the --profile flag; setting profile = "name" at the top of config.toml makes it the default. Profiles are documented as experimental in the Codex docs.
Deliberate environment separation
Sandbox modes
three documented modes. Three sandbox modes are documented: read-only, workspace-write (default low-friction local), and danger-full-access. Matching CLI flags are --sandbox <mode> and --ask-for-approval <policy>. In workspace-write mode Codex protects .git and .codex from edits by default.
Three modes, mode-specific defaults
The fourth surface — profile design — is the one that benefits from human thinking, because the right profile layout for your team isn't something a tool can infer. Most teams that bother with profiles land on three (dev, ci, prod) and stop there; a minority with multiple production surfaces add a fourth or fifth deliberately.
Recent releases at time of writing: 0.128, 0.129, and 0.130, shipped in May 2026. What teams experience as a migration is the practical move from the legacy TypeScript CLI to the Rust implementation that is now the maintained Codex CLI. Treat the "v1 → v2" phrasing in this post as editorial shorthand for that practical delta.
02 — config.toml. The actual config.toml shape teams settle on.
Codex reads ~/.codex/config.toml for user-level defaults and overlays trusted per-project .codex/config.toml layers (closest wins). The schema is documented in Config basics and Config reference — the layout is flat top-level keys (model, sandbox_mode, approval_policy) with a handful of structured tables ([profiles.NAME], [mcp_servers.NAME], AWS Bedrock-style nested settings).
What teams typically rework when moving off the legacy TypeScript CLI: consolidate any ad-hoc keys into the documented surface, move MCP server configuration into [mcp_servers.NAME] tables, and decide whether named profiles are worth the maintenance burden for the team's shape of work.
# ~/.codex/config.toml — minimal, real schema
model = "gpt-5.5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
# MCP servers are configured per-server under [mcp_servers.NAME]
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]
# Named profiles (experimental) — switch with --profile <name>
[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"
[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"
# Make a profile the default at the top level
# profile = "deep-review"
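To make the layering rule concrete, a hypothetical per-project overlay checked into a trusted repo might look like the fragment below — the keys are from the documented schema, but the values and the repo itself are illustrative:

```toml
# <repo>/.codex/config.toml — hypothetical project-level overlay
# (trusted projects only; the closest file wins over ~/.codex/config.toml)

# Tighten defaults for this repo without touching user-level config:
sandbox_mode = "read-only"
approval_policy = "untrusted"
```

Because closest wins, a developer whose user-level default is workspace-write still gets read-only behaviour inside this repo — which is exactly the property that makes per-project overlays reviewable in a PR.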
Two callouts on the schema. First, profiles are explicitly documented as experimental and not currently supported in the Codex IDE extension — design around that constraint. Second, OpenAI does not ship a codex migrate codemod that rewrites configs in place; the practical sweep is a search-and-replace job per repo.
model — top-level default
Top-level model = '...' sets the default Codex model for the CLI and IDE. Codex documentation shows examples like gpt-5.5 and gpt-5-pro; check the Models page for the current default and supported list.
Top-level key
sandbox_mode — workspace-write default
sandbox_mode is a flat top-level key. Three documented values: read-only, workspace-write (default low-friction local), danger-full-access. Matching CLI flag is --sandbox <mode>. Section 05 covers the per-mode defaults around writable roots and network.
Pick per environment
Auth — managed via sign-in, not config keys
Codex does not read API keys or tokens from config.toml directly. Sign in with ChatGPT (OAuth) or with an API key from the OpenAI platform; the CLI caches login state in auth.json. For CI/CD, OpenAI recommends API keys; the auth/ci-cd-auth doc covers the advanced cached-credential pattern for trusted runners.
Pick sign-in per env
approval_policy — on-request, untrusted, never
approval_policy is a flat top-level key. Common values: untrusted, on-request (default low-friction), never. The matching CLI flag is --ask-for-approval <policy>. Override per-profile under [profiles.NAME] when dev / CI / prod want different defaults.
Pick per environment
Practical sweep pattern: list every config file the team has checked in (developer defaults, CI runners, per-package configs in monorepos), reconcile each one against the documented schema, remove any keys not present in the Configuration reference, and commit the sweep in its own PR so reviewers can audit the mechanical changes separately from any deliberate edits.
On managed machines, organizations can enforce constraints via requirements.toml (for example, disallowing approval_policy = "never" or sandbox_mode = "danger-full-access"). If your team operates under managed configuration, check the requirements file before designing profiles — settings the requirements layer blocks won't take effect even if you set them locally.
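As a sketch of the intent only — the key names below are hypothetical, not the documented requirements schema, so check your organization's managed-configuration docs before relying on them — an enforcement layer blocking the dangerous combinations might look like:

```toml
# requirements.toml — hypothetical managed-machine constraints.
# Key names are illustrative; the intent is that locally-set values
# outside these allow-lists are rejected by the requirements layer.
allowed_sandbox_modes = ["read-only", "workspace-write"]   # no danger-full-access
allowed_approval_policies = ["untrusted", "on-request"]    # no "never"
```

The practical takeaway is unchanged either way: design profiles inside whatever envelope the requirements layer permits, not against it.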
03 — Auth Surface. ChatGPT OAuth, API keys, and the cached CI/CD pattern.
Codex CLI documents two sign-in paths in the official Authentication overview: Sign in with ChatGPT (OAuth) and Sign in with an API key. Pick the path that matches the environment: ChatGPT OAuth is the recommended default for developer laptops (it unlocks ChatGPT-plan features like fast mode), and OpenAI API keys are the recommended default for programmatic Codex CLI workflows including CI/CD jobs.
Codex caches login details after the first sign-in (whether ChatGPT or API key), and the CLI and IDE extension share the same cached state. A logout from one invalidates the other. For trusted private CI/CD runners, OpenAI documents a more advanced pattern (Maintain Codex account auth in CI/CD) that lets Codex refresh auth.json during normal runs and keep the updated file for the next job — useful when you specifically need ChatGPT-plan features in automation. API keys remain the default recommendation.
Sign in with ChatGPT — developer laptops
Codex runs a browser OAuth flow on first use and caches the result. Recommended for developer laptops; required to unlock ChatGPT-plan features like fast mode. The right path for any environment where a human is present at first use.
Pick for laptops
Sign in with an OpenAI API key
Use an API key from the OpenAI platform dashboard. Recommended for programmatic Codex CLI workflows, including CI/CD jobs. Billing follows your API organization at standard API rates; ChatGPT-plan features such as fast mode are not available with API-key sign-in.
Pick for CI/CD
Cached auth.json on trusted private runners
For trusted private CI/CD runners that specifically need ChatGPT-plan features, the advanced CI/CD-auth pattern lets Codex refresh auth.json during runs and persist it for the next job. Use only on private, trusted runners — see the OpenAI docs link above for the full setup.
Advanced — private runners only
For teams currently running Codex in CI with whichever pattern felt convenient, the practical migration is conservative: standardise on OpenAI API keys for unattended jobs, keep them in your existing secrets manager (GitHub Actions secrets, GitLab CI variables, AWS Secrets Manager), and reserve the cached-credential pattern for the narrow case where ChatGPT-plan features in CI are worth the extra runner-side hardening. Don't expose Codex execution in untrusted or public environments.
Permission separation between dev / CI / prod still comes from the surrounding environment — secrets managers, repository access, network egress — rather than from Codex auth itself. Codex's job is to authenticate the agent; your platform's job is to scope what that agent can do.
"For Codex in CI, API keys are the default for a reason — they fit cleanly into existing secrets-manager workflows. Reach for cached-credential patterns only when ChatGPT-plan features in CI are worth the extra runner-side hardening."— Internal note, Digital Applied agentic engineering team
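A minimal sketch of the unattended pattern, assuming a GitHub Actions runner and a repository secret named OPENAI_API_KEY — the workflow shape, secret name, and prompt are all illustrative, and flag behaviour should be verified against the current Codex CLI docs:

```yaml
# .github/workflows/codex-triage.yml — hypothetical unattended Codex job
name: codex-triage
on: [workflow_dispatch]

jobs:
  triage:
    runs-on: ubuntu-latest
    env:
      # API key injected from the repo's secrets manager, never committed
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g @openai/codex   # pin an exact version in real pipelines
      - run: codex exec --profile ci "summarise the failing tests in TRIAGE.md"
```

With API-key sign-in, billing follows the API organization at standard rates, which is usually what a CI budget owner wants anyway.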
04 — Profile API. Per-profile model, sandbox, approval settings.
Codex profiles let you save named sets of configuration values and switch between them from the CLI. They're documented as experimental and are not currently supported in the Codex IDE extension — design around that constraint before adopting them broadly. The mechanics are simple: define profiles under [profiles.NAME] in config.toml, then run codex --profile NAME. The judgement call is which environments deserve their own profile, and which can share.
Three profiles is a common landing point — dev, ci, prod — and most teams should start there before adding more. Profiles aren't free: every profile is a configuration surface that someone has to maintain, and the surface compounds with the number of repos that consume it. The right question is "does this environment have meaningfully different requirements" rather than "could we make a profile for this?".
# ~/.codex/config.toml with three profiles
# (profiles are experimental; see Advanced Config)
model = "gpt-5.5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
# Pick a profile to make default at the top level (optional):
# profile = "dev"
[profiles.dev]
model = "gpt-5.5"
sandbox_mode = "workspace-write"
approval_policy = "on-request"
[profiles.ci]
model = "gpt-5.5"
sandbox_mode = "workspace-write"
approval_policy = "untrusted"
[profiles.prod]
model = "gpt-5.5"
sandbox_mode = "read-only"
approval_policy = "untrusted"
Activation is explicit via the --profile NAME flag. To make a profile the default without typing the flag every time, add profile = "NAME" at the top level of config.toml; Codex loads that profile unless you override it on the command line. There is no documented CODEX_PROFILE environment variable in the Codex docs as of May 2026 — rely on the --profile flag or the top-level default.
Developer laptops
sandbox_mode = 'workspace-write', approval_policy = 'on-request' for a low-friction local default. Sign in with ChatGPT (OAuth) at the user level. The most permissive profile — appropriate because a human is at the keyboard.
ChatGPT OAuth
CI workers
sandbox_mode = 'workspace-write' with approval_policy = 'untrusted' for unattended runs. OpenAI API key for auth, supplied via the runner's secrets manager. The right balance of capability and containment for CI.
API key auth
Production agents
sandbox_mode = 'read-only' with approval_policy = 'untrusted' for production agents that only need to read code or write to a tightly scoped output channel. API-key auth, managed by your platform's secrets layer.
API key auth
One practical rule for designing profiles: name them after environments rather than people or teams — dev, ci, prod ages better than alice, frontend-team, migration-project. Codex profile values override the top-level settings; rely on that precedence to keep shared defaults at the top level and put only the deltas inside each [profiles.NAME] table.
For teams currently using environment variables to switch behaviour, the migration is straightforward: create profiles for each environment, move the variable-driven settings into the corresponding profile, and replace the variable-switching shell wrapper with an explicit codex --profile NAME call (or set a top-level profile = "..." default). The result is a config surface that's easier to read, easier to review, and harder to misconfigure silently.
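For the transition window, one low-risk pattern is a thin wrapper that translates the legacy environment variable into the explicit flag. The CODEX_ENV variable and the wrapper are hypothetical — shown only to illustrate the mapping — and the sketch deliberately echoes the command rather than running codex, so call sites can be audited first:

```shell
# Hypothetical transitional wrapper: maps a legacy CODEX_ENV variable onto an
# explicit --profile flag. It echoes the command it would run instead of
# executing codex, so the remaining env-var call sites are easy to spot.
codex_with_profile() {
  local profile="${CODEX_ENV:-dev}"   # legacy variable; default to the dev profile
  echo "codex --profile ${profile} $*"
}

codex_with_profile exec "run tests"               # -> codex --profile dev exec run tests
CODEX_ENV=ci codex_with_profile exec "run tests"  # -> codex --profile ci exec run tests
```

Once every call site passes --profile explicitly (or a top-level profile default is set), delete the wrapper — keeping it long-term recreates the implicit switching the migration was meant to remove.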
05 — Sandbox Flags. The three documented sandbox modes and their defaults.
Codex documents three sandbox modes (Sandboxing): read-only, workspace-write (the default low-friction mode for local work), and danger-full-access (no filesystem or network boundary; reserve for narrow, deliberate use). The matching CLI flags are --sandbox <mode> and --ask-for-approval <policy>; the config keys are top-level sandbox_mode and approval_policy.
The three documented Codex sandbox modes
Source: OpenAI Codex docs — Sandboxing concept page (May 2026). Bar widths illustrate relative permissiveness, not measured usage.
One default to know in detail. In workspace-write mode, some environments keep .git/ and .codex/ read-only by default — which is why commands like git commit may still require approval to run outside the sandbox. If you want Codex to gate specific commands (for example, block git commit outside the sandbox), use Codex rules: rules let you allow, prompt, or forbid command prefixes outside the sandbox, which is often a better fit than broadly expanding access.
If Codex needs to write across more than one directory, the documented escape hatch is sandbox_workspace_write.writable_roots (see the Configuration reference) — extend the writable paths rather than relaxing the entire sandbox boundary. There is no documented "read-only-with-tmp" mode in the current docs; if a workload needs writes to a temporary directory, configure it via writable roots or a separate profile.
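A sketch of that escape hatch, using the sandbox_workspace_write table the Configuration reference documents — the paths themselves are illustrative:

```toml
# Keep the workspace-write boundary, but add extra writable roots
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
# Extra directories Codex may write under, beyond the workspace itself
writable_roots = ["/tmp/codex-scratch", "/var/cache/codex-build"]
```

This keeps the sandbox's deny-by-default stance intact: everything outside the workspace and the listed roots still requires approval.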
read-only — inspect only
Codex can inspect files but can't edit or run commands without approval. The right mode for review / triage agents, code-reading workflows, and any environment where you want zero filesystem mutation.
Pick for read-only agents
workspace-write — default local mode
Codex can read files, edit within the workspace, and run routine local commands inside the boundary. The default low-friction mode for local development. Pair with approval_policy = 'on-request' for the documented local-automation preset.
Default for laptops
danger-full-access — no boundary
Codex runs without sandbox restrictions — filesystem and network are both open. Combine with approval_policy = 'never' only when you want Codex to act with full access; on managed machines this combination is commonly blocked via requirements.toml.
Reserve for narrow cases
Approvals reviewer (orthogonal to mode)
approvals_reviewer = 'user' (default) surfaces approval prompts to the developer; 'auto_review' routes eligible prompts to a reviewer agent. The sandbox boundary doesn't change — only who answers the approval prompt does.
Orthogonal control
Practical pattern: pick workspace-write + on-request for everyday local work, read-only for review and triage profiles, and reserve danger-full-access for narrow, deliberate cases where you genuinely want no sandbox. Use Codex rules for surgical allow/prompt/forbid behaviour rather than broadly relaxing the sandbox mode.
06 — Phased Rollout. Pilot → wave 1 → wave 2 → cut over.
The phased rollout pattern below is what we recommend for any team with more than a handful of repos using Codex CLI. Big-bang migrations are tempting because they're conceptually simple, but they concentrate every failure mode into one window and leave no room to learn. A four-phase rollout spreads the risk across two to three weeks, lets each wave inform the next, and keeps a working legacy fallback through the entire process.
Pilot · one repo
1 repo · 1 team · 3 days. Pick a low-traffic repo with a small team. Move the config to the documented schema, set up named profiles if useful, switch the CI workflow to API-key auth. Document every issue. The pilot's job is to surface the unknowns before they cost a wave.
Goal: find the gotchas
Wave 1 · ~30% of repos
non-production · 5–7 days. Apply the pilot's lessons to the next batch — typically internal tools, documentation repos, and other non-critical surfaces. Two engineers shepherd the wave; one owns the config sweep, one owns CI workflow updates.
Goal: prove the pattern
Wave 2 · production
production repos · 5–7 days. Migrate the production repos. Keep the legacy CLI installable and the prior config branch alive. Communicate the cut-over to dependent teams a week in advance — agentic pipelines breaking unexpectedly is the avoidable outage.
Goal: ship the value
Cut over · retire legacy
post-cutover cleanup · 1–2 days. Once the last wave has been stable for a business day or two, remove the legacy CLI from CI images, revoke any tokens that were only used by the legacy client, and confirm every repo loads cleanly against the documented Rust CLI schema. The migration is done.
Goal: leave it clean
Two practical operating rules for the phased rollout. First, keep the mechanical sweep and the human edits in separate commits — a reviewer reading the PR should be able to see the schema-only changes at a glance and focus their attention on the deliberate ones. Second, every wave should produce a short retrospective note (what worked, what didn't, what changed for the next wave) — the second and third waves are cheaper precisely because the pilot's lessons compound.
One temptation to resist: combining the migration with other related changes (a model upgrade, a sandbox tightening, a profile reorganisation) into one cut-over. Each of those changes is worth doing on its own merits but bundling them into the migration window makes diagnosis harder when something breaks. Ship the schema move first, prove it's stable for a week, then make the other changes as deliberate follow-up PRs.
07 — Common Pitfalls. Four upgrade failures with diagnostic signals.
The migration failures below are the ones we see most often across teams — each has a clear diagnostic signal that points at the cause. None are catastrophic if caught early; all are painful if caught late.
Most common Codex CLI migration failures · relative frequency
Source: Digital Applied migration support, May 2026 (illustrative ordering across 23 incidents, not a quantitative benchmark).
Failure 01 — Unknown / legacy key still in config. The most common failure by a wide margin. A key that worked under the legacy TypeScript CLI (or was added years ago for in-house tooling) survives the sweep because nobody noticed it isn't in the documented Rust CLI schema. The diagnostic signal is a startup error or warning mentioning an unknown / unrecognised key. The fix is to reconcile against the Configuration reference — remove obsolete keys, restructure to the documented shape, and file a feature request if a capability you depended on isn't documented.
Failure 02 — CI workflow auth not standardised. The team migrates configs in a sweep but leaves CI workflows on a mix of patterns inherited from earlier setups. Nothing breaks immediately, but the team loses the simplicity benefit of picking a single auth path per environment. The diagnostic signal is an audit on day seven that finds two or three different sign-in patterns across CI workflows. Standardise on OpenAI API keys for unattended jobs unless you have a documented reason to use the cached-credential pattern.
Failure 03 — Sandbox / approval default surprise. A workload that relied on assumptions about the legacy CLI breaks because the documented Rust defaults are sandbox_mode = "workspace-write" with approval_policy = "on-request". The diagnostic signal is a workload that asks for approval when it used to run silently, or a step that fails to write outside the workspace boundary. The fix is to set the appropriate sandbox_mode / approval_policy per profile, or extend writable roots — don't reach for danger-full-access just to make the warning go away.
Failure 04 — Profile activation missing. A team sets up named profiles but a wrapper script or CI workflow forgets to pass --profile NAME, and there's no top-level profile = "NAME" default in config.toml. Codex falls back to the top-level settings, and the wrong defaults apply silently. The diagnostic signal is a Codex run that succeeds but produces output inconsistent with the intended profile. Always grep your CI workflows for the --profile activation pattern after the migration; the explicit-activation discipline is what makes profiles trustworthy.
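That day-seven audit can be a few lines of shell. The workflow paths below are illustrative, and the two sample files exist only to make the sketch self-contained:

```shell
# Hypothetical audit: flag CI workflow files that invoke codex without an
# explicit --profile activation. The sample files stand in for a real repo.
mkdir -p .github/workflows
printf 'run: codex exec "triage flaky tests"\n' > .github/workflows/triage.yml
printf 'run: codex --profile ci exec "run tests"\n' > .github/workflows/tests.yml

for f in .github/workflows/*.yml; do
  # flag files that mention codex but never pass --profile
  if grep -q 'codex' "$f" && ! grep -q -- '--profile' "$f"; then
    echo "missing --profile: $f"
  fi
done
# prints: missing --profile: .github/workflows/triage.yml
```

A clean run of this check (plus a top-level profile default as a backstop) is what turns "we set up profiles" into "profiles are actually in effect everywhere".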
For broader context on the Codex ecosystem and how this migration sits inside it, our Codex test-generation pipeline tutorial walks the canonical CI-side pattern that benefits most from v2's headless auth, and the Claude Code custom subagent tutorial covers the parallel pattern in the Anthropic ecosystem — the architectural shape carries across vendors. Teams running CLI migrations across many surfaces should read our AI digital transformation engagements for the longer-form playbook on coordinated rollouts.
CLI migrations are predictable when phased — pilot, wave, cut over, retire.
The practical Codex CLI migration — legacy TypeScript client to the Rust implementation now shipped as the maintained Codex CLI — isn't a mysterious project. A documented config schema, two clear sign-in paths, three sandbox modes, and a phased rollout pattern that spreads the risk across two to three weeks. Done well, it's a focused multi-day project per repo; done ad-hoc, it eats a sprint and produces config sprawl that haunts the team for quarters.
The payback is worth naming: a single documented config surface instead of an ad-hoc collection of keys, environment separation via named profiles where teams actually need it, and a sandbox model that's legible enough to reason about in security review. Teams that pick one sign-in path per environment and one sandbox mode per profile — and resist the urge to invent new keys — find the post-migration config surface meaningfully smaller and harder to misconfigure.
The broader pattern is the one to keep. Treat every CLI migration as a phased rollout, not a big-bang cut-over. Separate the mechanical schema sweep from human edits. Preserve a last-known-good config branch and keep the prior binary installable. Audit on day seven for migrations that look done but quietly aren't. The same shape applies to every CLI bump you'll do in the next two years — Codex, Claude Code, Gemini, whatever ships next — and the team that internalises it once stops dreading the upgrade cycle for good.