Configuration Playbook

Codex CLI

Overview

Configure Codex CLI for OpenAI-powered terminal code generation and planning. For broader feature details, see the official Codex CLI documentation.

Quick Start

Create ~/.codex/config.toml:

model = "gpt-5-codex"
approval_policy = "untrusted"
sandbox_mode = "read-only"

Configuration Files & Locations

| Scope | Location | Purpose | Git Tracked? |
| --- | --- | --- | --- |
| Global | ~/.codex/config.toml (or $CODEX_HOME/config.toml) | User defaults, model, approval, MCP servers | No |
| Per-command | --config key=value flags | Override any setting for a single command | N/A |
| Environment | $CODEX_HOME | Config location (defaults to ~/.codex) | No |

Authentication Setup

Codex CLI supports two authentication methods for OpenAI API access.

Method 1: ChatGPT OAuth (Interactive)

Login via browser-based OAuth flow:

codex login

This opens your browser to authenticate with ChatGPT. Credentials are stored in:

  • File storage (default): $CODEX_HOME/auth.json (permissions: 0600)
  • Keyring storage: OS-level secure storage (see “Control Credential Storage” below)

Method 2: API Key (Programmatic)

For CI/CD, automation, or non-interactive environments:

  1. Get a key from the OpenAI API dashboard
  2. Store in environment:
    export OPENAI_API_KEY="sk-proj-..."
  3. Codex automatically detects and uses the environment variable

No configuration file changes are needed; Codex checks for OPENAI_API_KEY by default.
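
In automation, the API key usually pairs with non-interactive settings. A minimal sketch using only options documented later on this page (intended for use with the codex exec subcommand, which never prompts):

# Non-interactive defaults for CI (sketch)
approval_policy = "never"         # never prompt; reserved for codex exec
sandbox_mode = "workspace-write"  # allow edits in the working directory only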

Enterprise: Forcing a Login Method

Restrict authentication methods organization-wide:

forced_login_method = "chatgpt"

forced_login_method = "api"

forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"

Official managed configuration docs

Control Credential Storage

Choose where authentication credentials are stored:

cli_auth_credentials_store = "file"  # default: auth.json in $CODEX_HOME
cli_auth_credentials_store = "keyring"  # OS-level secure storage (macOS Keychain, Windows Credential Manager, Linux Secret Service)
cli_auth_credentials_store = "auto"  # Try keyring, fallback to file

Model Selection

| Option | Type | Default | Purpose |
| --- | --- | --- | --- |
| model | string | "gpt-5-codex" (macOS/Linux), "gpt-5" (Windows) | Primary AI model |
| model_provider | string | "openai" | Provider from the model_providers map |
| model_context_window | int | Auto-detected | Context window in tokens |
| model_max_output_tokens | int | Auto-detected | Max response length |
| model_reasoning_effort | string | "medium" | Reasoning models: "minimal", "low", "medium", "high" |
| model_reasoning_summary | string | "auto" | Summary style: "auto", "concise", "detailed", "none" |
| model_verbosity | string | "medium" | GPT-5 output length: "low", "medium", "high" |

Example:

model = "gpt-5"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
model_verbosity = "low"

Custom Model Providers

Add non-OpenAI providers that expose an OpenAI-compatible API:

model = "mistral"
model_provider = "mistral"

[model_providers.mistral]
name = "Mistral AI"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
wire_api = "chat"  # or "responses"

Official model provider examples

Safety & Approval Controls

Approval Policy

Controls when Codex prompts for permission before executing commands:

| Value | Behavior |
| --- | --- |
| "untrusted" | Prompt before running commands not in the "trusted" set (default) |
| "on-failure" | Prompt only when sandboxed execution fails |
| "on-request" | Model decides when to escalate for permissions |
| "never" | Never prompt; auto-retry on failure (used by the codex exec subcommand) |

Example:

approval_policy = "untrusted"  # safest for interactive use

Sandbox Mode

OS-level execution constraints for model-generated commands:

| Mode | Behavior |
| --- | --- |
| "read-only" | Can read any file; blocks writes and network access (default) |
| "workspace-write" | Can write to the current working directory and $TMPDIR; network blocked |
| "danger-full-access" | No sandboxing; full disk and network access |

Example:

sandbox_mode = "read-only"  # recommended default

[sandbox_workspace_write]
writable_roots = ["/Users/YOU/.pyenv/shims"]  # additional writable paths
network_access = false  # set true to allow network access (default: false)
exclude_tmpdir_env_var = false  # set true to drop $TMPDIR from writable roots (default: false)
exclude_slash_tmp = false  # set true to drop /tmp from writable roots (default: false)

Command-line overrides:

codex --sandbox read-only  # safest
codex --sandbox workspace-write  # development
codex --sandbox danger-full-access  # unrestricted (use with caution)
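
Approval policy and sandbox mode compose. One common development-leaning pairing, sketched using only the options documented above:

approval_policy = "on-failure"    # prompt only when a sandboxed command fails
sandbox_mode = "workspace-write"  # allow edits in the working directory

[sandbox_workspace_write]
network_access = true             # opt in to network for package installs, etc.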

Official sandbox documentation

MCP Servers & Tool Integration

Configure Model Context Protocol servers for external tool access.

STDIO Servers

Launch MCP servers directly via commands:

[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_PERSONAL_ACCESS_TOKEN = "${GITHUB_TOKEN}" }
startup_timeout_sec = 20  # default: 10
tool_timeout_sec = 30  # default: 60
enabled = true  # default: true
enabled_tools = ["search", "get_file"]  # optional allowlist
disabled_tools = ["delete"]  # optional denylist
cwd = "/Users/user/code/my-server"  # optional working directory

Streamable HTTP Servers

Connect to HTTP-based MCP servers:

[mcp_servers.figma]
url = "https://mcp.figma.com/mcp"
bearer_token_env_var = "FIGMA_TOKEN"  # optional auth
http_headers = { "X-Custom" = "value" }  # optional static headers
env_http_headers = { "X-Dynamic" = "ENV_VAR" }  # optional env-based headers

For OAuth support (experimental):

experimental_use_rmcp_client = true

Then login: codex mcp login figma

MCP CLI Commands

codex mcp add github -- npx -y @modelcontextprotocol/server-github  # add a STDIO server
codex mcp list  # show configured servers
codex mcp get github  # show server details
codex mcp remove github
codex mcp login figma  # OAuth login for streamable HTTP servers
codex mcp logout figma

Official MCP integration docs

Profiles for Multiple Environments

Define reusable configuration bundles:

model = "gpt-5-codex"
approval_policy = "untrusted"

profile = "o3"

[profiles.o3]
model = "o3"
model_provider = "openai"
approval_policy = "never"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"

[profiles.gpt3]
model = "gpt-3.5-turbo"
model_provider = "openai-chat-completions"
approval_policy = "untrusted"

Switch profiles:

codex --profile o3  # via command line
profile = "o3"  # in config.toml

Precedence (highest to lowest; a worked sketch follows the list):

  1. Command-line flags (--model o3)
  2. --profile selection
  3. config.toml root settings
  4. Built-in defaults
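
For example, given the sketch below, running codex --profile o3 --model gpt-5 resolves the model to "gpt-5": the command-line flag overrides the profile's "o3", which in turn overrides the root default.

model = "gpt-5-codex"  # root default (priority 3)
profile = "o3"         # default profile selection (priority 2)

[profiles.o3]
model = "o3"

# codex --profile o3 --model gpt-5   ->  model = "gpt-5" (flag wins, priority 1)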

Official profiles documentation

Feature Flags

Enable experimental and optional capabilities:

[features]
streamable_shell = true  # streamable exec tool
web_search_request = true  # allow model to request web searches
apply_patch_freeform = false  # freeform apply_patch tool (beta)
view_image_tool = true  # enable image uploads (stable, default: true)
rmcp_client = false  # OAuth for streamable HTTP MCP servers
ghost_commit = false  # create ghost commit each turn
experimental_sandbox_command_assessment = false  # model-based risk assessment

Full feature flag reference

Shell Environment Policy

Control which environment variables are passed to subprocesses:

[shell_environment_policy]
inherit = "core"  # "all", "core", or "none"
ignore_default_excludes = false  # set true to skip the default filter of names containing KEY, SECRET, or TOKEN
exclude = ["AWS_*", "AZURE_*"]  # case-insensitive globs to drop
set = { CI = "1" }  # force-set values
include_only = ["PATH", "HOME"]  # allowlist: if provided, only these pass

Official environment policy docs

History & Session Tracking

[history]
persistence = "save-all"  # or "none" to disable

Messages are saved to $CODEX_HOME/history.jsonl (permissions: 0600).

AGENTS.md Discovery

Codex automatically discovers project-specific instructions from AGENTS.md files:

project_doc_max_bytes = 32768  # max bytes to read (default: 32 KiB)
project_doc_fallback_filenames = ["CLAUDE.md", ".exampleagentrules.md"]  # fallback names

AGENTS.md documentation

Observability & Telemetry

OpenTelemetry Export

Export structured events to OTLP collectors:

[otel]
environment = "staging"  # defaults to "dev"
exporter = "none"  # "none", otlp-http, or otlp-grpc (pick exactly one)
log_user_prompt = false  # redact prompt text by default

# OTLP over HTTP:
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}

# Or OTLP over gRPC:
exporter = { otlp-grpc = {
  endpoint = "https://otel.example.com:4317",
  headers = { "x-otlp-meta" = "abc123" }
}}

Events emitted: codex.conversation_starts, codex.api_request, codex.sse_event, codex.user_prompt, codex.tool_decision, codex.tool_result

Official OTEL documentation

Custom Notification Script

Execute a program on agent events:

notify = ["python3", "/Users/you/.codex/notify.py"]

Your script receives a single JSON argument describing the event:

{
  "type": "agent-turn-complete",
  "thread-id": "b5f6c1c2-...",
  "turn-id": "12345",
  "cwd": "/Users/alice/projects/example",
  "input-messages": ["Rename foo to bar"],
  "last-assistant-message": "Rename complete."
}

TUI Desktop Notifications

[tui]
notifications = true  # enable all notification types
# or filter to specific events:
notifications = ["agent-turn-complete", "approval-requested"]

Note: Requires terminal with escape code support (iTerm2, Ghostty, WezTerm; not macOS Terminal.app or VS Code terminal).

Hide/Show Reasoning

hide_agent_reasoning = true  # suppress reasoning events (default: false)
show_raw_agent_reasoning = true  # show raw chain-of-thought (default: false)

Best Practices

Do:

  • Store API keys in environment variables (never commit credentials)
  • Use approval_policy = "untrusted" for interactive development
  • Set sandbox_mode = "read-only" or "workspace-write" (avoid "danger-full-access")
  • Use profiles for different environments/models
  • Document MCP servers and their environment variables in project README
  • Run codex mcp list to verify MCP server configuration
  • Keep $CODEX_HOME/config.toml minimal; override per-command when needed
  • Use AGENTS.md for project-specific instructions

Don’t:

  • Commit auth.json or API keys to version control
  • Set sandbox_mode = "danger-full-access" without understanding risks
  • Use approval_policy = "never" in interactive sessions (reserved for codex exec)
  • Mix authentication methods without testing
  • Ignore MCP server startup failures
  • Disable view_image_tool if you need screenshot analysis
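
Taken together, a conservative baseline reflecting these practices might look like this (a sketch; loosen per project as needed):

model = "gpt-5-codex"
approval_policy = "untrusted"     # prompt before untrusted commands
sandbox_mode = "workspace-write"  # writes confined to the working directory

[shell_environment_policy]
inherit = "core"                  # keep most secrets out of subprocesses by default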

Troubleshooting & Validation

Check configuration:

cat ~/.codex/config.toml | toml-lint

Common issues:

| Problem | Solution |
| --- | --- |
| "API key invalid" error | Verify OPENAI_API_KEY is set: echo $OPENAI_API_KEY |
| "Not authenticated" error | Run codex login for ChatGPT OAuth |
| TOML syntax error | Use https://www.toml-lint.com/ to validate syntax |
| MCP server fails to start | Check the command path (which npx) and test it: npx -y @modelcontextprotocol/server-github --help |
| MCP server timeout | Increase startup_timeout_sec in the server's config |
| "Permission denied" in sandbox | Check sandbox_mode; use workspace-write for write access |
| Approval prompts too frequent | Change approval_policy to "on-request" or "on-failure" |

Debug logging:

ls ~/.codex/*.log

MCP troubleshooting:

codex mcp list  # show all configured servers
codex mcp get github  # show specific server details
codex mcp remove github  # remove broken server
codex mcp add github -- npx -y @modelcontextprotocol/server-github  # re-add

Last updated: 2025-11-12