Codex Mobile with Third-Party API: Exploiting the Auth/Model Layer Decoupling


After Codex Mobile entered grey-scale rollout, logging in with a ChatGPT account lets you remotely control your local Codex instance, and the experience is genuinely smooth. But the most common question in my inbox is: can it work with third-party API relays?
I wasn’t sure at first. Codex has strict interface validation, and regular relay services get rejected outright. After an afternoon of digging through config files, I discovered something: the Auth and Model layers run completely independently. That’s the opening.
Here’s the full configuration walkthrough so you can replicate it.
How It Works: Two Gates, Independent Control
When Codex processes a conversation request, it passes through two checkpoints.
The first is the Auth layer. It handles the “who are you” question: login state, Plus membership, plugin permissions, Mobile unlock, and quota checks. Once verified, it doesn’t care which API provider you use downstream.
The second is the Model layer. This handles the actual work: sending your prompt to a model and returning the response. It reads from config.toml to determine the provider.
Here’s the key — these two layers are decoupled. The Auth layer validates your ChatGPT account. The Model layer reads your config. They don’t talk to each other.
Think of it this way: the Auth layer is like a building access card that proves you’re a resident. The Model layer is like the contractor you hire to do renovations. The access card doesn’t care which contractor you pick.
So the strategy is straightforward: keep Auth using your ChatGPT account for access, and quietly swap the Model layer to coding.rexai.top.
Three Things You Need
- A ChatGPT account — Free tier is enough. Codex Mobile supports free users remotely controlling a local Codex instance.
- A relay account that supports the OpenAI Responses API. I recommend coding.rexai.top — my go-to relay service, confirmed working. Why it matters, I’ll explain shortly.
- Two config file paths:
~/.codex/auth.json and ~/.codex/config.toml
A few words on point two. Codex sends requests using OpenAI’s Responses API, which maps to the /v1/responses endpoint. Most relay services only support the older /v1/chat/completions path — hit them with a Responses request and you get a 404 or 405. coding.rexai.top is one of the few that explicitly supports the Responses API.
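You can sanity-check a relay for this yourself before committing to it. The sketch below is mine, not part of Codex (the helper names are invented): it derives the Responses endpoint from a base_url the same way Codex does, and interprets the status codes a probe of that path would return.

```python
def responses_endpoint(base_url: str) -> str:
    """Codex appends /responses to the configured base_url, so
    https://coding.rexai.top/api/v1 becomes .../api/v1/responses."""
    return base_url.rstrip("/") + "/responses"


def relay_lacks_responses_api(status: int) -> bool:
    """A 404 or 405 from that path is the classic sign the relay
    only implements /v1/chat/completions."""
    return status in (404, 405)
```

A 200 (or even a 401, which at least proves the route exists) means the relay is worth trying; a 404/405 means don't bother.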
Configuration (Four Steps)
Step 1: Log In First, Then Edit Config
The order here matters.
Open Codex normally, log in with your ChatGPT account, and make sure the login state is healthy. Only then should you touch the config files.
If you edit the config first and try to log in after, the Auth layer breaks — because you’ve already modified the validation chain.
Step 2: Edit auth.json
Open ~/.codex/auth.json in a text editor and change these two fields:
{
  "auth_mode": "chatgpt",
  "OPENAI_API_KEY": null
}
auth_mode stays "chatgpt", meaning "keep verifying login through ChatGPT." OPENAI_API_KEY becomes null, meaning "don't route requests through an official API key and burn its quota."
Leave every other field untouched.
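If you'd rather not hand-edit JSON, a few lines of Python make the same change while guaranteeing every other field survives. A sketch, assuming the default ~/.codex/auth.json location (the function name is mine):

```python
import json
from pathlib import Path

AUTH_PATH = Path.home() / ".codex" / "auth.json"


def patch_auth(path: Path = AUTH_PATH) -> None:
    """Set auth_mode and OPENAI_API_KEY, leaving every other field intact."""
    data = json.loads(path.read_text())
    data["auth_mode"] = "chatgpt"   # keep validating login through ChatGPT
    data["OPENAI_API_KEY"] = None   # serialized as null: no official key
    path.write_text(json.dumps(data, indent=2))
```

Because the file is read, mutated, and rewritten as a whole, token fields and anything else Codex stores there come through untouched.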
Step 3: Edit config.toml
Open ~/.codex/config.toml and append this block at the end:
model_provider = "rexai"
model = "openai/gpt-4.1"
[model_providers.rexai]
name = "RexAI"
base_url = "https://coding.rexai.top/api/v1"
wire_api = "responses"
experimental_bearer_token = "your API Key"
requires_openai_auth = true
Field-by-field explanation:
- model_provider: tells Codex which provider to use. Must match the name in [model_providers.xxx] below.
- model: the model name. Format is openai/model-name; the openai/ prefix is required or it won't find the model.
- base_url: the relay API endpoint, pointing to coding.rexai.top.
- wire_api: must be "responses". This determines which protocol Codex uses to send requests.
- experimental_bearer_token: your API Key, generated from the relay dashboard.
- requires_openai_auth: set to true. This is critical: it makes Codex think it's still inside the OpenAI ecosystem, preventing permission errors from the provider switch.
Step 4: Verify
Save both files, completely quit Codex (not minimize — quit), and reopen it.
Send a test message and check if you get a response.
To confirm the traffic actually goes through the relay, check either the relay dashboard usage log, or the usage stats in Codex desktop’s bottom-left profile section. If both show records, the Model layer switch worked.
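You can also probe the relay directly, outside Codex. This sketch builds a minimal Responses API call with the stdlib; the {"model": ..., "input": ...} payload shape follows the OpenAI Responses API, and the URL and key below are placeholders:

```python
import json
import urllib.request


def build_probe(base_url: str, api_key: str, model: str) -> urllib.request.Request:
    """Build a minimal Responses API request for a manual smoke test."""
    body = json.dumps({"model": model, "input": "ping"}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/responses",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Manual check (network call, so commented out):
# req = build_probe("https://coding.rexai.top/api/v1", "your-key", "openai/gpt-4.1")
# with urllib.request.urlopen(req) as r:
#     print(r.status)  # 2xx means the relay answered on /responses
```

If this returns a response and the relay dashboard logs it, you know the endpoint works independently of any Codex-side issue.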
What It Looks Like After Switching
Once configured, mobile and desktop mirror each other. Desktop shows the relay as the provider, and mobile syncs accordingly.
This means when you open the Codex App on your phone and remotely control your local instance, the actual inference requests route through the relay — not through your official OpenAI quota.
Pitfalls I Hit
Conversation history disappears
After switching providers, history is bound to the provider. Switching means starting fresh — previous conversations are gone. No workaround for this currently. Think before you switch.
Model name prefix is mandatory
Relay services name models as openai/gpt-4.1, openai/gpt-4o — always with the provider prefix. Writing just gpt-4.1 throws a model-not-found error. Don’t ask me how I know.
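If you ever script your config, a one-line guard spares you that error. A trivial sketch (the helper name is mine):

```python
def relay_model_name(name: str) -> str:
    """Add the openai/ prefix the relay expects when it's missing."""
    return name if name.startswith("openai/") else "openai/" + name
```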
Why other relays don’t work
Repeating this because it matters: with wire_api = "responses", Codex sends requests to /v1/responses. This is a completely different endpoint from the familiar /v1/chat/completions. If the relay hasn’t specifically adapted for the Responses API, you’ll get 404 or 405.
The foundation of this entire approach is Codex’s decoupled Auth/Model architecture. Your ChatGPT account proves who you are. The relay does the actual inference. Access card is access card, contractor is contractor — each handles its own job.
FAQ
Q: Do Plus benefits still work after switching?
A: Yes. The Auth layer still validates through ChatGPT, so Plus benefits, plugin permissions, and Mobile unlock are unaffected. Only the actual inference traffic routes through the relay.
Q: Are there alternatives to coding.rexai.top?
A: Theoretically, any relay supporting the /v1/responses endpoint should work. coding.rexai.top is the one I use — solid stability and speed. If you find others, leave a comment.
Q: Can mobile and desktop use different providers?
A: No. The config file is global — both endpoints share one configuration.
