Using OpenRouter's free models with OpenAI Codex

Explore how to access large language models for free through the OpenRouter platform, and how to plug them into tools such as OpenAI Codex.
Core content:
1. Introduction to the OpenRouter platform and the large language models it supports
2. Detailed steps to register for an API key and configure the environment
3. Sample code for sending requests using the OpenRouter API
OpenRouter is a unified API platform that gives developers access to hundreds of large language models (LLMs) through a single interface, including commercial models (such as OpenAI's GPT series, Anthropic's Claude, and Google's Gemini) and open-source models (such as Meta's Llama and Mistral).
Get started quickly with OpenRouter.
Step 1: Register and get an API key
Visit openrouter.ai and create an account.
Once logged in, navigate to the Keys page and create an API key.
Name your key (e.g. "bytenote").
Optionally set a Credit Limit, or leave it blank for no limit.
Purchase or top up credits to use paid models.
OpenRouter offers a small number of initial free credits, and some models (such as some small open source models) are completely free.
Step 2: Configure the environment
Save the API key as an environment variable, or configure it in your project:
export OPENROUTER_API_KEY="your-api-key-here"
Or create a .env file in the project root directory:
OPENROUTER_API_KEY=your-api-key-here
OpenRouter itself does not read this file; load it in your project with a .env loader before creating the client.
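A minimal sketch of loading the key with the python-dotenv package (pip install python-dotenv); the package choice is an assumption, and any .env loader works:

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ
api_key = os.environ["OPENROUTER_API_KEY"]
print(api_key[:8] + "...")  # sanity check; avoid logging the full key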
Step 3: Send API request
OpenRouter's API is highly compatible with OpenAI's API, so you can use OpenAI's SDK or send HTTP requests directly.
Here is a simple example using Python's openai library:
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<YOUR_OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
    model="openai/gpt-3.5-turbo",  # any supported model ID
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ],
    extra_headers={
        "HTTP-Referer": "<YOUR_SITE_URL>",  # optional, for the OpenRouter leaderboard
        "X-Title": "<YOUR_SITE_NAME>",      # optional, application name
    },
)

print(completion.choices[0].message.content)
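Because the API is OpenAI-compatible, you can also hit the REST endpoint directly. A minimal sketch with Python's requests library, reading the key from the environment variable set in Step 2:

import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])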
Note:
Replace <YOUR_OPENROUTER_API_KEY> with your actual key.
The model name must include the organization prefix, such as openai/gpt-3.5-turbo or meta-llama/llama-3.1-8b-instruct.
View the list of supported models:
https://openrouter.ai/models
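The same list is available programmatically. A small sketch using Python's requests library; the response shape assumed here ("data" entries with an "id" field) matches the public models endpoint at the time of writing:

import requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = resp.json()["data"]

# Free variants carry the ":free" suffix in their model ID.
free_ids = [m["id"] for m in models if m["id"].endswith(":free")]
print(f"{len(models)} models total, {len(free_ids)} free")
print(free_ids[:10])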
Step 4: Model selection and optimization
Model selection: OpenRouter provides multiple models, some free (such as google/gemma-2-9b-it:free), some paid by token.
Model behavior can be adjusted through dynamic variant suffixes such as :free, :nitro (optimized for speed), or :floor (optimized for cost), appended to the model ID as shown in the sketch below.
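A minimal sketch using the free Gemma variant mentioned above (which specific models offer a :free variant changes over time, so check the models page):

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# ":free" selects the free-tier variant; ":nitro" or ":floor" are
# appended to the model ID in the same way.
completion = client.chat.completions.create(
    model="google/gemma-2-9b-it:free",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)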
Intelligent routing: OpenRouter automatically selects the best provider based on cost and availability by default.
You can customize the routing strategy through the provider parameter, for example:
"provider": { "order": ["Anthropic", "Together"], "allow_fallbacks": false, "data_collection": "deny" # Prevent data from being used for training. This will allow you to specify a preferred provider and disable data collection. }
OpenRouter now supports multimodal requests (text and images).
Images (URL or base64 encoded) can be sent via the /api/v1/chat/completions endpoint.
Example (the model must support image input; a vision-capable model such as openai/gpt-4o is used here):

{
  "model": "openai/gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image"},
        {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
      ]
    }
  ]
}
Some models also support native image generation; for those, include modalities: ["image", "text"] in the request body, as sketched below.
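A hedged sketch of an image-generation request. The model ID is a placeholder (pick one that lists image output on the models page), and modalities is an OpenRouter extension passed via extra_body:

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="<an-image-output-capable-model>",  # placeholder, not a real ID
    messages=[{"role": "user", "content": "Draw a small red square on white"}],
    extra_body={"modalities": ["image", "text"]},
)
# Generated images come back on the assistant message; dump the raw
# message to inspect the exact field layout for your chosen model.
print(completion.choices[0].message)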
Model and Provider Routing
Auto Router: automatically selects a suitable high-quality model for your prompt via a special model ID (openrouter/auto), powered by NotDiamond.
Fallback mechanism: If the primary model or provider is unavailable, OpenRouter automatically switches to the backup model or provider to ensure high availability.
Custom routing: Specify the preferred model or provider via the models or provider parameters.
For example, list models to try in order, falling back if the first is unavailable:
{ "models": ["mistralai/mixtral-8x7b-instruct", "meta-llama/llama-3.1-8b-instruct"], "route": "fallback" }
Using OpenRouter with Codex CLI
Codex CLI can use OpenRouter as its model provider via the --provider flag:
Set the OpenRouter API key:
export OPENROUTER_API_KEY="your-api-key-here"
Run the Codex CLI and specify the OpenRouter model:
codex --provider openrouter "explain this codebase to me"
Or specify it in the Codex configuration file (~/.codex/config.yaml):
provider: openrouter
model: meta-llama/llama-3.1-8b-instruct
Through OpenRouter, Codex CLI can reach many different models, making it well suited to terminal tasks such as code generation and file operations.