Model mapping

How free-claude-code maps Claude model tiers to provider models, and how to configure per-tier routing and mix providers.

free-claude-code maps Claude Code’s model tier requests (Opus, Sonnet, Haiku) to specific models on your configured providers. Mix and match providers for each tier.

Model tier mapping

Claude Code requests one of three model tiers:

| Claude Tier | Variable | Use Case |
| --- | --- | --- |
| Opus | MODEL_OPUS | Complex reasoning, large contexts |
| Sonnet | MODEL_SONNET | Balanced performance |
| Haiku | MODEL_HAIKU | Fast, simple tasks |
| Any other | MODEL | Fallback for unknown models |

Basic configuration

Use the same provider for all tiers:

NVIDIA_NIM_API_KEY="nvapi-your-key"

MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"
MODEL_SONNET="nvidia_nim/qwen/qwen3.5-397b-a17b"
MODEL_HAIKU="nvidia_nim/z-ai/glm4.7"
MODEL="nvidia_nim/z-ai/glm4.7"

Mix providers

Route different tiers to different providers:

# Opus: Premium NVIDIA NIM model
MODEL_OPUS="nvidia_nim/moonshotai/kimi-k2.5"

# Sonnet: Free OpenRouter model
MODEL_SONNET="open_router/deepseek/deepseek-r1-0528:free"

# Haiku: Local model for speed
MODEL_HAIKU="lmstudio/unsloth/GLM-4.7-Flash-GGUF"

# Fallback: Free tier
MODEL="open_router/stepfun/step-3.5-flash:free"

Provider prefixes

Valid prefixes for MODEL and the MODEL_* variables:

| Provider | Prefix | Example |
| --- | --- | --- |
| NVIDIA NIM | nvidia_nim/ | nvidia_nim/z-ai/glm4.7 |
| OpenRouter | open_router/ | open_router/deepseek/deepseek-r1-0528:free |
| DeepSeek | deepseek/ | deepseek/deepseek-chat |
| LM Studio | lmstudio/ | lmstudio/unsloth/GLM-4.7-Flash-GGUF |
| llama.cpp | llamacpp/ | llamacpp/local-model |

Invalid prefixes cause an error at request time.
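Since invalid prefixes only surface at request time, it can help to validate up front. A minimal sketch, assuming the prefixes in the table above; the check_prefix function name is illustrative, not part of the proxy:

```shell
# Hypothetical pre-flight check mirroring the prefix rule above.
check_prefix() {
  case "$1" in
    nvidia_nim/*|open_router/*|deepseek/*|lmstudio/*|llamacpp/*) return 0 ;;
    *) return 1 ;;
  esac
}

check_prefix "nvidia_nim/z-ai/glm4.7" && echo "ok"        # prints "ok"
check_prefix "bogus/model" || echo "invalid prefix"       # prints "invalid prefix"
```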

Fallback behavior

When MODEL_OPUS, MODEL_SONNET, or MODEL_HAIKU is empty or unset, that tier falls back to MODEL:

MODEL_OPUS=""                       # Uses MODEL
MODEL_SONNET=""                     # Uses MODEL
MODEL_HAIKU=""                      # Uses MODEL
MODEL="nvidia_nim/z-ai/glm4.7"      # All tiers use this

This is the simplest configuration—one model for everything.
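The fallback can be expressed with plain shell parameter expansion; a sketch under the assumption that an empty tier variable behaves like an unset one (resolve_model is an illustrative name, not part of the proxy):

```shell
# Resolve the effective model for a tier: use the tier variable when
# non-empty, otherwise fall back to MODEL.
MODEL="nvidia_nim/z-ai/glm4.7"
MODEL_OPUS=""

resolve_model() {
  printf '%s\n' "${1:-$MODEL}"
}

resolve_model "$MODEL_OPUS"    # prints nvidia_nim/z-ai/glm4.7 (fallback)
resolve_model "open_router/deepseek/deepseek-r1-0528:free"
```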

Model picker (interactive)

The claude-pick script lets you choose any model at launch time.

Setup

  1. Install fzf:
brew install fzf
  2. Add the alias to your shell config:
# ~/.zshrc or ~/.bashrc
alias claude-pick="/absolute/path/to/free-claude-code/claude-pick"
  3. Reload and run:
source ~/.zshrc
claude-pick

How it works

claude-pick reads your .env, lists all available models from active providers, and lets you select one interactively. It then launches Claude Code with that model as MODEL.
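A rough sketch of the discovery step, assuming a dotenv-style file and the NVIDIA_NIM_API_KEY variable shown earlier (the other key names and the active_providers function are illustrative, not the script's actual internals):

```shell
# Scan a .env file for provider API keys to decide which catalogs to list.
active_providers() {
  env_file="$1"
  grep -q '^NVIDIA_NIM_API_KEY=' "$env_file" && echo "nvidia_nim"
  grep -q '^OPENROUTER_API_KEY=' "$env_file" && echo "open_router"   # assumed name
  grep -q '^DEEPSEEK_API_KEY=' "$env_file" && echo "deepseek"        # assumed name
  return 0
}

printf 'NVIDIA_NIM_API_KEY=nvapi-your-key\n' > /tmp/example.env
active_providers /tmp/example.env    # prints nvidia_nim
```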

Fixed model aliases

For common models, create shell aliases without the picker:

# ~/.zshrc
alias claude-kimi='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:moonshotai/kimi-k2.5" claude'
alias claude-deepseek='ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc:deepseek/deepseek-chat" claude'

An auth token of the form freecc:<model> tells the proxy which model to use.
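The two aliases differ only in the model path, so they can be folded into one shell function; a sketch (claude_with is a hypothetical name; the port matches the aliases above):

```shell
# Launch Claude Code through the local proxy with an arbitrary model.
claude_with() {
  ANTHROPIC_BASE_URL="http://localhost:8082" \
  ANTHROPIC_AUTH_TOKEN="freecc:$1" \
  claude
}

# Usage: claude_with deepseek/deepseek-chat
```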

Provider-specific model lists

NVIDIA NIM

Browse: build.nvidia.com/explore/discover

Update local list:

curl "https://integrate.api.nvidia.com/v1/models" > nvidia_nim_models.json
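To turn the cached catalog into a plain list of identifiers, a sketch using only POSIX tools, assuming the response follows the OpenAI-style {"data": [{"id": ...}]} shape (the sample file contents here are a stand-in for the real curl output):

```shell
# A tiny stand-in for nvidia_nim_models.json (the real file comes from curl).
printf '{"data":[{"id":"z-ai/glm4.7"},{"id":"moonshotai/kimi-k2.5"}]}' \
  > nvidia_nim_models.json

# Extract model ids with grep/sed (no jq needed).
grep -o '"id": *"[^"]*"' nvidia_nim_models.json |
  sed 's/.*"\(.*\)"/\1/' |
  sort
# prints:
# moonshotai/kimi-k2.5
# z-ai/glm4.7
```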

OpenRouter

Browse: openrouter.ai/models

Free models: openrouter.ai/collections/free-models

LM Studio

Available models depend on what you have downloaded in LM Studio. Check its Developer tab for the exact identifiers.

llama.cpp

Model name is arbitrary (llama-server ignores it). Use any identifier you prefer:

MODEL="llamacpp/my-awesome-model"

Thinking and reasoning

Set ENABLE_THINKING to control reasoning output:

ENABLE_THINKING=true    # Parse <thinking> tags and reasoning_content
ENABLE_THINKING=false   # Suppress thinking blocks

Models with native reasoning:

  • DeepSeek R1 (via OpenRouter or direct API)
  • Kimi K2.5 (via NVIDIA NIM)

These output reasoning content that the proxy converts to Claude’s thinking blocks.

Troubleshooting

“Invalid provider prefix”: Check that your MODEL variables start with a valid prefix (nvidia_nim/, open_router/, etc.).

“Model not found”: The model name may have changed or be unavailable. Check the provider’s model catalog.

Claude Code ignores model mapping: Verify ANTHROPIC_BASE_URL points to your proxy, not directly to Anthropic.

Wrong model being used: Check that MODEL_OPUS, MODEL_SONNET, and MODEL_HAIKU are set correctly. Empty values fall back to MODEL.
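When debugging the last two cases, it helps to print exactly the variables the session will use; a quick check with example values (names are the ones documented on this page):

```shell
# Example values; substitute your real configuration.
export MODEL="nvidia_nim/z-ai/glm4.7"
export ANTHROPIC_BASE_URL="http://localhost:8082"

# Show the routing-related environment as the session will see it.
env | grep -E '^(MODEL(_OPUS|_SONNET|_HAIKU)?|ANTHROPIC_BASE_URL)=' | sort
```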