
Environment variables

Complete reference for all free-claude-code configuration options. Configure providers, rate limits, timeouts, authentication, and messaging.


Core provider variables

| Variable | Description | Default |
| --- | --- | --- |
| MODEL | Fallback model (provider/model/name format) | nvidia_nim/z-ai/glm4.7 |
| MODEL_OPUS | Model for Claude Opus requests | (falls back to MODEL) |
| MODEL_SONNET | Model for Claude Sonnet requests | (falls back to MODEL) |
| MODEL_HAIKU | Model for Claude Haiku requests | (falls back to MODEL) |
| NVIDIA_NIM_API_KEY | NVIDIA API key | required for NIM |
| ENABLE_THINKING | Global switch for thinking/reasoning | true |
| OPENROUTER_API_KEY | OpenRouter API key | required for OpenRouter |
| DEEPSEEK_API_KEY | DeepSeek API key | required for DeepSeek |
| LM_STUDIO_BASE_URL | LM Studio server URL | http://localhost:1234/v1 |
| LLAMACPP_BASE_URL | llama.cpp server URL | http://localhost:8080/v1 |
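For example, a minimal .env that sends Opus-tier requests to a different model than the fallback might look like this (the OpenRouter model name and key values are placeholders, not recommendations):

```shell
# Fallback for all tiers, in provider/model/name format
MODEL="nvidia_nim/z-ai/glm4.7"
# Route Opus-tier requests elsewhere (illustrative model name)
MODEL_OPUS="openrouter/deepseek/deepseek-r1"
# MODEL_SONNET and MODEL_HAIKU are unset, so they fall back to MODEL

# Keys for the providers referenced above (placeholders)
NVIDIA_NIM_API_KEY="nvapi-..."
OPENROUTER_API_KEY="sk-or-..."
```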

Proxy and network

| Variable | Description | Default |
| --- | --- | --- |
| NVIDIA_NIM_PROXY | Proxy for NIM requests | "" |
| OPENROUTER_PROXY | Proxy for OpenRouter | "" |
| LMSTUDIO_PROXY | Proxy for LM Studio | "" |
| LLAMACPP_PROXY | Proxy for llama.cpp | "" |

All four proxy variables accept http:// and socks5:// URLs:

NVIDIA_NIM_PROXY="http://username:password@host:port"
NVIDIA_NIM_PROXY="socks5://host:port"

Rate limiting and timeouts

| Variable | Description | Default |
| --- | --- | --- |
| PROVIDER_RATE_LIMIT | Requests per rate window | 40 |
| PROVIDER_RATE_WINDOW | Rate limit window in seconds | 60 |
| PROVIDER_MAX_CONCURRENCY | Max simultaneous streams | 5 |
| HTTP_READ_TIMEOUT | Read timeout in seconds | 120 |
| HTTP_WRITE_TIMEOUT | Write timeout in seconds | 10 |
| HTTP_CONNECT_TIMEOUT | Connect timeout in seconds | 10 |

Lower the concurrency for local providers (LM Studio, llama.cpp) to avoid running out of memory:

PROVIDER_MAX_CONCURRENCY=1
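The semantics the two rate variables describe — at most PROVIDER_RATE_LIMIT requests inside any PROVIDER_RATE_WINDOW-second span — can be sketched as a sliding-window limiter. This is an illustration of the behavior, not the proxy's actual implementation:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests in any `window`-second span,
    mirroring PROVIDER_RATE_LIMIT / PROVIDER_RATE_WINDOW."""

    def __init__(self, limit=40, window=60.0):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # times of requests still inside the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, window=60.0)
print(limiter.allow(0.0))   # True
print(limiter.allow(1.0))   # True
print(limiter.allow(2.0))   # False: window already holds 2 requests
print(limiter.allow(61.0))  # True: the request at t=0 has expired
```

Requests that are denied are typically queued or retried after a delay rather than dropped; the table above only controls how many are let through per window.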

Authentication

| Variable | Description | Default |
| --- | --- | --- |
| ANTHROPIC_AUTH_TOKEN | Optional API key for proxy | "" |

When set, clients must provide the same token via the ANTHROPIC_AUTH_TOKEN header. Use this when:

  • Running on a public network
  • Sharing the server with restricted access
  • Adding a security layer
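A minimal setup might look like the following. The token value and proxy port are illustrative; the exact server start command is assumed to match the uvicorn invocation shown elsewhere in these docs:

```shell
# Server side: require a shared token (value is a placeholder)
ANTHROPIC_AUTH_TOKEN="my-shared-secret" uv run uvicorn server:app

# Client side: point Claude Code at the proxy and supply the same token
export ANTHROPIC_BASE_URL="http://localhost:8000"   # assumed proxy address
export ANTHROPIC_AUTH_TOKEN="my-shared-secret"
claude
```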

Messaging platforms

| Variable | Description | Default |
| --- | --- | --- |
| MESSAGING_PLATFORM | discord or telegram | discord |
| MESSAGING_RATE_LIMIT | Messages per window | 1 |
| MESSAGING_RATE_WINDOW | Messaging window (seconds) | 1 |

Discord

| Variable | Description | Default |
| --- | --- | --- |
| DISCORD_BOT_TOKEN | Discord bot token | "" |
| ALLOWED_DISCORD_CHANNELS | Comma-separated channel IDs | "" |

Telegram

| Variable | Description | Default |
| --- | --- | --- |
| TELEGRAM_BOT_TOKEN | Telegram bot token | "" |
| ALLOWED_TELEGRAM_USER_ID | Allowed user ID | "" |
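Putting the platform tables together, a messaging configuration might look like this (tokens and IDs are placeholders):

```shell
# Discord (the default platform)
MESSAGING_PLATFORM="discord"
DISCORD_BOT_TOKEN="your-bot-token"
ALLOWED_DISCORD_CHANNELS="123456789012345678,234567890123456789"

# Or Telegram instead:
# MESSAGING_PLATFORM="telegram"
# TELEGRAM_BOT_TOKEN="your-bot-token"
# ALLOWED_TELEGRAM_USER_ID="123456789"
```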

Agent workspace

| Variable | Description | Default |
| --- | --- | --- |
| CLAUDE_WORKSPACE | Directory where agent operates | ./agent_workspace |
| ALLOWED_DIR | Allowed directories for agent | "" |
| CLAUDE_CLI_BIN | Claude CLI binary name | claude |

ALLOWED_DIR restricts filesystem access. When empty, the agent can access any path the user has permission to read.
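For example, to confine the agent to one project directory (the path below is illustrative):

```shell
CLAUDE_WORKSPACE="./agent_workspace"
# Restrict filesystem access to a single directory tree
ALLOWED_DIR="/home/user/projects/my-app"
```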

Voice notes

| Variable | Description | Default |
| --- | --- | --- |
| VOICE_NOTE_ENABLED | Enable voice transcription | false |
| WHISPER_DEVICE | cpu, cuda, or nvidia_nim | cpu |
| WHISPER_MODEL | Model name or Hugging Face ID | base |
| HF_TOKEN | Hugging Face token (optional) | "" |

Local Whisper models

For WHISPER_DEVICE="cpu" or "cuda":

  • tiny (fastest, least accurate)
  • base (default)
  • small
  • medium
  • large-v2
  • large-v3
  • large-v3-turbo (best balance)

NVIDIA NIM Whisper models

For WHISPER_DEVICE="nvidia_nim":

  • openai/whisper-large-v3
  • nvidia/parakeet-ctc-1.1b-asr
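Combining the variables above, a voice-note configuration might look like this:

```shell
VOICE_NOTE_ENABLED=true

# Local transcription on GPU, using the model the list above
# describes as the best balance
WHISPER_DEVICE="cuda"
WHISPER_MODEL="large-v3-turbo"

# Or offload to NVIDIA NIM (needs NVIDIA_NIM_API_KEY to be set):
# WHISPER_DEVICE="nvidia_nim"
# WHISPER_MODEL="openai/whisper-large-v3"
```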

Request optimization

These intercept trivial requests locally to save API quota:

| Variable | Description | Default |
| --- | --- | --- |
| FAST_PREFIX_DETECTION | Enable fast prefix detection | true |
| ENABLE_NETWORK_PROBE_MOCK | Mock network probe requests | true |
| ENABLE_TITLE_GENERATION_SKIP | Skip title generation | true |
| ENABLE_SUGGESTION_MODE_SKIP | Skip suggestion mode | true |
| ENABLE_FILEPATH_EXTRACTION_MOCK | Mock filepath extraction | true |

Disable any of these if you need the actual LLM response for these operations:

ENABLE_TITLE_GENERATION_SKIP=false

.env.example

The repository includes a complete .env.example file. Copy it as a starting point:

cp .env.example .env

Then edit only the values you need to change. All variables have sensible defaults.

Configuration priority

Configuration is loaded in this order (later overrides earlier):

  1. Built-in defaults
  2. ~/.config/free-claude-code/.env (if using package install)
  3. .env file in project root
  4. Environment variables set in shell

Command-line environment variables take highest priority:

MODEL="nvidia_nim/moonshotai/kimi-k2.5" uv run uvicorn server:app