
Install

Install free-claude-code from source or as a package. Set up your environment and configure your first provider.


Choose between installing as a package (simpler, automatic updates) or from source (for development, customization, or contributing).

Option A: Install as a package

No repository clone required. The package installs command-line tools and manages its own virtual environment.

uv tool install git+https://github.com/Alishahryar1/free-claude-code.git

Initialize the configuration directory:

fcc-init

This creates ~/.config/free-claude-code/.env from the built-in template. Edit it with your API keys and model preferences.
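Before editing, you can confirm the file was actually created; a quick sanity check using the path from this guide:

```shell
# Verify that fcc-init created the config file (path from this guide)
CONFIG="$HOME/.config/free-claude-code/.env"
if [ -f "$CONFIG" ]; then
  echo "config found: $CONFIG"
else
  echo "config missing: run fcc-init first"
fi
```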

Start the server:

free-claude-code

Update to the latest version:

uv tool upgrade free-claude-code

Install with voice support

For Discord/Telegram voice note transcription:

# Local Whisper (offline, free, requires more disk space)
uv tool install "free-claude-code[voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"

# NVIDIA NIM voice (uses API credits)
uv tool install "free-claude-code[voice] @ git+https://github.com/Alishahryar1/free-claude-code.git"

# Both
uv tool install "free-claude-code[voice,voice_local] @ git+https://github.com/Alishahryar1/free-claude-code.git"

Option B: Install from source

Clone the repository for development, debugging, or contributing.

git clone https://github.com/Alishahryar1/free-claude-code.git
cd free-claude-code

The project uses uv for dependency management. All dependencies are pinned in uv.lock.

Install dependencies:

uv sync

Install with voice extras:

# Local Whisper
uv sync --extra voice_local

# NVIDIA NIM voice
uv sync --extra voice

# Both
uv sync --extra voice --extra voice_local

Copy the example environment file:

cp .env.example .env

Edit .env to configure your providers. See the Environment variables reference for all options.
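For scripted setups you can append the settings non-interactively instead of opening an editor; a minimal sketch using the NVIDIA NIM variable names from the provider sections of this guide (the key is a placeholder, and appending may duplicate entries already present in .env.example, so check the file afterwards):

```shell
# Sketch: append provider settings to .env non-interactively
# (variable names from this guide; key value is a placeholder)
cat >> .env <<'EOF'
NVIDIA_NIM_API_KEY="nvapi-your-key-here"
MODEL="nvidia_nim/z-ai/glm4.7"
EOF
added=$(grep -c '^MODEL=' .env)
echo "MODEL lines in .env: $added"
```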

Configure your first provider

Choose one provider to start with. You can add more later.

NVIDIA NIM

Get an API key at build.nvidia.com/settings/api-keys.

Edit .env:

NVIDIA_NIM_API_KEY="nvapi-your-key-here"
MODEL="nvidia_nim/z-ai/glm4.7"
ENABLE_THINKING=true
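A paste error in the key is a common first failure; you can catch the obvious case locally before starting the server. This sketch only assumes NIM keys begin with "nvapi-", as the placeholder above suggests:

```shell
# Quick local sanity check on the key format
# (assumption: NIM keys begin with "nvapi-", as in the placeholder above)
key="nvapi-your-key-here"
case "$key" in
  nvapi-*) result="key prefix looks right" ;;
  *)       result="unexpected key prefix" ;;
esac
echo "$result"
```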

OpenRouter

Get an API key at openrouter.ai/keys.

OPENROUTER_API_KEY="sk-or-your-key-here"
MODEL="open_router/stepfun/step-3.5-flash:free"

LM Studio (local)

No API key needed. Download LM Studio, load a model, then:

MODEL="lmstudio/unsloth/GLM-4.7-Flash-GGUF"
LM_STUDIO_BASE_URL="http://localhost:1234/v1"
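Before pointing the proxy at LM Studio, it is worth checking that its local server is actually listening. A sketch using the base URL configured above; it assumes LM Studio's server exposes an OpenAI-style /models endpoint:

```shell
# Check that LM Studio's local server is reachable (URL from this guide)
BASE="${LM_STUDIO_BASE_URL:-http://localhost:1234/v1}"
out=$(curl -s --max-time 2 "$BASE/models" || echo "LM Studio server not reachable at $BASE")
echo "$out"
```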

llama.cpp (local)

No API key needed. Run llama-server with a tool-capable GGUF:

LLAMACPP_BASE_URL="http://localhost:8080/v1"
MODEL="llamacpp/local-model"
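The guide assumes llama-server is already running but does not show the launch itself. A hypothetical example follows: the model path is a placeholder, and flag availability varies across llama.cpp versions, so verify with llama-server --help:

```shell
# Hypothetical launch command; model path is a placeholder and flags
# may differ across llama.cpp versions (check llama-server --help).
# --jinja enables the model's chat template, which tool calls rely on in recent builds.
launch="llama-server -m ./models/your-model.gguf --port 8080 --jinja"
echo "run in a separate terminal: $launch"
```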

Project structure

If installing from source, the repository is organized as:

free-claude-code/
├── server.py              # Entry point
├── api/                   # FastAPI routes, service layer, model routing
├── core/                  # Anthropic protocol helpers, SSE, parsers
├── providers/             # Provider registry and transports
├── messaging/             # Discord/Telegram bots and session management
├── config/                # Settings and logging
├── cli/                   # CLI entrypoints and process management
└── tests/                 # Pytest test suite

Verify installation

Start the proxy:

# Package install
free-claude-code

# Source install
uv run uvicorn server:app --host 0.0.0.0 --port 8082

Test with a curl request:

curl http://localhost:8082/v1/models

You should see a JSON list of available models from your configured provider.
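If the response follows the usual OpenAI-style shape, a top-level "data" array of objects with "id" fields (an assumption, not confirmed by this guide), you can pull out just the model names; the payload below is a sample built from models mentioned in this guide:

```shell
# Extract model ids from a /v1/models response
# (assumes the OpenAI-style {"data":[{"id":...}]} shape; sample payload below)
response='{"data":[{"id":"nvidia_nim/z-ai/glm4.7"},{"id":"open_router/stepfun/step-3.5-flash:free"}]}'
# Against a live proxy: response=$(curl -s http://localhost:8082/v1/models)
ids=$(printf '%s' "$response" | python3 -c 'import sys, json; print("\n".join(m["id"] for m in json.load(sys.stdin)["data"]))')
echo "$ids"
```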

Run the test suite:

uv run pytest

Start using Claude Code

With the proxy running, launch Claude Code:

ANTHROPIC_BASE_URL="http://localhost:8082" ANTHROPIC_AUTH_TOKEN="freecc" claude

All API calls now route through your proxy to the configured provider.
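To avoid retyping the environment variables on every launch, you could wrap the command above in a small shell function in your shell profile. The function name fcc_claude is hypothetical; the values come from the launch command above:

```shell
# Hypothetical wrapper; add to ~/.bashrc or ~/.zshrc.
# Values are taken from the launch command in this guide.
fcc_claude() {
  ANTHROPIC_BASE_URL="http://localhost:8082" \
  ANTHROPIC_AUTH_TOKEN="freecc" \
  claude "$@"
}
```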