LLM Configuration
Configure any LLM provider - cloud or local
Multi-Provider Support
DocuLume supports OpenAI, Anthropic, Google, and local LLMs. You can switch providers at any time without migrating any data.
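The sections below configure each provider individually; the switch itself amounts to a lookup keyed by the provider name in your settings. A minimal, self-contained sketch of that idea (the `Provider` dataclass and `make_provider` helper here are illustrative stand-ins, not DocuLume's actual API):

```python
from dataclasses import dataclass

# Illustrative stand-in for the real provider classes
# (OpenAIProvider, AnthropicProvider, GoogleProvider, OllamaProvider).
@dataclass
class Provider:
    name: str
    model: str

# Registry keyed by the provider name stored in settings.
REGISTRY = {
    "openai": lambda model: Provider("openai", model),
    "anthropic": lambda model: Provider("anthropic", model),
    "google": lambda model: Provider("google", model),
    "ollama": lambda model: Provider("ollama", model),
}

def make_provider(name: str, model: str) -> Provider:
    """Switching providers is a one-line settings change, not a migration."""
    try:
        return REGISTRY[name](model)
    except KeyError:
        raise ValueError(f"Unknown provider: {name}")

llm = make_provider("anthropic", "claude-3-sonnet-20240229")
print(llm.name)  # anthropic
```

Because documents and embeddings live in DocuLume's own store, changing the value passed to the registry is all a provider switch requires.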
OpenAI
Models: gpt-3.5-turbo, gpt-4, gpt-4-turbo-preview
Setup Instructions:
# Install SDK
pip install openai
# Set API key
export OPENAI_API_KEY=sk-your-key-here
# In DocuLume settings
# Navigate to Settings → API Keys → OpenAI
# Enter your API key and select default model
Code Example:
from app.core.llm.provider import OpenAIProvider

llm = OpenAIProvider(
    api_key="sk-your-key",
    model="gpt-4-turbo-preview"
)

# chat() is a coroutine, so call it inside an async function
response = await llm.chat(messages=[
    {"role": "user", "content": "What is RAG?"}
])
Anthropic Claude
Models: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307
Setup Instructions:
# Install SDK
pip install anthropic
# Set API key
export ANTHROPIC_API_KEY=sk-ant-your-key-here
# In DocuLume settings
# Navigate to Settings → API Keys → Anthropic
# Enter your API key and select default model
Code Example:
from app.core.llm.provider import AnthropicProvider

llm = AnthropicProvider(
    api_key="sk-ant-your-key",
    model="claude-3-sonnet-20240229"
)

response = await llm.chat(messages=[
    {"role": "user", "content": "Explain RAG"}
])
Google Gemini
Models: gemini-pro, gemini-pro-vision
Setup Instructions:
# Install SDK
pip install google-generativeai
# Set API key
export GOOGLE_API_KEY=your-google-api-key
# In DocuLume settings
# Navigate to Settings → API Keys → Google
# Enter your API key and select default model
Code Example:
from app.core.llm.provider import GoogleProvider

llm = GoogleProvider(
    api_key="your-key",
    model="gemini-pro"
)

response = await llm.chat(messages=[
    {"role": "user", "content": "What is vector search?"}
])
Local LLM (Ollama)
Models: llama2, mistral, codellama, or any other model available through Ollama
Setup Instructions:
# Install Ollama
curl https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama2
# Start server
ollama serve # Runs on http://localhost:11434
# Configure DocuLume
export USE_LOCAL_LLM=true
export LOCAL_LLM_URL=http://localhost:11434
export LOCAL_LLM_MODEL=llama2
Code Example:
# Option 1: Custom provider (see full docs)
from app.core.llm.local_provider import OllamaProvider

llm = OllamaProvider(
    base_url="http://localhost:11434",
    model="llama2"
)

# Option 2: Point the OpenAI provider at Ollama's OpenAI-compatible endpoint
from app.core.llm.provider import OpenAIProvider

llm = OpenAIProvider(api_key="not-needed", model="llama2")
llm.client.base_url = "http://localhost:11434/v1"
Provider Comparison
| Provider | Speed | Cost | Context | Privacy |
|---|---|---|---|---|
| OpenAI GPT-4 | ⚡⚡ | $$$ | 128k | ☁️ Cloud |
| Claude 3 | ⚡⚡⚡ | $$ | 200k | ☁️ Cloud |
| Gemini Pro | ⚡⚡⚡ | $ | 32k | ☁️ Cloud |
| Local (Ollama) | ⚡ | Free | Varies | 🔒 Private |
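When privacy is the deciding factor in the comparison above, the `USE_LOCAL_LLM` variables documented in the Ollama section can drive the choice at startup. A hedged sketch, assuming a hypothetical `resolve_llm_settings` helper (not part of DocuLume's documented API):

```python
import os

def resolve_llm_settings() -> dict:
    """Hypothetical helper: pick local vs cloud from environment variables."""
    if os.environ.get("USE_LOCAL_LLM", "").lower() == "true":
        return {
            "provider": "ollama",
            "base_url": os.environ.get("LOCAL_LLM_URL", "http://localhost:11434"),
            "model": os.environ.get("LOCAL_LLM_MODEL", "llama2"),
        }
    # Cloud default shown here is an example, not a DocuLume-mandated choice.
    return {"provider": "openai", "model": "gpt-4-turbo-preview"}

os.environ["USE_LOCAL_LLM"] = "true"
print(resolve_llm_settings()["provider"])  # ollama
```

Keeping the decision in environment variables means the same deployment can run fully private on Ollama or switch to a cloud provider without a code change.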