## Overview
The OpenAI provider gives you access to OpenAI’s language models with automatic key rotation and fault tolerance.
## Features
- ✅ Multiple API Keys - Automatic rotation
- ✅ Fault Tolerance - Automatic retry on failures
- ✅ Error Recovery - Handles rate limits gracefully
- ✅ Easy Setup - Simple initialization
## Basic Usage
```python
from langchat.llm import OpenAI

# Single key
llm = OpenAI(api_key="sk-...", model="gpt-4o-mini", temperature=0.7)

# Multiple keys (automatic rotation)
llm = OpenAI(api_keys=["sk-1", "sk-2", "sk-3"], model="gpt-4o-mini")
```
## Configuration Options
```python
llm = OpenAI(
    api_key="sk-...",            # Single key
    # OR
    api_keys=["sk-1", "sk-2"],   # Multiple keys
    model="gpt-4o-mini",         # Model name
    temperature=0.7,             # 0.0-2.0
    max_tokens=1000              # Optional
)
```
## Available Models
- `gpt-4o-mini` (recommended)
- `gpt-4o`
- `gpt-4-turbo`
- `gpt-3.5-turbo`
## Multiple API Keys
Use multiple keys for better reliability:
```python
llm = OpenAI(
    api_keys=[
        "sk-org1-key1",
        "sk-org1-key2",
        "sk-org2-key1"
    ],
    model="gpt-4o-mini"
)
```
Benefits:
- Handle rate limits automatically
- Distribute load
- Improve fault tolerance
The provider automatically rotates between keys on failures or rate limits.
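The docs don't show langchat's internal rotation logic, but the round-robin behavior described above can be sketched in a few lines. The `KeyRotator` class below is purely illustrative (it is not part of langchat's API); it cycles through the key pool whenever the current key fails:

```python
from itertools import cycle

class KeyRotator:
    """Illustrative round-robin rotation over a pool of API keys."""

    def __init__(self, keys):
        self._pool = cycle(keys)
        self.current = next(self._pool)  # start with the first key

    def rotate(self):
        # Called when the current key hits a rate limit or errors out;
        # advances to the next key, wrapping back to the first at the end.
        self.current = next(self._pool)
        return self.current

rotator = KeyRotator(["sk-1", "sk-2", "sk-3"])
rotator.rotate()  # -> "sk-2"
rotator.rotate()  # -> "sk-3"
rotator.rotate()  # wraps around -> "sk-1"
```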
## Using with LangChat
```python
from langchat import LangChat
from langchat.llm import OpenAI
from langchat.vector_db import Pinecone
from langchat.database import Supabase

llm = OpenAI(api_key="sk-...", model="gpt-4o-mini")
vector_db = Pinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

ai = LangChat(llm=llm, vector_db=vector_db, db=db)
```
## Error Handling
The provider handles errors automatically:
- Rate Limits - Rotates to next key
- API Errors - Retries with different key
- Network Errors - Automatic retry
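To make the retry-and-rotate pattern concrete, here is a minimal standalone sketch (not langchat's actual implementation — `RateLimitError`, `call_with_retry`, and `fake_request` are all hypothetical names for illustration): each attempt uses the next key in the pool, with exponential backoff between attempts.

```python
import time

class RateLimitError(Exception):
    """Stand-in for a provider rate-limit error."""

def call_with_retry(keys, request_fn, max_attempts=3, backoff=0.0):
    """Try the request with each key in turn; rotate on rate limits."""
    last_error = None
    for attempt in range(max_attempts):
        key = keys[attempt % len(keys)]  # rotate through the key pool
        try:
            return request_fn(key)
        except RateLimitError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error  # every attempt failed

# Simulated request: the first key is rate-limited, the second succeeds.
def fake_request(key):
    if key == "sk-1":
        raise RateLimitError("rate limited")
    return f"response via {key}"

call_with_retry(["sk-1", "sk-2"], fake_request)  # -> "response via sk-2"
```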
## Best Practices
### 1. Use Multiple Keys
```python
# ✅ Good: Multiple keys
llm = OpenAI(api_keys=["key1", "key2", "key3"])

# ⚠️ OK: Single key
llm = OpenAI(api_key="key1")
```
### 2. Store Keys Securely
```python
import os

# ✅ Good: Read the key from the environment
llm = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# ❌ Bad: Hardcoded key in source code
llm = OpenAI(api_key="sk-abc123...")
```
## Next Steps
Built with ❤️ by NeuroBrain