## Quick Deploy

Deploy your chatbot as an API server:
```python
# main.py (the Dockerfile below runs this module as main:app)
from langchat.api.app import create_app
from langchat.llm import OpenAI
from langchat.vector_db import Pinecone
from langchat.database import Supabase
import uvicorn

# Set up providers
llm = OpenAI(api_key="sk-...", model="gpt-4o-mini")
vector_db = Pinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

# Create the server app
app = create_app(llm=llm, vector_db=vector_db, db=db)

# Run
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
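Once the server is up, clients talk to it over plain HTTP. A minimal stdlib-only client sketch; note that the `/chat` path and the `{"message": ...}` payload shape are assumptions for illustration, not a documented langchat route, so check your deployment's actual API:

```python
import json
import urllib.request

def send_chat(base_url: str, message: str) -> dict:
    """POST a chat message to the server and return the decoded JSON reply.

    The /chat route and payload shape are assumptions; adjust to match
    the routes your create_app() instance actually exposes.
    """
    payload = json.dumps({"message": message}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```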
## Docker Deployment

### Dockerfile
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Expose port
EXPOSE 8000

# Run
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
### requirements.txt

```
langchat>=2.0.0
uvicorn[standard]>=0.34.3
```
### Build and Run

```bash
# Build
docker build -t langchat-chatbot .

# Run
docker run -p 8000:8000 \
  -e OPENAI_API_KEY="sk-..." \
  -e PINECONE_API_KEY="..." \
  -e PINECONE_INDEX_NAME="..." \
  -e SUPABASE_URL="https://..." \
  -e SUPABASE_KEY="..." \
  langchat-chatbot
```
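After `docker run`, it is worth confirming the container answers on its health endpoint (served at `/health`; see Best Practices) before routing traffic to it. A minimal readiness-polling sketch using only the standard library:

```python
import time
import urllib.request
import urllib.error

def wait_for_health(url: str, timeout: float = 30.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container not up yet (or endpoint erroring); retry
        time.sleep(0.5)
    return False
```

For example, `wait_for_health("http://localhost:8000/health")` blocks until the container just started above is ready.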
## Cloud Deployment

### Google Cloud Run
- Build and push image
- Deploy to Cloud Run
- Set environment variables
- Done!
The same container also runs on:
- AWS Lambda
- Azure Container Instances
- Heroku
- Railway
- Any Docker-compatible platform
## Environment Variables

Set these in production:
```
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=pcsk-...
PINECONE_INDEX_NAME=your-index
SUPABASE_URL=https://...supabase.co
SUPABASE_KEY=eyJ...
PORT=8000
```
Never commit API keys to code. Always use environment variables or secrets management.
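Validating these variables at startup makes a misconfigured container fail fast instead of erroring mid-request. A small sketch of that idea; the `load_config` helper is ours for illustration, not part of langchat:

```python
import os

def load_config(env=os.environ) -> dict:
    """Collect required settings from the environment, failing fast if any are missing."""
    required = [
        "OPENAI_API_KEY",
        "PINECONE_API_KEY",
        "PINECONE_INDEX_NAME",
        "SUPABASE_URL",
        "SUPABASE_KEY",
    ]
    missing = [name for name in required if name not in env]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    config = {name: env[name] for name in required}
    # PORT is optional and defaults to 8000, matching the examples above
    config["PORT"] = int(env.get("PORT", "8000"))
    return config
```

The resulting dict can feed the provider constructors from the Quick Deploy example, e.g. `OpenAI(api_key=config["OPENAI_API_KEY"], ...)`, keeping all secrets out of the source tree.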
## Best Practices
### 1. Use Multiple API Keys

```python
llm = OpenAI(api_keys=["key1", "key2", "key3"])
```
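Passing several keys lets requests be spread across them, which helps with per-key rate limits. Conceptually the rotation is just a round-robin cycle; a toy sketch of the idea (langchat handles this internally, and `KeyRotator` here is illustrative only):

```python
from itertools import cycle

class KeyRotator:
    """Round-robin over a list of API keys (illustrative sketch)."""

    def __init__(self, keys: list[str]):
        if not keys:
            raise ValueError("at least one key is required")
        self._cycle = cycle(keys)

    def next_key(self) -> str:
        """Return the next key in round-robin order."""
        return next(self._cycle)
```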
### 2. Monitor Performance

Track response times and errors in production.

### 3. Set Up Health Checks

The server exposes a health endpoint at `/health`.

### 4. Use a Production Database

Ensure Supabase is configured for production use.
## Next Steps
Built with ❤️ by NeuroBrain