
Prerequisites

Before you begin, make sure you have:
  • Python installed (with pip)
  • An OpenAI API key
  • A Pinecone account with an index created
  • A Supabase project (URL and key)

Quick Start (5 Minutes)

Step 1: Install LangChat

Install LangChat using pip:
pip install langchat
Or install from source:
git clone https://github.com/neurobrains/langchat.git
cd langchat
pip install -e .

Step 2: Configure Your Environment

Create a .env file or set environment variables:
export OPENAI_API_KEYS="your-key-1,your-key-2"  # Multiple keys for rotation
export PINECONE_API_KEY="your-pinecone-key"
export PINECONE_INDEX_NAME="your-index-name"
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_KEY="your-supabase-key"
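Note that OPENAI_API_KEYS takes a comma-separated list so LangChat can rotate between keys. As a rough illustration of the convention (the actual parsing happens inside LangChatConfig.from_env(); this standalone sketch just shows how such a list splits):

```python
import os

# Standalone illustration only -- LangChatConfig.from_env() does this
# parsing internally. The placeholder keys match the export above.
os.environ.setdefault("OPENAI_API_KEYS", "your-key-1,your-key-2")

keys = [k.strip() for k in os.environ["OPENAI_API_KEYS"].split(",") if k.strip()]
print(keys)  # ['your-key-1', 'your-key-2']
```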

Step 3: Write Your First Chatbot

Create a file main.py:
import asyncio
from langchat import LangChat, LangChatConfig

async def main():
    # Load configuration from environment variables
    config = LangChatConfig.from_env()
    
    # Initialize LangChat
    langchat = LangChat(config=config)
    
    # Chat with the AI
    # Note: Response is automatically displayed in a Rich panel box in the console
    # You don't need to print it manually
    result = await langchat.chat(
        query="Hello! What can you help me with?",
        user_id="user123",
        domain="general"
    )

if __name__ == "__main__":
    asyncio.run(main())

Step 4: Run It!

python main.py

Using LangChat as an API Server

LangChat can also run as a FastAPI server with an auto-generated frontend interface.

Create API Server

Create server.py:
from langchat.api.app import create_app
from langchat.config import LangChatConfig
import uvicorn

# Create configuration
config = LangChatConfig.from_env()

# Create FastAPI app (auto-generates interface and Dockerfile)
app = create_app(
    config=config,
    auto_generate_interface=True,
    auto_generate_docker=True
)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=config.server_port)

Run the Server

python server.py
Now you can:
  • Frontend Interface: Visit http://localhost:8000/frontend
  • API Endpoint: POST to http://localhost:8000/chat
  • Health Check: GET http://localhost:8000/health
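A request to the chat endpoint can be sketched as follows. The payload fields (query, user_id, domain) are assumed to mirror the arguments of langchat.chat() shown earlier; check the auto-generated API docs for the exact schema.

```python
import json
import urllib.request

# Hypothetical request body -- field names assumed to match the
# arguments of langchat.chat() from Step 3.
payload = {
    "query": "Hello! What can you help me with?",
    "user_id": "user123",
    "domain": "general",
}

req = urllib.request.Request(
    "http://localhost:8000/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```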

Configuration Options

LangChat supports configuration in three ways:

1. Environment Variables

config = LangChatConfig.from_env()

2. Direct Configuration

config = LangChatConfig(
    openai_api_keys=["sk-...", "sk-..."],
    openai_model="gpt-4o-mini",
    openai_temperature=1.0,
    pinecone_api_key="pcsk-...",
    pinecone_index_name="my-index",
    supabase_url="https://xxxxx.supabase.co",
    supabase_key="eyJhbGc...",
    server_port=8000
)

3. Hybrid Approach

config = LangChatConfig.from_env()
# Override specific settings
config.openai_model = "gpt-4"
config.server_port = 8080

What Happens Under the Hood?

When you initialize LangChat, it automatically:
  1. Initializes Adapters: Sets up OpenAI, Pinecone, Supabase, and Flashrank
  2. Creates Database Tables: Sets up chat history, metrics, and feedback tables
  3. Downloads Reranker Models: Automatically downloads Flashrank reranker models
  4. Sets Up Sessions: Prepares user session management
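Conceptually, the initialization sequence above resembles this sketch. The class and method names here are illustrative stand-ins, not LangChat's real internals:

```python
# Illustrative sketch of the four init steps; all names are assumptions,
# not the actual LangChat implementation.
class InitSketch:
    def __init__(self, config):
        self.steps = []
        self._init_adapters()      # 1. OpenAI, Pinecone, Supabase, Flashrank
        self._create_tables()      # 2. chat history, metrics, feedback
        self._download_reranker()  # 3. Flashrank model -> rerank_models/
        self._setup_sessions()     # 4. user session management

    def _init_adapters(self):
        self.steps.append("adapters")

    def _create_tables(self):
        self.steps.append("tables")

    def _download_reranker(self):
        self.steps.append("reranker")

    def _setup_sessions(self):
        self.steps.append("sessions")

bot = InitSketch(config=None)
print(bot.steps)  # ['adapters', 'tables', 'reranker', 'sessions']
```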

Next Steps

Now that you have LangChat running, explore the rest of the documentation:

Common Issues

Issue: “Supabase URL and key must be provided”

Solution: Make sure you’ve set SUPABASE_URL and SUPABASE_KEY environment variables.

Issue: “Pinecone index name must be provided”

Solution: Create a Pinecone index first, then set PINECONE_INDEX_NAME.

Issue: “OpenAI API keys must be provided”

Solution: Set OPENAI_API_KEYS or OPENAI_API_KEY environment variable.

Issue: Reranker model download fails

Solution: The reranker model is downloaded automatically to rerank_models/. Make sure you have write permissions.
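To quickly check whether the download target is writable, a short permission probe like the following can help (the rerank_models/ path comes from the note above; if it doesn't exist yet, the check falls back to the current directory, where the download would create it):

```python
import os

# Probe write permission for the reranker download location.
target = "rerank_models"
check = target if os.path.isdir(target) else "."
writable = os.access(check, os.W_OK)
print("reranker download location writable:", writable)
```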

Need Help?


Ready to build something amazing? Let’s continue with Configuration!