What is LangChat?
LangChat is a production-ready conversational AI library that simplifies building intelligent chatbots with vector search capabilities. Instead of juggling multiple libraries, API integrations, vector databases, and chat history management, LangChat provides a unified, modular architecture that handles all of these concerns out of the box.

LangChat combines the power of large language models (LLMs), vector search, and conversation management in one easy-to-use library.
Why LangChat?
Building production-ready conversational AI systems is complex. You need to:
- Integrate LLM APIs (OpenAI, Anthropic, etc.)
- Manage Vector Databases (Pinecone, Weaviate, etc.)
- Handle Chat History (conversation context and memory)
- Implement Reranking (improve search result relevance)
- Track Metrics (response times, errors, feedback)
- Rotate API Keys (handle rate limits and failures)
Installation
Quick Install
The fastest way to get started is to install from PyPI. LangChat requires Python 3.8 or higher.
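Assuming the package is published on PyPI under the name `langchat` (the exact name is an assumption; check the project page), the quickest install is:

```shell
# Install the latest release from PyPI (package name is an assumption)
pip install langchat
```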
From PyPI (Recommended)
For production use, pin a specific version.

From Source
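A source install might look like the following (the repository URL is an assumption; substitute the real one):

```shell
# Clone the repository and install it into the current environment
git clone https://github.com/NeuroBrain/langchat.git
cd langchat
pip install .
```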
Clone the repository and install it locally.

Development Installation
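Assuming the project defines a `dev` extra (the extra's name is an assumption), an editable install with development dependencies might be:

```shell
# Editable install so local changes take effect immediately
pip install -e ".[dev]"
```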
Install LangChat with all development dependencies.

Virtual Environment (Recommended)
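A typical setup with Python's built-in `venv` module (the package name on the last line is an assumption):

```shell
# Create and activate an isolated environment, then install into it
python -m venv .venv
source .venv/bin/activate    # on Windows: .venv\Scripts\activate
pip install langchat
```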
Always work inside a virtual environment.

System Requirements
Python Version
- Minimum: Python 3.8
- Recommended: Python 3.10+
Operating System
LangChat works on:
- ✅ Linux
- ✅ macOS
- ✅ Windows
Memory
- Minimum: 2GB RAM
- Recommended: 4GB+ RAM (for reranker models)
The Flashrank reranker downloads models (~50MB) on first use. Make sure you have sufficient disk space.
Verify Installation
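A minimal smoke test, assuming the package imports as `langchat` and exposes a `__version__` attribute (both names are assumptions):

```python
# Import the package and print its version to confirm the install worked
import langchat

print(langchat.__version__)
```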
Test your installation by importing the package.

Environment Setup
Required Environment Variables
Create a .env file:
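For example, using the variable names from the Common Issues section below (`PINECONE_API_KEY` is an additional assumption; all values shown are placeholders):

```
OPENAI_API_KEY=sk-your-key-here
PINECONE_API_KEY=your-pinecone-key
PINECONE_INDEX_NAME=your-index-name
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-supabase-key
```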
Load Environment Variables
Using python-dotenv:
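python-dotenv loads the file into the process environment before LangChat reads it; a minimal sketch:

```python
# Load variables from .env into os.environ before initializing LangChat
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory by default
print(os.getenv("PINECONE_INDEX_NAME"))
```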
Your First Chatbot (5 Minutes)
Step 1: Install LangChat
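Assuming the PyPI package name is `langchat`:

```shell
pip install langchat
```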
Step 2: Configure Your Environment
Create a .env file or set environment variables:
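On macOS/Linux you can export them directly in the shell. The variable names follow the Common Issues section below; `PINECONE_API_KEY` is an assumption, and all values are placeholders:

```shell
export OPENAI_API_KEY="sk-your-key-here"
export PINECONE_API_KEY="your-pinecone-key"
export PINECONE_INDEX_NAME="your-index-name"
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_KEY="your-supabase-key"
```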
Make sure your Pinecone index is already created and your Supabase project is set up.
Step 3: Write Your First Chatbot
Create a file main.py:
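The original snippet is not reproduced here, but a minimal sketch might look like the following. The `LangChat` class name matches the library's name; the zero-argument constructor (reading configuration from the environment), the `chat` method, and its parameters are assumptions, so check the API Reference for the real signatures:

```python
# main.py - minimal LangChat chatbot (method and parameter names are illustrative)
from langchat import LangChat

# Reads OPENAI_API_KEY, PINECONE_INDEX_NAME, SUPABASE_URL, etc. from the environment
bot = LangChat()

# Ask a question; chat history is persisted per user session
reply = bot.chat("What can you help me with?", user_id="demo-user")
print(reply)
```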
Step 4: Run It!
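With your environment variables set, run the script:

```shell
python main.py
```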
🎉 Congratulations! You just built your first LangChat application!
Using LangChat as an API Server
LangChat can also run as a FastAPI server with an auto-generated frontend interface.

Create API Server
Create server.py:
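A sketch of what server.py might contain, assuming LangChat exposes a FastAPI app factory (`create_app` and its module path are assumptions; only the endpoint paths are documented):

```python
# server.py - serve LangChat over HTTP (factory name and module are hypothetical)
import uvicorn

from langchat import LangChat
from langchat.server import create_app  # hypothetical module and factory

bot = LangChat()
app = create_app(bot)  # exposes /chat, /health, and /frontend

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```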
Run the Server
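Start it directly, or with uvicorn's reloader during development (assuming the app object is named `app` in server.py):

```shell
python server.py
# or, with auto-reload for development:
uvicorn server:app --reload --port 8000
```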
- Frontend Interface: visit http://localhost:8000/frontend
- API Endpoint: POST to http://localhost:8000/chat
- Health Check: GET http://localhost:8000/health
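For example, to exercise the chat endpoint with curl (the JSON field names are assumptions; see the API Reference for the real request schema):

```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "user_id": "demo-user"}'
```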
Configuration Options
LangChat supports configuration in multiple ways:
1. Environment Variables (Recommended)
2. Direct Configuration
3. Hybrid Approach
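As a sketch of the three approaches, assuming the constructor accepts keyword overrides (every parameter name below is illustrative, not the confirmed API):

```python
from langchat import LangChat

# 1. Environment variables only: the constructor reads everything from os.environ
bot = LangChat()

# 2. Direct configuration: pass values explicitly
bot = LangChat(
    openai_api_keys=["sk-key-1", "sk-key-2"],  # pool used for key rotation
    pinecone_index_name="my-index",
    supabase_url="https://your-project.supabase.co",
    supabase_key="your-supabase-key",
)

# 3. Hybrid: rely on the environment, overriding only what you need
bot = LangChat(pinecone_index_name="staging-index")
```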
What Happens Under the Hood?
When you initialize LangChat, it automatically:
- Initializes Adapters: Sets up OpenAI, Pinecone, Supabase, and Flashrank
- Creates Database Tables: Sets up chat history, metrics, and feedback tables
- Downloads Reranker Models: Automatically downloads Flashrank reranker models
- Sets Up Sessions: Prepares user session management
All initialization happens automatically - you don’t need to configure each component separately!
Key Features
⚡ Fast Development
Get a production-ready chatbot running in minutes, not months
🔧 Modular Architecture
Mix and match components as needed - highly customizable
🔄 Automatic API Key Rotation
Built-in fault tolerance with multiple OpenAI API keys
📊 Production Ready
Includes chat history, metrics, and feedback tracking
Next Steps
Now that you have LangChat installed and running:
- Motivation - Understand why LangChat exists
- Configuration Guide - Customize LangChat for your needs
- Document Indexing - Build your knowledge base
- Examples - See more complex use cases
- Production Deployment - Deploy to production
Common Issues
Issue: “Supabase URL and key must be provided”
Solution: Make sure you’ve set the SUPABASE_URL and SUPABASE_KEY environment variables.
Issue: “Pinecone index name must be provided”
Solution: Create a Pinecone index first, then set PINECONE_INDEX_NAME.
Issue: “OpenAI API keys must be provided”
Solution: Set the OPENAI_API_KEYS or OPENAI_API_KEY environment variable.
Issue: Reranker model download fails
Solution: The reranker model is downloaded automatically to rerank_models/. Make sure you have write permissions.
Need Help?
- Check our Troubleshooting Guide
- Open an issue on GitHub
- Review the API Reference
Built with ❤️ by NeuroBrain