## Installation issues

### `pip install langchat` fails
### Python version error

LangChat requires Python 3.9+. Check your version with `python --version`; if it is older, install a newer interpreter (for example with `pyenv`).
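A quick runtime guard (plain Python, not LangChat-specific) catches this before any LangChat import fails in a confusing way:

```python
import sys

# LangChat requires Python 3.9 or newer; fail fast with a clear message.
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"LangChat requires Python 3.9+, but this interpreter is "
        f"{sys.version_info.major}.{sys.version_info.minor}"
    )
```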
## API key errors

### `ValueError: OpenAI API key is required. Set OPENAI_API_KEY...`
Your API key is not set. Make sure `OPENAI_API_KEY` is exported in the environment your app runs in.
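For a quick local test you can also set it from Python before initializing anything (a plain-Python sketch, not a LangChat API; the value is a placeholder):

```python
import os

# Sets the key for the current process only; prefer a shell export or a
# secrets manager for anything beyond local experiments.
os.environ["OPENAI_API_KEY"] = "your-key-here"  # placeholder, not a real key
```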
Same error for Pinecone / Supabase
Check these environment variables:

| Service | Variable |
|---|---|
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Gemini | GEMINI_API_KEY or GOOGLE_API_KEY |
| Mistral | MISTRAL_API_KEY |
| Cohere | COHERE_API_KEY |
| Pinecone | PINECONE_API_KEY |
| Supabase | SUPABASE_URL and SUPABASE_KEY |
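A small preflight check built from the table above (the variable names come straight from the table; the helper itself is a plain-Python sketch, not part of LangChat):

```python
import os

REQUIRED = {
    "OpenAI": ["OPENAI_API_KEY"],
    "Pinecone": ["PINECONE_API_KEY"],
    "Supabase": ["SUPABASE_URL", "SUPABASE_KEY"],
}

def missing_vars(service: str) -> list[str]:
    """Return the environment variables a service needs that are unset."""
    return [v for v in REQUIRED.get(service, []) if not os.environ.get(v)]

# Report anything missing before starting the app.
for service in REQUIRED:
    missing = missing_vars(service)
    if missing:
        print(f"{service}: missing {', '.join(missing)}")
```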
## Pinecone issues

### Index not found or connection error
- Verify the index name matches exactly (case-sensitive)
- Confirm the index exists in app.pinecone.io
- Check that `PINECONE_API_KEY` is set and valid
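Since index names are case-sensitive, a near-miss check can save debugging time. This sketch only needs a list of existing index names; with a current Pinecone SDK that list is roughly `Pinecone(api_key=...).list_indexes().names()`, but verify that call against your SDK version:

```python
def check_index_name(wanted: str, existing: list[str]) -> str:
    """Explain why an index lookup failed, flagging case-only mismatches."""
    if wanted in existing:
        return "ok"
    near = [n for n in existing if n.lower() == wanted.lower()]
    if near:
        return f"case mismatch: did you mean {near[0]!r}?"
    return f"not found; existing indexes: {existing}"

print(check_index_name("My-Docs", ["my-docs", "support-kb"]))
```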
### Dimension mismatch error
Your index dimensions don’t match the embedding model. Common cases:
| Model | Required dimensions |
|---|---|
| text-embedding-3-large | 3072 |
| text-embedding-3-small | 1536 |
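The mapping above can be enforced in code before (re)creating an index; the dimension numbers come straight from the table:

```python
EMBEDDING_DIMS = {
    "text-embedding-3-large": 3072,
    "text-embedding-3-small": 1536,
}

def assert_dims_match(model: str, index_dims: int) -> None:
    """Raise early if the Pinecone index was created with the wrong size."""
    expected = EMBEDDING_DIMS.get(model)
    if expected is not None and expected != index_dims:
        raise ValueError(
            f"{model} produces {expected}-dim vectors, "
            f"but the index is {index_dims}-dim"
        )

assert_dims_match("text-embedding-3-small", 1536)  # passes silently
```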
### No relevant results / poor answers
- Verify documents are indexed: check chunk count in your indexing script output
- Try a more specific query
- Check with `verbose=True` to see what context is being retrieved
- Increase `top_n` in the reranker (default: 3)
## Supabase issues

### Tables not created
Tables are created on first use. If they’re missing, run a test `chat()` call, or check that `SUPABASE_KEY` has write permissions (use the `service_role` key for server-side use).
### Row Level Security policy violation

Use the `service_role` key instead of the `anon` key.
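A minimal sketch of the server-side configuration, using the `SUPABASE_URL` / `SUPABASE_KEY` variables from the table earlier (both values are placeholders):

```python
import os

# Server-side only: the service_role key bypasses Row Level Security,
# so it must never be shipped to browsers or client apps.
os.environ["SUPABASE_URL"] = "https://your-project.supabase.co"
os.environ["SUPABASE_KEY"] = "your-service-role-key"  # not the anon key
```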
## Chat issues

### Empty or very short responses
The LLM returned a minimal response. Try:

- A more specific query
- Verify documents are indexed and relevant
- Check with `verbose=True` to see the full prompt being sent
- Try a different model (e.g., `gpt-4o` instead of `gpt-4o-mini`)
### Responses ignore context

The prompt template may be misconfigured. Ensure `{context}` appears in your template.
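One way to see the failure mode: a template without `{context}` formats fine but silently drops the retrieved chunks. The `{context}` placeholder comes from the text above; the `{query}` name here is illustrative:

```python
good = "Answer using only this context:\n{context}\n\nQuestion: {query}"
bad = "Answer the question: {query}"  # retrieval runs, but is discarded

# Simulate what a RAG pipeline does when filling the template.
filled = good.format(context="LangChat supports Pinecone.", query="Which stores?")
print(filled)

assert "{context}" in good
assert "{context}" not in bad  # this template will ignore retrieved context
```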
### Conversation history not working

History is stored per `user_id` + `platform`. If you use different values between calls, each call starts a fresh conversation.
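The keying rule can be illustrated with a plain dict (a sketch of the behavior described above, not LangChat's actual storage, which lives in Supabase):

```python
from collections import defaultdict

# History is stored per (user_id, platform) pair.
history: dict[tuple[str, str], list[str]] = defaultdict(list)

def record(user_id: str, platform: str, message: str) -> None:
    history[(user_id, platform)].append(message)

record("alice", "web", "Hi")
record("alice", "web", "Tell me more")  # same pair -> same conversation
record("alice", "slack", "Hi")          # different platform -> fresh history

print(len(history[("alice", "web")]))    # 2
print(len(history[("alice", "slack")]))  # 1
```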
## Performance issues

### Slow response times
- Reduce `max_chat_history` — fewer messages = smaller prompt = faster LLM call
- Use a faster model — `gpt-4o-mini` is ~5× faster than `gpt-4o`
- Reduce `top_n` in the reranker — default is 3; try 2
- Use `text-embedding-3-small` instead of `text-embedding-3-large` for Pinecone
### High token costs

- Use `gpt-4o-mini` instead of `gpt-4o`
- Reduce `max_chat_history` (each extra exchange costs tokens)
- Reduce `chunk_size` to keep context chunks shorter
- Reduce reranker `top_n` to pass fewer chunks to the LLM
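To see why `max_chat_history` dominates cost, here is a rough estimate using the common "about 4 characters per token" heuristic for English (an approximation, not a real tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def prompt_tokens(system: str, history: list[str], query: str) -> int:
    """Estimate tokens sent on a single call: system + history + query."""
    return (approx_tokens(system)
            + sum(approx_tokens(m) for m in history)
            + approx_tokens(query))

history = ["What is LangChat?", "LangChat is a RAG library."] * 5  # 5 exchanges
full = prompt_tokens("You are a helpful assistant.", history, "Summarize.")
trimmed = prompt_tokens("You are a helpful assistant.", history[-4:], "Summarize.")
print(full, trimmed)  # history is re-sent on every call, so trimming it compounds
```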
## Import errors

### `ModuleNotFoundError: No module named 'langchat'`

The package isn’t installed in the active environment. Run `pip install langchat` with the same interpreter that runs your code.
### ImportError when importing a provider

Some providers have optional dependencies; install the extra that matches the provider you’re importing.
## Still stuck?
- Enable verbose logging: `LangChat(..., verbose=True)`
- Check the GitHub Issues
- Open a new issue with your error message, Python version, and minimal reproduction
