## Install

Requires Python 3.9 or higher.
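A typical install, assuming the package is published on PyPI under the name `langchat` (verify the exact name on the Installation page):

```shell
# Fail fast if the interpreter is too old (LangChat needs 3.9+)
python3 -c 'import sys; assert sys.version_info >= (3, 9), sys.version'

# Assumption: the PyPI distribution name matches the import name
pip install langchat
```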
## Set environment variables

All providers read credentials from the environment automatically. Create a `.env` file:

```env
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=pcsk-...
PINECONE_INDEX=my-index
SUPABASE_URL=https://xxxx.supabase.co
SUPABASE_KEY=eyJhbGc...
```
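The providers read these variables from the process environment. If nothing in your stack loads `.env` files for you (for example, you are not using `python-dotenv`), a minimal stdlib loader is enough. This is an illustrative sketch, not part of LangChat:

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())
```

Call it once, before constructing any providers.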
## Your first chatbot

```python
import asyncio

from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

async def main():
    lc = LangChat(
        llm=OpenAI("gpt-4o-mini"),       # reads OPENAI_API_KEY
        vector_db=Pinecone("my-index"),  # reads PINECONE_API_KEY
        db=Supabase(),                   # reads SUPABASE_URL + SUPABASE_KEY
    )

    response = await lc.chat(
        query="What can you help me with?",
        user_id="alice",
    )

    print(response)         # prints the response text
    print(response.text)    # same text, accessed explicitly
    print(response.status)  # "success" or "error"

asyncio.run(main())
```
## Sync alternative

Don’t want async? Use the sync wrapper:

```python
from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

response = lc.chat_sync(query="Hello!", user_id="alice")
print(response)
```
## What LangChat does automatically

When you call `chat()`, LangChat:

- Reformulates the question as a standalone query (resolves “it”, “that”, etc.)
- Searches your Pinecone index for relevant context
- Reranks results with Flashrank for better precision
- Calls the LLM with context and conversation history
- Saves the exchange to Supabase
- Returns a typed `ChatResponse` object

No configuration is needed for any of this; it all works out of the box.
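The steps above can be sketched as a plain function. This is an illustrative outline of the flow only, not LangChat's internal code; every argument is a placeholder callable standing in for an LLM or provider call:

```python
def chat_pipeline(query, history, reformulate, search, rerank, llm, save):
    """Illustrative outline of the chat() flow with placeholder callables."""
    standalone = reformulate(query, history)  # 1. resolve "it", "that", ...
    docs = search(standalone)                 # 2. vector search (Pinecone)
    docs = rerank(standalone, docs)           # 3. rerank (Flashrank)
    answer = llm(standalone, docs, history)   # 4. LLM call with context + history
    save(query, answer)                       # 5. persist the exchange (Supabase)
    return answer                             # 6. return the response

# Tiny stub run to show the shape of the flow:
log = []
answer = chat_pipeline(
    "What about pricing?", ["Tell me about LangChat"],
    reformulate=lambda q, h: f"{q} (re: {h[-1]})",
    search=lambda q: ["doc-a", "doc-b"],
    rerank=lambda q, docs: docs[:1],
    llm=lambda q, docs, h: f"answer using {docs}",
    save=lambda q, a: log.append((q, a)),
)
```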
## What you get back

`chat()` returns a `ChatResponse` dataclass:

| Field | Type | Description |
|---|---|---|
| `text` | `str` | The AI’s response |
| `status` | `"success"` \| `"error"` | Whether the call succeeded |
| `user_id` | `str` | Echo of the user ID you passed |
| `platform` | `str` | Platform namespace (default: `"default"`) |
| `response_time` | `float` | Latency in seconds |
| `timestamp` | `str` | ISO 8601 UTC timestamp |
| `error` | `str` \| `None` | Error message if `status == "error"` |
```python
response = await lc.chat(query="Hello", user_id="alice")

if response:  # truthy when status == "success"
    print(response.text)
    print(f"Answered in {response.response_time:.2f}s")
else:
    print(f"Error: {response.error}")
```
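The truthiness check above is worth making concrete. Here is a minimal sketch of how a `ChatResponse`-like dataclass could implement that behavior; this is illustrative only, not LangChat's actual class, and the fields are trimmed to three:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatResponse:
    text: str
    status: str  # "success" or "error"
    error: Optional[str] = None

    def __bool__(self) -> bool:
        # `if response:` is True only when the call succeeded
        return self.status == "success"

    def __str__(self) -> str:
        # print(response) shows the response text directly
        return self.text

ok = ChatResponse(text="Hi!", status="success")
failed = ChatResponse(text="", status="error", error="rate limited")
print(bool(ok), bool(failed))  # True False
```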
## Launch as an API server

One call exposes a full REST API with a built-in chat UI:

```python
import uvicorn

from langchat.api import create_app
from langchat.providers import OpenAI, Pinecone, Supabase

app = create_app(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

uvicorn.run(app, host="0.0.0.0", port=8000)
```

Open http://localhost:8000/frontend to use the built-in chat interface.
## Next steps

- **Installation**: virtual environments, uv, and dependency setup
- **Configuration**: all providers and configuration options
- **Document Indexing**: load PDFs, CSVs, and other documents into Pinecone
- **API Reference**: complete method and parameter reference