This guide walks through building a real chatbot from scratch — a support bot that answers questions from your documents.

Step 1 — Install and configure

Install LangChat:

```bash
pip install langchat
```

Then create a `.env` file with your credentials:

```
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=pcsk-...
SUPABASE_URL=https://xxxx.supabase.co
SUPABASE_KEY=eyJhbGc...
```

Step 2 — Index your documents

Before the chatbot can answer questions, it needs to read your content. Put your documents (PDFs, text files, CSVs) in a folder and index them:
```python
# index_docs.py
from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

LangChat.load_env()

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

# Index a single file
result = lc.index("docs/faq.pdf")
print(f"Indexed {result['chunks_indexed']} chunks")

# Or index an entire folder
result = lc.index("docs/")
print(f"Indexed {result['chunks_indexed']} chunks, skipped {result['chunks_skipped']} duplicates")
```

Run this once (or whenever your documents change):

```bash
python index_docs.py
```
LangChat automatically detects duplicate chunks using a content hash, so re-running `index()` on the same files is safe.
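The dedup idea is simple enough to sketch in plain Python. This is illustrative only: LangChat's actual hash function and chunk storage are not documented here, and `chunk_hash`/`index_chunks` are hypothetical helpers, not LangChat APIs.

```python
import hashlib

def chunk_hash(text: str) -> str:
    # Normalize whitespace so trivially reformatted chunks hash the same.
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def index_chunks(chunks: list[str], seen_hashes: set[str]) -> tuple[int, int]:
    """Index only chunks whose content hash has not been seen before."""
    indexed, skipped = 0, 0
    for chunk in chunks:
        h = chunk_hash(chunk)
        if h in seen_hashes:
            skipped += 1
        else:
            seen_hashes.add(h)
            indexed += 1
    return indexed, skipped

seen: set[str] = set()
print(index_chunks(["a b", "a  b", "c"], seen))  # second chunk is a duplicate
```

Because the hash set persists across runs (LangChat keeps it in the database), re-indexing unchanged files only counts skips.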

Step 3 — Build the chatbot

With the documents indexed, wire everything into an interactive chat loop:

```python
# chatbot.py
import asyncio
from langchat import LangChat, ChatResponse
from langchat.providers import OpenAI, Pinecone, Supabase

LangChat.load_env()

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

async def chat_with_user(user_id: str):
    print("Support Bot ready. Type 'quit' to exit.\n")

    while True:
        query = input("You: ").strip()
        if query.lower() == "quit":
            break

        response: ChatResponse = await lc.chat(
            query=query,
            user_id=user_id,
        )

        if response:
            print(f"Bot: {response.text}")
            print(f"     ({response.response_time:.2f}s)\n")
        else:
            print(f"Bot: Sorry, something went wrong. ({response.error})\n")

asyncio.run(chat_with_user("alice"))
```

Step 4 — Handle multiple users

Each `user_id` gets its own conversation history. Use `platform` to separate different applications sharing the same backend:

```python
# User in mobile app
response = await lc.chat(
    query="What's my order status?",
    user_id="user_123",
    platform="mobile-app",
)

# Same user in web app — separate conversation
response = await lc.chat(
    query="What's my order status?",
    user_id="user_123",
    platform="web-app",
)
```
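Conceptually, history separation means each conversation is keyed by the (`user_id`, `platform`) pair. A minimal in-memory sketch of that idea (LangChat actually persists history in Supabase; `histories` and `append_message` are hypothetical):

```python
from collections import defaultdict

# Each (user_id, platform) pair owns an independent message list.
histories: dict[tuple[str, str], list[str]] = defaultdict(list)

def append_message(user_id: str, platform: str, message: str) -> list[str]:
    """Record a message in the conversation belonging to this user and platform."""
    key = (user_id, platform)
    histories[key].append(message)
    return histories[key]

# Same user, two platforms: two separate conversations.
append_message("user_123", "mobile-app", "What's my order status?")
append_message("user_123", "web-app", "What's my order status?")
```

Because the key includes the platform, the bot never mixes mobile context into the web conversation, even for the same user.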

Step 5 — Add a custom persona

Make the bot speak in your brand voice by customizing the prompt:

```python
lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
    prompt_template="""You are Aria, a friendly support agent for Acme Corp.
Always be polite and professional. If you don't know the answer, say so clearly.

Context from our knowledge base:
{context}

Conversation so far:
{chat_history}

Customer question: {question}

Aria's response:""",
)
```

The three template variables `{context}`, `{chat_history}`, and `{question}` are filled in automatically.
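The substitution itself is ordinary placeholder filling. Here is a sketch using Python's `str.format` with made-up context values; whether LangChat uses `str.format` internally is an assumption:

```python
prompt_template = """You are Aria, a friendly support agent for Acme Corp.

Context from our knowledge base:
{context}

Customer question: {question}

Aria's response:"""

# At chat time, the retrieved chunks and the user's query fill the slots.
prompt = prompt_template.format(
    context="Refunds are processed within 5 business days.",
    question="How long do refunds take?",
)
print(prompt)
```

`{context}` receives the top-matching chunks from your indexed documents, `{chat_history}` the prior turns for this `user_id`, and `{question}` the user's latest message.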

Step 6 — Deploy as an API

Turn your chatbot into a production REST API in one step:

```python
# server.py
from langchat.api import create_app
from langchat.providers import OpenAI, Pinecone, Supabase
import uvicorn

app = create_app(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

```bash
python server.py
```
Your API exposes:

| Endpoint | Description |
| --- | --- |
| `POST /chat` | Send a message |
| `GET /health` | Health check |
| `GET /frontend` | Built-in chat UI |
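With the server running, clients talk to the bot over plain HTTP. A sketch of building the request body for `POST /chat`: the field names `query` and `user_id` are assumed to mirror the `lc.chat()` keyword arguments, since the exact JSON schema is not shown above.

```python
import json

# Request body for POST /chat (assumed schema).
payload = {
    "query": "What's my order status?",
    "user_id": "user_123",
}
body = json.dumps(payload)
print(body)

# Equivalent call from the command line (same assumed schema):
#   curl -X POST http://localhost:8000/chat \
#        -H "Content-Type: application/json" \
#        -d "$BODY"
```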

What’s next

- **Configuration**: switch LLM providers, configure Pinecone namespaces, tune history length
- **Custom Prompts**: full guide to prompt templating and standalone question customization
- **Document Indexing**: supported file formats, chunking strategy, namespace organization
- **API Server**: production server setup, CORS, Docker, environment configuration