This example builds Voyager, a complete travel assistant that answers questions about destinations, bookings, and travel tips from a knowledge base of travel guides.

Project structure

travel-bot/
├── .env
├── content/
│   ├── destinations.pdf      # destination guides
│   ├── booking-guide.md      # how to book
│   ├── travel-tips.txt       # general travel advice
│   └── faqs.md               # frequently asked questions
├── build_index.py            # one-time indexing script
└── server.py                 # API server

Step 1 — Environment setup

.env
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=pcsk-...
SUPABASE_URL=https://xxxx.supabase.co
SUPABASE_KEY=eyJhbGc...

Step 2 — Index travel documents

# build_index.py
from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

LangChat.load_env()

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("travel-index"),
    db=Supabase(),
)

result = lc.index(
    "content/",
    chunk_size=800,
    chunk_overlap=150,
    namespace="travel",
)

print(f"✓ Indexed {result['chunks_indexed']} chunks from {result['files_processed']} files")

python build_index.py
# ✓ Indexed 342 chunks from 4 files
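
The chunk_size and chunk_overlap settings control how each document is split before embedding. As an illustration of the idea (a simple character-based sliding window, not langchat's actual splitter), overlapping chunks repeat the tail of the previous chunk so that sentences falling on a boundary remain retrievable:

```python
def chunk_text(text: str, chunk_size: int = 800, chunk_overlap: int = 150) -> list[str]:
    """Split text into fixed-size windows; each window starts
    `chunk_size - chunk_overlap` characters after the previous one,
    so consecutive chunks share `chunk_overlap` characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 2000-character document with chunk_size=800, overlap=150
# yields windows starting at 0, 650, 1300, 1950:
chunks = chunk_text("".join(str(i % 10) for i in range(2000)))
print(len(chunks))  # → 4
```

Larger overlaps improve recall for queries that span chunk boundaries, at the cost of indexing more (partially duplicated) chunks.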

Step 3 — Build the API server

# server.py
from langchat import LangChat
from langchat.api import create_app
from langchat.providers import OpenAI, Pinecone, Supabase
import uvicorn

LangChat.load_env()

VOYAGER_PROMPT = """You are Voyager, an expert travel assistant with deep knowledge of
destinations, travel logistics, culture, and local tips.

Use the travel guides below to give accurate, inspiring, and practical advice.
Be enthusiastic and helpful. If the information isn't in the guides, draw on your
general knowledge but make clear you're doing so.

Travel knowledge:
{context}

Conversation history:
{chat_history}

Traveler: {question}
Voyager:"""

app = create_app(
    llm=OpenAI("gpt-4o-mini", temperature=0.8),
    vector_db=Pinecone("travel-index"),
    db=Supabase(),
    prompt_template=VOYAGER_PROMPT,
    max_chat_history=15,
)

if __name__ == "__main__":
    # reload=True requires the app as an import string, not an object
    uvicorn.run("server:app", host="0.0.0.0", port=8000, reload=True)

python server.py

Step 4 — Test it

# Ask about a destination
curl -X POST http://localhost:8000/chat \
  -F "query=What's the best time to visit Japan?" \
  -F "userId=traveler_001" \
  -F "platform=web"

# Follow-up question (references previous answer)
curl -X POST http://localhost:8000/chat \
  -F "query=What about cherry blossom season?" \
  -F "userId=traveler_001" \
  -F "platform=web"
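
The same endpoint can be exercised from Python without curl. The sketch below builds the POST with only the standard library; it sends the fields as `application/x-www-form-urlencoded`, which form-parsing ASGI servers generally accept alongside the multipart encoding curl's `-F` produces (an assumption about the server, not something the docs above state).

```python
from urllib import parse, request

def chat_request(query: str, user_id: str, platform: str = "web",
                 base_url: str = "http://localhost:8000") -> request.Request:
    """Build a POST to /chat with the same fields as the curl examples."""
    body = parse.urlencode(
        {"query": query, "userId": user_id, "platform": platform}
    ).encode()
    # urlopen adds Content-Type: application/x-www-form-urlencoded by default
    return request.Request(f"{base_url}/chat", data=body, method="POST")

# With the server running:
# reply = request.urlopen(
#     chat_request("What's the best time to visit Japan?", "traveler_001"))
# print(reply.read().decode())
```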

Adding a Python client

# client.py
import asyncio
from langchat import LangChat, ChatResponse
from langchat.providers import OpenAI, Pinecone, Supabase

LangChat.load_env()

VOYAGER_PROMPT = """..."""  # same prompt as above

lc = LangChat(
    llm=OpenAI("gpt-4o-mini", temperature=0.8),
    vector_db=Pinecone("travel-index"),
    db=Supabase(),
    prompt_template=VOYAGER_PROMPT,
)

async def main():
    user_id = "traveler_001"
    print("🌍 Voyager — Your AI Travel Assistant\n")

    questions = [
        "What's the best time to visit Kyoto?",
        "What should I pack for that trip?",
        "How far is it from Tokyo to Kyoto?",
        "What's the cheapest way to get there?",
    ]

    for q in questions:
        print(f"You: {q}")
        response: ChatResponse = await lc.chat(query=q, user_id=user_id)
        print(f"Voyager: {response.text}")
        print(f"         [{response.response_time:.2f}s]\n")

if __name__ == "__main__":
    asyncio.run(main())

What this demonstrates

  • Document indexing — PDFs, Markdown, and text files indexed together
  • Custom persona — Voyager has a distinct, enthusiastic personality
  • Conversation memory — follow-up questions about “that trip” work correctly
  • Platform separation — web and mobile users have separate conversation threads
  • API deployment — the same chatbot as both a Python library and a REST API