## Basic Server

Run LangChat as a FastAPI server:
```python
from langchat.api.app import create_app
from langchat.llm import OpenAI
from langchat.vector_db import Pinecone
from langchat.database import Supabase
import uvicorn

# Set up providers
llm = OpenAI(api_key="sk-...", model="gpt-4o-mini")
vector_db = Pinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

# Create the server
app = create_app(llm=llm, vector_db=vector_db, db=db)

# Run
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
## API Endpoints

### POST /chat

Send a message:
```bash
curl -X POST http://localhost:8000/chat \
  -d "query=Hello!" \
  -d "userId=user123" \
  -d "domain=default"
```
Response:

```json
{
  "response": "Hello! How can I help you?",
  "status": "success",
  "response_time": 1.23
}
```
### GET /health

Check server status:

```bash
curl http://localhost:8000/health
```
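In scripts or deployment checks, it can be handy to poll /health until the server is up before sending traffic. A small stdlib-only sketch, assuming only that /health returns HTTP 200 once the server is ready (the function name and parameters are illustrative):

```python
import time
from urllib.error import URLError
from urllib.request import urlopen


def wait_for_health(base_url, timeout=30.0, interval=1.0):
    """Poll GET /health until it returns 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urlopen(f"{base_url}/health", timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(interval)
    return False


# Usage:
#   if wait_for_health("http://localhost:8000"):
#       ...start sending /chat requests...
```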
### GET /frontend

The web interface is served automatically at http://localhost:8000/frontend whenever the server is running; no extra setup is needed.
## Next Steps

Built with ❤️ by NeuroBrain