Class: LangChat

Main interface for using LangChat.

Constructor

LangChat(
    llm,
    vector_db,
    db,
    reranker=None,
    prompt_template=None,
    standalone_question_prompt=None,
    verbose=False,
    max_chat_history=20
)
Parameters:
  llm (Any, required): LLM provider instance (e.g., OpenAI, Anthropic, Gemini)
  vector_db (Any, required): Vector database provider (e.g., Pinecone)
  db (Any, required): Database provider (e.g., Supabase)
  reranker (Any, default: None): Optional reranker provider; when None, Flashrank is used
  prompt_template (str | None, default: None): Custom system prompt template
  standalone_question_prompt (str | None, default: None): Custom standalone question prompt
  verbose (bool, default: False): Enable verbose logging
  max_chat_history (int, default: 20): Maximum number of chat history messages to keep
Example:
from langchat import LangChat
from langchat.llm import OpenAI
from langchat.vector_db import Pinecone
from langchat.database import Supabase

# Setup providers
llm = OpenAI(api_key="sk-...", model="gpt-4o-mini")
vector_db = Pinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

# Create LangChat
ai = LangChat(llm=llm, vector_db=vector_db, db=db)

Methods

async chat()

Process a chat query.
async def chat(
    self,
    query: str,
    user_id: str,
    domain: str = "default"
) -> dict
Parameters:
  query (str, required): User query text
  user_id (str, required): User identifier for session management
  domain (str, default: "default"): Domain for conversation context
Returns: dict with:
  • response (str): AI response
  • status (str): "success" or "error"
  • response_time (float): Time in seconds
Example:
import asyncio

async def main():
    # chat() is a coroutine, so it must run inside an event loop
    result = await ai.chat(
        query="What is Python?",
        user_id="user123",
        domain="default"
    )
    print(result["response"])

asyncio.run(main())

chat_sync()

Synchronous version of chat(); it takes the same parameters and returns the same dict.
def chat_sync(
    self,
    query: str,
    user_id: str,
    domain: str = "default"
) -> dict
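Example (a minimal sketch; since chat_sync() mirrors chat(), the same placeholder values and return keys are assumed):
result = ai.chat_sync(
    query="What is Python?",
    user_id="user123",
    domain="default"
)
print(result["response"])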

load_and_index_documents()

Load a document from file_path and index it to Pinecone.
def load_and_index_documents(
    self,
    file_path: str,
    chunk_size: int = 1000,
    chunk_overlap: int = 200,
    namespace: Optional[str] = None,
    prevent_duplicates: bool = True
) -> dict
Returns: dict with:
  • chunks_indexed (int): Number of chunks indexed
  • chunks_skipped (int): Number of duplicates skipped
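Example (a minimal sketch; the file path and namespace are placeholders, and the keyword values simply restate the defaults from the signature above):
result = ai.load_and_index_documents(
    file_path="docs/guide.pdf",
    chunk_size=1000,
    chunk_overlap=200,
    namespace="default",
    prevent_duplicates=True
)
print(f"Indexed {result['chunks_indexed']}, skipped {result['chunks_skipped']}")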

load_and_index_multiple_documents()

Index multiple documents.
def load_and_index_multiple_documents(
    self,
    file_paths: List[str],
    chunk_size: int = 1000,
    chunk_overlap: int = 200,
    namespace: Optional[str] = None,
    prevent_duplicates: bool = True
) -> dict
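Example (a minimal sketch; the file paths are placeholders, and the return dict is assumed to aggregate the same keys as load_and_index_documents()):
result = ai.load_and_index_multiple_documents(
    file_paths=["docs/guide.pdf", "docs/faq.md"],
    chunk_size=1000,
    chunk_overlap=200
)
print(result)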

Properties

  • engine - Access to LangChatEngine instance
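For example (a minimal sketch; LangChatEngine's own API is not documented on this page):
engine = ai.engine  # the LangChatEngine instance backing this LangChat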

Built with ❤️ by NeuroBrain