
Architecture

LangChat uses a hexagonal architecture (Ports and Adapters), which separates the core chatbot logic from the specific tools it uses:

  langchat.providers   ← public API (what you use)
  langchat.adapters    ← implementations (OpenAI, Pinecone, Supabase, Flashrank)
  langchat.core        ← business logic (engine, sessions, chains)

You always interact with langchat.providers; the adapters are implementation details.
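The Ports-and-Adapters split can be sketched in a few lines of plain Python. The names below (LLMPort, EchoAdapter, ChatEngine) are illustrative placeholders, not langchat's actual classes: the core depends only on a port interface, and any adapter that satisfies it can be swapped in.

```python
from typing import Protocol

# Port: the interface the core depends on. Illustrative only,
# not langchat's real class names.
class LLMPort(Protocol):
    def complete(self, prompt: str) -> str: ...

# Adapter: one concrete implementation of the port.
class EchoAdapter:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

# Core: business logic that only sees the port, never a concrete adapter.
class ChatEngine:
    def __init__(self, llm: LLMPort) -> None:
        self.llm = llm

    def ask(self, question: str) -> str:
        return self.llm.complete(question)

engine = ChatEngine(EchoAdapter())
```

Because ChatEngine holds only an LLMPort, replacing EchoAdapter with an OpenAI-backed adapter requires no change to the core.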

Available providers

Provider    Import               Replaces
OpenAI      langchat.providers   langchat.adapters.llm.OpenAIAdapter
Anthropic   langchat.providers   langchat.adapters.llm.AnthropicAdapter
Gemini      langchat.providers   langchat.adapters.llm.GeminiAdapter
Mistral     langchat.providers   langchat.adapters.llm.MistralAdapter
Cohere      langchat.providers   langchat.adapters.llm.CohereAdapter
Ollama      langchat.providers   langchat.adapters.llm.OllamaAdapter
Pinecone    langchat.providers   langchat.adapters.vector_db.PineconeAdapter
Supabase    langchat.providers   langchat.adapters.database.SupabaseAdapter

Import patterns

# Recommended — use providers
from langchat.providers import OpenAI, Pinecone, Supabase

# All at once via the module
import langchat.providers as providers
llm = providers.OpenAI("gpt-4o-mini")

Environment variable convention

Every provider follows the same pattern:
  1. Check for explicit api_key parameter
  2. Fall back to a named environment variable
  3. Raise ValueError with the exact variable name if neither is set
# These are equivalent:
llm = OpenAI("gpt-4o-mini", api_key="sk-...")         # explicit
llm = OpenAI("gpt-4o-mini")                           # reads OPENAI_API_KEY

# If OPENAI_API_KEY is not set and no api_key is passed:
# ValueError: OpenAI API key is required. Set OPENAI_API_KEY environment variable
#             or pass api_key parameter.

Detailed reference

OpenAI     — LLM provider with multi-key rotation
Pinecone   — Vector database with OpenAI embeddings
Supabase   — Postgres history and metrics storage
Flashrank  — Cross-encoder reranker for better search results
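What a cross-encoder reranker like Flashrank does can be illustrated with a toy sketch: score each (query, document) pair and sort by score. Here a simple word-overlap score stands in for the trained cross-encoder model, and rerank is an illustrative name, not Flashrank's actual API:

```python
# Toy reranking sketch. Real rerankers (like Flashrank) score each
# (query, document) pair with a trained cross-encoder model; a simple
# word-overlap score stands in for the model here. `rerank` is
# illustrative, not Flashrank's API.
def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Sort candidate documents by pairwise score, best first.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]
```

The key property shown here is that the reranker sees the query and each candidate together, which is why it can reorder results more accurately than the embedding similarity used for the initial retrieval.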