
When to extend

The built-in providers cover the most common cases. Build a custom adapter when you need:
  • A different LLM not yet supported (e.g., a private model, Azure OpenAI)
  • A different vector database (e.g., Weaviate, Qdrant, Chroma)
  • A different reranker
  • Custom pre/post processing in any adapter layer

Custom LLM provider

LangChat’s engine expects an LLM that implements invoke() and ainvoke() methods, each returning an object with a .content attribute.
from langchat.adapters.llm._base_llm import BaseLLM
from typing import Any

def my_model_call(messages: list[Any]) -> str:
    """Call your model and return the response text."""
    # Extract the last user message
    last_message = messages[-1].content if hasattr(messages[-1], "content") else str(messages[-1])

    # Call your custom model here (my_custom_api is a placeholder for your client)
    response = my_custom_api.generate(last_message)
    return response.text

# Wrap in BaseLLM
llm = BaseLLM(invoke_func=my_model_call)

# Use like any other provider
from langchat import LangChat
from langchat.providers import Pinecone, Supabase

lc = LangChat(
    llm=llm,
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

Azure OpenAI example

import os

from openai import AzureOpenAI
from langchat.adapters.llm._base_llm import BaseLLM

azure_client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def azure_call(messages: list) -> str:
    # Map LangChain-style message types to chat roles instead of
    # flattening every message to "user"
    role_map = {"human": "user", "ai": "assistant", "system": "system"}
    formatted = [
        {"role": role_map.get(getattr(m, "type", "human"), "user"),
         "content": m.content}
        for m in messages
    ]
    response = azure_client.chat.completions.create(
        model="gpt-4o",
        messages=formatted,
    )
    return response.choices[0].message.content

llm = BaseLLM(invoke_func=azure_call)

Custom vector database

LangChat’s engine calls vector_adapter.get_retriever() to get a LangChain retriever. Any LangChain-compatible vector store retriever works.
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

class ChromaAdapter:
    """Custom Chroma vector database adapter."""

    def __init__(self, collection_name: str, persist_dir: str = "./chroma_db"):
        self.embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
        self.store = Chroma(
            collection_name=collection_name,
            embedding_function=self.embeddings,
            persist_directory=persist_dir,
        )

    def get_retriever(self, k: int = 5):
        return self.store.as_retriever(search_kwargs={"k": k})

# Use with LangChat
from langchat import LangChat
from langchat.providers import OpenAI, Supabase

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=ChromaAdapter("my-collection"),
    db=Supabase(),
)

Custom database adapter

LangChat reads db.client to obtain the Supabase client. To use a different database backend, expose the same interface: a client whose queries chain as table(...).insert(...).execute() and table(...).select(...).eq(...).execute().
from types import SimpleNamespace

class MockDatabaseAdapter:
    """In-memory database adapter for testing. Mimics the chained
    Supabase query style and actually stores inserted rows."""

    def __init__(self):
        self._tables = {}          # table name -> list of row dicts
        self._rows = []
        self._pending_insert = None
        self._filters = []

    @property
    def client(self):
        return self  # the adapter itself acts as the "client"

    def table(self, name: str):
        self._rows = self._tables.setdefault(name, [])
        self._pending_insert = None
        self._filters = []
        return self

    def insert(self, data: dict):
        self._pending_insert = data
        return self

    def select(self, cols: str = "*"):
        return self

    def eq(self, col: str, val):
        self._filters.append((col, val))
        return self

    def execute(self):
        if self._pending_insert is not None:
            self._rows.append(self._pending_insert)
            result, self._pending_insert = [self._pending_insert], None
        else:
            result = [r for r in self._rows
                      if all(r.get(c) == v for c, v in self._filters)]
        return SimpleNamespace(data=result)
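For a persistent backend, the same chained interface can sit on top of SQLite. A minimal sketch (the class, schema, and JSON-blob storage are illustrative choices, not part of LangChat):

```python
import json
import sqlite3
from types import SimpleNamespace

class SQLiteAdapter:
    """Illustrative adapter exposing the Supabase-style chained query
    interface over SQLite. Rows are stored as JSON blobs per table."""

    def __init__(self, path: str = ":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute("CREATE TABLE IF NOT EXISTS rows (tbl TEXT, doc TEXT)")
        self._tbl = None
        self._pending_insert = None
        self._filters = []

    @property
    def client(self):
        return self  # the adapter itself acts as the "client"

    def table(self, name: str):
        self._tbl, self._pending_insert, self._filters = name, None, []
        return self

    def insert(self, data: dict):
        self._pending_insert = data
        return self

    def select(self, cols: str = "*"):
        return self

    def eq(self, col: str, val):
        self._filters.append((col, val))
        return self

    def execute(self):
        if self._pending_insert is not None:
            self._conn.execute("INSERT INTO rows VALUES (?, ?)",
                               (self._tbl, json.dumps(self._pending_insert)))
            rows, self._pending_insert = [self._pending_insert], None
        else:
            cur = self._conn.execute(
                "SELECT doc FROM rows WHERE tbl = ?", (self._tbl,))
            rows = [json.loads(d) for (d,) in cur]
            rows = [r for r in rows
                    if all(r.get(c) == v for c, v in self._filters)]
        return SimpleNamespace(data=rows)

db = SQLiteAdapter()
db.client.table("messages").insert({"user": "ada", "text": "hi"}).execute()
res = db.client.table("messages").select("*").eq("user", "ada").execute()
print(res.data)  # -> [{'user': 'ada', 'text': 'hi'}]
```

A production adapter would also need filtering in SQL and proper error handling; the point here is only that LangChat sees the same call chain either way.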

Contributing a new provider

If your adapter could benefit other users, consider contributing it to LangChat:
  1. Add the adapter in src/langchat/adapters/<category>/
  2. Add the provider wrapper in src/langchat/providers/__init__.py
  3. Add tests in tests/adapters/<category>/
  4. Update the documentation in docs/adapters/
  5. Open a PR on GitHub
See CONTRIBUTING.md for the full contribution guide.