## Custom LLM Provider

Extend existing providers:
```python
from langchat.llm import OpenAI

class CustomOpenAI(OpenAI):
    async def ainvoke(self, messages, **kwargs):
        # Custom preprocessing
        processed = self.preprocess(messages)
        # Call parent (must be awaited, so ainvoke is declared async)
        response = await super().ainvoke(processed, **kwargs)
        # Custom postprocessing
        return self.postprocess(response)

    def preprocess(self, messages):
        # Your custom logic
        return messages

    def postprocess(self, response):
        # Your custom logic
        return response
```
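As a concrete illustration, a `preprocess` override might inject a default system prompt when the caller didn't supply one. This is a sketch: the prompt text and the assumption that messages are dicts with `role`/`content` keys are examples, not part of langchat's API.

```python
def preprocess(messages):
    """Prepend a default system prompt when none is present (illustrative logic)."""
    if not messages or messages[0].get("role") != "system":
        return [{"role": "system", "content": "You are a helpful assistant."}] + messages
    return messages
```

The same pattern works for `postprocess`, e.g. stripping whitespace from the model's reply before returning it.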
## Custom Vector Adapter

Extend the Pinecone adapter:
```python
from langchat.vector_db import Pinecone

class CustomPinecone(Pinecone):
    def get_retriever(self, k=5):
        retriever = super().get_retriever(k=k)
        # Add custom filtering or logic
        return retriever
```
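One kind of "custom filtering" is dropping low-similarity results before they reach the LLM. The helper below is a hypothetical sketch that assumes the retriever can yield `(document, score)` pairs; it is not part of the Pinecone adapter's API.

```python
def filter_by_score(scored_docs, min_score=0.75):
    """Keep only documents whose similarity score clears a threshold.

    scored_docs: iterable of (document, score) pairs -- an assumed shape.
    """
    return [doc for doc, score in scored_docs if score >= min_score]
```

A `CustomPinecone.get_retriever` could apply such a filter to the parent retriever's results before returning them.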
## Using Custom Adapters
```python
from langchat import LangChat
from langchat.db import Supabase  # adjust this import path to match your langchat version

# Use custom adapters
custom_llm = CustomOpenAI(api_key="sk-...", model="gpt-4o-mini")
custom_vector = CustomPinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

ai = LangChat(
    llm=custom_llm,
    vector_db=custom_vector,
    db=db,
)
```
Custom adapters must implement the same interface as the base adapters.
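To make "the same interface" concrete, here is an illustrative sketch of the shape an LLM adapter is expected to have. The `LLMAdapter` base class and `EchoLLM` below are hypothetical stand-ins for this example, not langchat's real classes; the only assumption taken from the code above is that the adapter exposes an async `ainvoke(messages, **kwargs)` method.

```python
import asyncio
from abc import ABC, abstractmethod

class LLMAdapter(ABC):
    """Illustrative sketch of the expected adapter shape (not langchat's actual base class)."""

    @abstractmethod
    async def ainvoke(self, messages, **kwargs):
        ...

class EchoLLM(LLMAdapter):
    """Minimal conforming adapter: echoes the last message instead of calling a model."""

    async def ainvoke(self, messages, **kwargs):
        return messages[-1]["content"]
```

Any subclass that implements the abstract methods can be passed to `LangChat` in place of a built-in adapter.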
Built with ❤️ by NeuroBrain