```text
You are a helpful AI assistant. Answer questions clearly and accurately.

Use the following context to answer:
{context}

Previous conversation:
{chat_history}

User question: {question}

Your response:
```
```python
from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

MY_PROMPT = """You are Aria, a friendly support agent for Acme Corp.
Always be concise and professional. If you don't know the answer, say so clearly
and suggest contacting support@acme.com.

Relevant knowledge base articles:
{context}

Previous messages:
{chat_history}

Customer: {question}
Aria:"""

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
    prompt_template=MY_PROMPT,
)
```
Your template must contain all three placeholders — {context}, {chat_history}, and {question} — or the chain will raise an error.
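If you build templates dynamically, it can help to catch a missing placeholder before the chain does. Below is a minimal standalone sketch using Python's `string.Formatter` — a generic check, not a LangChat API:

```python
from string import Formatter

REQUIRED = {"context", "chat_history", "question"}

def check_template(template: str) -> None:
    """Raise early if any required placeholder is missing from the template."""
    # Formatter().parse yields (literal, field_name, spec, conversion) tuples;
    # collect every named field that appears in the template.
    fields = {name for _, name, _, _ in Formatter().parse(template) if name}
    missing = REQUIRED - fields
    if missing:
        raise ValueError(f"template is missing placeholders: {sorted(missing)}")

# A complete template passes silently; an incomplete one raises.
check_template("{context}\n{chat_history}\n{question}")
```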
```python
LEGAL_PROMPT = """You are a legal research assistant. Provide accurate information
based on the provided documents. Always note that this is not legal advice and
users should consult a qualified attorney for legal decisions.

Relevant legal documents:
{context}

Prior conversation:
{chat_history}

Question: {question}
Legal Assistant:"""
```
```python
STANDALONE_PROMPT = """Given the following conversation and a follow-up question,
rephrase the follow-up as a standalone question that includes all necessary context.
If the follow-up is a greeting, return it unchanged.

Conversation:
{chat_history}

Follow-up: {question}

Standalone question:"""

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
    standalone_question_prompt=STANDALONE_PROMPT,
)
```
The standalone question prompt requires only two placeholders: {chat_history} and {question}.
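To preview exactly what the rewrite model receives, you can fill the template by hand with plain `str.format` — the conversation and question below are invented sample values:

```python
STANDALONE_PROMPT = """Given the following conversation and a follow-up question,
rephrase the follow-up as a standalone question that includes all necessary context.
If the follow-up is a greeting, return it unchanged.

Conversation:
{chat_history}

Follow-up: {question}

Standalone question:"""

# Invented sample values, just to inspect the filled prompt.
filled = STANDALONE_PROMPT.format(
    chat_history="Customer: What's your refund policy?\nAria: Refunds within 30 days.",
    question="Does that apply to sale items?",
)
print(filled)
```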
- **Be specific about tone.** "Be concise and professional" produces very different results than the default.
- **Tell the model what to do when it doesn't know.** If you don't specify, it may hallucinate. Add: "If the answer is not in the context, say you don't know."
- **Set the output format.** If you need structured output: "Always respond with bullet points." or "Answer in 2-3 sentences maximum."
- **Keep `{context}` early.** Models attend more strongly to content near the start of the prompt.
- **Test with `verbose=True`.** See exactly what prompt is being sent:
```python
lc = LangChat(
    ...,
    verbose=True,  # logs the full prompt on every call
)
```