Usage

from langchat.providers import OpenAI

llm = OpenAI("gpt-4o-mini")              # reads OPENAI_API_KEY
llm = OpenAI("gpt-4o", temperature=0.3)  # explicit model + temp

Pass the provider to LangChat:

from langchat import LangChat
from langchat.providers import OpenAI, Pinecone, Supabase

lc = LangChat(
    llm=OpenAI("gpt-4o-mini"),
    vector_db=Pinecone("my-index"),
    db=Supabase(),
)

Parameters

model
str
default: "gpt-4o-mini"
OpenAI model name. First positional argument.

api_key
str | None
default: None
A single API key. Falls back to the OPENAI_API_KEY environment variable.

api_keys
list[str] | None
default: None
Multiple API keys for automatic rotation. Takes precedence over api_key.

temperature
float
default: 1.0
Sampling temperature. Lower values (toward 0.0) make output more deterministic; higher values make it more varied.

max_retries_per_key
int
default: 2
Number of retries per key before rotating to the next one.
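
The precedence between these parameters can be summarized in a small standalone sketch. The helper name resolve_keys is hypothetical (it is not part of the LangChat API); it only mirrors the documented behavior: api_keys wins over api_key, and the environment variable is the last fallback.

```python
import os

def resolve_keys(api_key=None, api_keys=None):
    """Hypothetical sketch of the documented key-resolution order."""
    if api_keys:                 # api_keys takes precedence over api_key
        return list(api_keys)
    if api_key:                  # single explicit key
        return [api_key]
    env = os.environ.get("OPENAI_API_KEY")  # environment fallback
    return [env] if env else []

print(resolve_keys(api_key="sk-a", api_keys=["sk-b", "sk-c"]))  # ['sk-b', 'sk-c']
```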

API key rotation

When you have multiple API keys (e.g., to handle rate limits across multiple OpenAI projects), pass them as a list:
llm = OpenAI(
    "gpt-4o-mini",
    api_keys=[
        "sk-proj-key1...",
        "sk-proj-key2...",
        "sk-proj-key3...",
    ],
    max_retries_per_key=2,
)
If a request fails on key1, LangChat retries up to max_retries_per_key times, then rotates to key2, and so on. Total maximum attempts = len(api_keys) × max_retries_per_key.
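
The retry-then-rotate order, and the total-attempts formula, can be checked with a short sketch. The attempt_order generator below is illustrative only (LangChat's internals may differ); it just enumerates attempts in the order described above.

```python
def attempt_order(api_keys, max_retries_per_key):
    """Yield (key, attempt_number) pairs in the order requests are tried:
    exhaust retries on one key, then rotate to the next."""
    for key in api_keys:
        for attempt in range(1, max_retries_per_key + 1):
            yield key, attempt

keys = ["sk-proj-key1", "sk-proj-key2", "sk-proj-key3"]
order = list(attempt_order(keys, max_retries_per_key=2))
print(len(order))  # 6 = len(api_keys) * max_retries_per_key
```

With three keys and two retries each, the run tries key1 twice, then key2 twice, then key3 twice, for six attempts in total.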

Environment variable

Set OPENAI_API_KEY in your .env:
OPENAI_API_KEY=sk-...

Available models

Model          Context  Speed    Quality
gpt-4o-mini    128k     Fast     Good — recommended default
gpt-4o         128k     Medium   High
gpt-4-turbo    128k     Medium   High
gpt-3.5-turbo  16k      Fastest  Basic
Any valid OpenAI chat completion model name is accepted.