Overview

The Pinecone provider connects LangChat to Pinecone for vector search and document storage.

Features

  • Vector Search - Find similar documents
  • Automatic Embeddings - OpenAI embeddings included
  • Fast Search - Millisecond query times
  • Scalable - Handles millions of documents

Basic Usage

from langchat.vector_db import Pinecone

vector_db = Pinecone(
    api_key="pcsk-...",
    index_name="your-index"
)

Configuration

vector_db = Pinecone(
    api_key="pcsk-...",  # Required
    index_name="your-index",  # Required
    embedding_model="text-embedding-3-large"  # Optional
)
Make sure your Pinecone index exists before using it.
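
If you want to check that up front, you can verify the index exists with the official pinecone client (a separate package from LangChat; a minimal sketch assuming SDK v3+ and the placeholder name "your-index"):

from pinecone import Pinecone as PineconeClient

pc = PineconeClient(api_key="pcsk-...")

# Confirm the index exists before pointing LangChat at it
if "your-index" not in pc.list_indexes().names():
    raise RuntimeError("Index 'your-index' not found - create it first")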

Embedding Models

text-embedding-3-large

vector_db = Pinecone(
    api_key="...",
    index_name="...",
    embedding_model="text-embedding-3-large"
)
Best for:
  • High quality search
  • Complex queries
  • Production use

text-embedding-3-small

vector_db = Pinecone(
    api_key="...",
    index_name="...",
    embedding_model="text-embedding-3-small"
)
Best for:
  • Faster queries
  • Lower cost
  • Simple use cases

Using with LangChat

from langchat import LangChat
from langchat.llm import OpenAI
from langchat.vector_db import Pinecone
from langchat.database import Supabase

llm = OpenAI(api_key="sk-...", model="gpt-4o-mini")
vector_db = Pinecone(api_key="...", index_name="...")
db = Supabase(url="https://...", key="...")

ai = LangChat(llm=llm, vector_db=vector_db, db=db)

Getting a Retriever

Get a retriever for custom search:
# Get retriever
retriever = vector_db.get_retriever(k=10)  # Retrieve 10 documents

# Use in custom code
docs = retriever.invoke("your query")
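
The returned documents can be inspected directly. A minimal sketch, assuming the retriever yields LangChain-style Document objects with page_content and metadata attributes:

for doc in docs:
    # Each result carries its text and any metadata stored with it
    print(doc.page_content[:200])
    print(doc.metadata)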

Best Practices

1. Create Index First

Create your Pinecone index before using it:
# In Pinecone console or API
# Create index with correct dimensions:
# - text-embedding-3-large: 3072 dimensions
# - text-embedding-3-small: 1536 dimensions
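
If you prefer to create the index from code rather than the console, here is a minimal sketch using the official pinecone client (assumptions: SDK v3+, a serverless index on AWS us-east-1, and text-embedding-3-large; adjust dimension, cloud, and region to your setup):

from pinecone import Pinecone as PineconeClient, ServerlessSpec

pc = PineconeClient(api_key="pcsk-...")

# Dimension must match the embedding model configured in LangChat:
# 3072 for text-embedding-3-large, 1536 for text-embedding-3-small
pc.create_index(
    name="your-index",
    dimension=3072,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)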

2. Use Appropriate Embedding Model

# ✅ Good: Match model to needs
embedding_model="text-embedding-3-large"  # High quality

# ✅ Also good: Faster option
embedding_model="text-embedding-3-small"  # Faster

Troubleshooting

Index Not Found

Solution: Create the index in the Pinecone console first and make sure index_name matches its name exactly.

Dimension Mismatch

Solution: Ensure your index's dimension matches the embedding model (see the check below):
  • text-embedding-3-large: 3072 dimensions
  • text-embedding-3-small: 1536 dimensions
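
To see what your index was actually created with, you can read its dimension back with the official pinecone client (a sketch; attribute names follow the v3+ SDK):

from pinecone import Pinecone as PineconeClient

pc = PineconeClient(api_key="pcsk-...")

# Compare this against the embedding model you pass to LangChat
print(pc.describe_index("your-index").dimension)  # expect 3072 or 1536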
