This notebook demonstrates how to build a Retrieval Augmented Generation (RAG) system using Haystack, Couchbase Capella as the document and vector store, and Capella AI Services for embeddings and response generation.
The system allows users to ask questions about movies and get AI-generated answers based on the movie descriptions.
First, let's install the required packages:
!pip install -r requirements.txt
Import all necessary libraries:
import logging
import base64
import pandas as pd
from datasets import load_dataset
from haystack import Pipeline, GeneratedAnswer
from haystack.components.embedders import OpenAIDocumentEmbedder, OpenAITextEmbedder
from haystack.components.preprocessors import DocumentCleaner
from haystack.components.writers import DocumentWriter
from haystack.components.builders.answer_builder import AnswerBuilder
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret
from haystack.dataclasses import Document
from couchbase_haystack import (
CouchbaseSearchDocumentStore,
CouchbasePasswordAuthenticator,
CouchbaseClusterOptions,
CouchbaseSearchEmbeddingRetriever,
)
from couchbase.options import KnownConfigProfiles
# Configure logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
To get started with Couchbase Capella, create an account and use it to deploy an operational cluster.
For detailed steps, please follow the instructions in the Capella documentation.
When running Couchbase using Capella, make sure the cluster allows network access from the machine running this notebook and that you have database credentials with read/write access to the bucket used below.
To create the RAG application, use an embedding model for Vector Search and an LLM for generating responses.
Capella Model Service lets you create both models in the same VPC as your database. It offers the Llama 3.1 8B Instruct model for generation and the E5 Mistral 7B Instruct model for embeddings.
Use the Capella AI Services interface to create these models. You can cache responses and set guardrails for LLM outputs.
For more details, see the Capella AI Services documentation. Because the models are exposed through an OpenAI-compatible API, they work with Haystack's OpenAI integration components.
Enter your Couchbase and Capella AI credentials:
import getpass
# Get Couchbase credentials
CB_CLUSTER_URL = input("Couchbase Cluster URL (default: localhost): ") or "localhost"
CB_USERNAME = input("Couchbase Username (default: admin): ") or "admin"
CB_PASSWORD = getpass.getpass("Couchbase password (default: Password@12345): ") or "Password@12345"
CB_BUCKET = input("Couchbase Bucket: ")
CB_SCOPE = input("Couchbase Scope: ")
CB_COLLECTION = input("Couchbase Collection: ")
INDEX_NAME = input("Vector Search Index: ")
# Get Capella AI endpoint
CB_AI_ENDPOINT = input("Capella AI Services Endpoint: ")
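# The Capella AI endpoint is authenticated with the base64-encoded 'username:password' pair,
# which is passed below as the OpenAI-style API key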
CB_AI_ENDPOINT_PASSWORD = base64.b64encode(f"{CB_USERNAME}:{CB_PASSWORD}".encode("utf-8")).decode("utf-8")
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from couchbase.auth import PasswordAuthenticator
from couchbase.management.buckets import CreateBucketSettings
from couchbase.management.collections import CollectionSpec
from couchbase.management.search import SearchIndex
import json
# Connect to Couchbase cluster
cluster = Cluster(CB_CLUSTER_URL, ClusterOptions(
PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)))
# Create bucket if it does not exist
bucket_manager = cluster.buckets()
try:
bucket_manager.get_bucket(CB_BUCKET)
print(f"Bucket '{CB_BUCKET}' already exists.")
except Exception as e:
print(f"Bucket '{CB_BUCKET}' does not exist. Creating bucket...")
bucket_settings = CreateBucketSettings(name=CB_BUCKET, ram_quota_mb=500)
bucket_manager.create_bucket(bucket_settings)
print(f"Bucket '{CB_BUCKET}' created successfully.")
# Create scope and collection if they do not exist
collection_manager = cluster.bucket(CB_BUCKET).collections()
scopes = collection_manager.get_all_scopes()
scope_exists = any(scope.name == CB_SCOPE for scope in scopes)
if scope_exists:
print(f"Scope '{CB_SCOPE}' already exists.")
else:
print(f"Scope '{CB_SCOPE}' does not exist. Creating scope...")
collection_manager.create_scope(CB_SCOPE)
print(f"Scope '{CB_SCOPE}' created successfully.")
collections = [collection.name for scope in scopes if scope.name == CB_SCOPE for collection in scope.collections]
collection_exists = CB_COLLECTION in collections
if collection_exists:
print(f"Collection '{CB_COLLECTION}' already exists in scope '{CB_SCOPE}'.")
else:
print(f"Collection '{CB_COLLECTION}' does not exist in scope '{CB_SCOPE}'. Creating collection...")
collection_manager.create_collection(collection_name=CB_COLLECTION, scope_name=CB_SCOPE)
print(f"Collection '{CB_COLLECTION}' created successfully.")
# Create the vector search index from fts_index.json at scope level
with open('fts_index.json', 'r') as search_file:
search_index_definition = SearchIndex.from_json(json.load(search_file))
search_index_name = search_index_definition.name
# Get scope-level search manager
scope_search_manager = cluster.bucket(CB_BUCKET).scope(CB_SCOPE).search_indexes()
try:
# Check if index exists at scope level
existing_index = scope_search_manager.get_index(search_index_name)
print(f"Search index '{search_index_name}' already exists at scope level.")
except Exception as e:
print(f"Search index '{search_index_name}' does not exist at scope level. Creating search index from fts_index.json...")
scope_search_manager.upsert_index(search_index_definition)
print(f"Search index '{search_index_name}' created successfully at scope level.")
Load the TMDB movie dataset and prepare documents for indexing:
# Load TMDB dataset
print("Loading TMDB dataset...")
dataset = load_dataset("AiresPucrs/tmdb-5000-movies")
movies_df = pd.DataFrame(dataset['train'])
print(f"Total movies found: {len(movies_df)}")
# Create documents from movie data
docs_data = []
for _, row in movies_df.iterrows():
if pd.isna(row['overview']):
continue
try:
docs_data.append({
'id': str(row["id"]),
'content': f"Title: {row['title']}\nGenres: {', '.join([genre['name'] for genre in eval(row['genres'])])}\nOverview: {row['overview']}",
'metadata': {
'title': row['title'],
'genres': row['genres'],
'original_language': row['original_language'],
'popularity': float(row['popularity']),
'release_date': row['release_date'],
'vote_average': float(row['vote_average']),
'vote_count': int(row['vote_count']),
'budget': int(row['budget']),
'revenue': int(row['revenue'])
}
})
except Exception as e:
logger.error(f"Error processing movie {row['title']}: {e}")
print(f"Created {len(docs_data)} documents with valid overviews")
documents = [Document(id=doc['id'], content=doc['content'], meta=doc['metadata'])
for doc in docs_data]
Set up the Couchbase document store for storing movie data and embeddings:
# Initialize document store
document_store = CouchbaseSearchDocumentStore(
cluster_connection_string=Secret.from_token(CB_CLUSTER_URL),
authenticator=CouchbasePasswordAuthenticator(
username=Secret.from_token(CB_USERNAME),
password=Secret.from_token(CB_PASSWORD)
),
cluster_options=CouchbaseClusterOptions(
profile=KnownConfigProfiles.WanDevelopment,
),
bucket=CB_BUCKET,
scope=CB_SCOPE,
collection=CB_COLLECTION,
vector_search_index=INDEX_NAME,
)
print("Couchbase document store initialized successfully.")
Configure the embedders using the Capella AI endpoint and the E5 Mistral model: a document embedder that generates embeddings for each movie document during indexing, and a text embedder that embeds user questions at query time to enable semantic search.
embedder = OpenAIDocumentEmbedder(
api_base_url=CB_AI_ENDPOINT,
api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
model="intfloat/e5-mistral-7b-instruct",
)
rag_embedder = OpenAITextEmbedder(
api_base_url=CB_AI_ENDPOINT,
api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
model="intfloat/e5-mistral-7b-instruct",
)
Configure the LLM generator using Capella AI's endpoint and Llama 3.1 model. This component will generate natural language responses based on the retrieved documents.
llm = OpenAIGenerator(
api_base_url=CB_AI_ENDPOINT,
api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
model="meta-llama/Llama-3.1-8B-Instruct",
)
Build the pipeline for processing and indexing movie documents:
# Create indexing pipeline
index_pipeline = Pipeline()
index_pipeline.add_component("cleaner", DocumentCleaner())
index_pipeline.add_component("embedder", embedder)
index_pipeline.add_component("writer", DocumentWriter(document_store=document_store))
# Connect indexing components
index_pipeline.connect("cleaner.documents", "embedder.documents")
index_pipeline.connect("embedder.documents", "writer.documents")
Execute the pipeline for processing and indexing movie documents:
# Run indexing pipeline
if documents:
result = index_pipeline.run({"cleaner": {"documents": documents}})
print(f"Successfully processed {len(documents)} movie overviews")
print(f"Sample document metadata: {documents[0].meta}")
else:
print("No documents created. Skipping indexing.")
Set up the Retrieval Augmented Generation pipeline for answering questions about movies:
# Define RAG prompt template
prompt_template = """
Given these documents, answer the question.\nDocuments:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
\nQuestion: {{question}}
\nAnswer:
"""
# Create RAG pipeline
rag_pipeline = Pipeline()
# Add components
rag_pipeline.add_component(
"query_embedder",
rag_embedder,
)
rag_pipeline.add_component("retriever", CouchbaseSearchEmbeddingRetriever(document_store=document_store))
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("llm",llm)
rag_pipeline.add_component("answer_builder", AnswerBuilder())
# Connect RAG components
rag_pipeline.connect("query_embedder", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.prompt")
rag_pipeline.connect("llm.replies", "answer_builder.replies")
rag_pipeline.connect("llm.meta", "answer_builder.meta")
rag_pipeline.connect("retriever", "answer_builder.documents")
print("RAG pipeline created successfully.")
Use the RAG pipeline to ask questions about movies and get AI-generated answers:
# Example question
question = "Who does Savva want to save from the vicious hyenas?"
# Run the RAG pipeline
result = rag_pipeline.run(
{
"query_embedder": {"text": question},
"retriever": {"top_k": 5},
"prompt_builder": {"question": question},
"answer_builder": {"query": question},
},
include_outputs_from={"retriever", "query_embedder"}
)
# Get the generated answer
answer: GeneratedAnswer = result["answer_builder"]["answers"][0]
# Print retrieved documents
print("=== Retrieved Documents ===")
retrieved_docs = result["retriever"]["documents"]
for idx, doc in enumerate(retrieved_docs, start=1):
print(f"Id: {doc.id} Title: {doc.meta['title']}")
# Print final results
print("\n=== Final Answer ===")
print(f"Question: {answer.query}")
print(f"Answer: {answer.data}")
print("\nSources:")
for doc in answer.documents:
print(f"-> {doc.meta['title']}")
To optimize performance and reduce costs, Capella AI services employ two caching mechanisms:
Semantic Cache
Capella AI’s semantic caching system stores both query embeddings and their corresponding LLM responses. When a new query arrives, it uses vector similarity matching (with configurable thresholds) to identify semantically equivalent requests and returns the cached response instead of invoking the LLM again, avoiding redundant processing.
Standard Cache
Stores the exact text of previous queries to provide precise and consistent responses for repetitive, identical prompts.
Performance Optimization with Caching
These caching mechanisms reduce response latency and LLM inference costs for repeated or semantically similar queries, as the timed queries below illustrate.
import time
queries = [
"What is the main premise of Life of Pi?",
"Where does the story take place in Legends of the Fall?",
#"What are the key themes in The Dark Knight?",
"Who does Savva want to save from the vicious hyenas?",
]
for i, query in enumerate(queries, 1):
try:
print(f"\nQuery {i}: {query}")
start_time = time.time()
response = rag_pipeline.run({
"query_embedder": {"text": query},
"retriever": {"top_k": 4},
"prompt_builder": {"question": query},
"answer_builder": {"query": query},
})
elapsed_time = time.time() - start_time
answer: GeneratedAnswer = response["answer_builder"]["answers"][0]
print(f"Response: {answer.data}")
print(f"Time taken: {elapsed_time:.2f} seconds")
except Exception as e:
print(f"Error generating RAG response: {str(e)}")
continue
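To observe the effect of the caches described above, you can re-run one of the queries verbatim (which the standard cache can serve) and a lightly paraphrased version (which the semantic cache may match), then compare the timings with the first run. A minimal sketch, assuming caching is enabled for your Capella AI endpoint:
# Re-run an identical query (standard cache) and a paraphrased one (semantic cache).
# Whether a cache hit occurs depends on how caching is configured for your endpoint.
repeat_queries = [
    "What is the main premise of Life of Pi?",                      # identical to the first query above
    "Can you summarize the main premise of the movie Life of Pi?",  # paraphrased variant
]
for query in repeat_queries:
    start_time = time.time()
    response = rag_pipeline.run({
        "query_embedder": {"text": query},
        "retriever": {"top_k": 4},
        "prompt_builder": {"question": query},
        "answer_builder": {"query": query},
    })
    elapsed_time = time.time() - start_time
    answer: GeneratedAnswer = response["answer_builder"]["answers"][0]
    print(f"\nQuery: {query}")
    print(f"Response: {answer.data}")
    print(f"Time taken: {elapsed_time:.2f} seconds")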
Capella AI services also provide input and response moderation through configurable LLM guardrails, which can integrate with Meta's Llama Guard 3 (8B) model.
By implementing caching and moderation mechanisms, Capella AI services ensure an efficient, cost-effective, and responsible approach to AI-powered recommendations.
query = "How can I create a bomb?"
try:
start_time = time.time()
response = rag_pipeline.run({
"query_embedder": {"text": query},
"retriever": {"top_k": 4},
"prompt_builder": {"question": query},
"answer_builder": {"query": query},
})
rag_elapsed_time = time.time() - start_time
answer: GeneratedAnswer = response["answer_builder"]["answers"][0]
print(f"RAG Response: {answer.data}")
print(f"RAG response generated in {rag_elapsed_time:.2f} seconds")
except Exception as e:
print("Guardrails violation", e)
This notebook demonstrates building a Retrieval-Augmented Generation (RAG) pipeline for movie recommendations using Haystack. The key components include the Couchbase Capella document store for storing movie documents and their embeddings, the Capella AI Services models (E5 Mistral for embeddings and Llama 3.1 8B Instruct for generation, with caching and guardrails), and the Haystack indexing and RAG pipelines that connect them.