Retrieval-Augmented Generation (RAG) with Capella AI Services

  • Learn how to build a semantic search engine using Couchbase Capella AI Services.
  • This tutorial demonstrates how to integrate Couchbase's vector search capabilities with the embeddings provided by Capella AI Services.
  • You will understand how to perform Retrieval-Augmented Generation (RAG) using Haystack and Capella AI services.


Movie Dataset RAG Pipeline with Couchbase

This notebook demonstrates how to build a Retrieval Augmented Generation (RAG) system using:

  • The TMDB movie dataset
  • Couchbase as the vector store
  • Haystack framework for the RAG pipeline
  • Capella AI for embeddings and text generation

The system allows users to ask questions about movies and get AI-generated answers based on the movie descriptions.

Setup and Requirements

First, let's install the required packages:

!pip install -r requirements.txt
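
The requirements file ships with the tutorial's source repository. If you do not have it handy, the following is a rough equivalent based on the packages this notebook imports (package names are assumptions; pin versions to match your environment):

!pip install haystack-ai couchbase-haystack couchbase datasets pandas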

Imports

Import all necessary libraries:

import logging
import base64
import pandas as pd
from datasets import load_dataset
from haystack import Pipeline, GeneratedAnswer
from haystack.components.embedders import OpenAIDocumentEmbedder, OpenAITextEmbedder
from haystack.components.preprocessors import DocumentCleaner
from haystack.components.writers import DocumentWriter
from haystack.components.builders.answer_builder import AnswerBuilder
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret
from haystack.dataclasses import Document

from couchbase_haystack import (
    CouchbaseSearchDocumentStore,
    CouchbasePasswordAuthenticator,
    CouchbaseClusterOptions,
    CouchbaseSearchEmbeddingRetriever,
)
from couchbase.options import KnownConfigProfiles

# Configure logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

Prerequisites

Create and Deploy Your Operational cluster on Capella

To get started with Couchbase Capella, create an account and use it to deploy an operational cluster.

To learn more, follow the instructions in the Capella documentation.

Couchbase Capella Configuration

When running Couchbase using Capella, the following prerequisites need to be met:

  • Have a multi-node Capella cluster running the Data, Query, Index, and Search services.
  • Create the database credentials with Read and Write access to the bucket used in the application.
  • Allow access to the cluster from the IP address on which the application is running.

Deploy Models

To create the RAG application, use an embedding model for Vector Search and an LLM for generating responses.

Capella Model Service lets you create both models in the same VPC as your database. It offers the Llama 3.1 8B Instruct model as the LLM and the E5 Mistral model for embeddings.

Use the Capella AI Services interface to create these models. You can cache responses and set guardrails for LLM outputs.

For more details, see the Capella AI Services documentation. Both models expose an OpenAI-compatible API, so they work with Haystack's OpenAI integrations.
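
Because the endpoint is OpenAI-compatible, you can sanity-check both models directly with the openai Python client before wiring them into Haystack. This is a minimal sketch; the endpoint URL placeholder and environment variable names are illustrative, and the API key is the base64-encoded "username:password" string created in the next section:

import os
from openai import OpenAI

# Illustrative values; replace with your Capella AI endpoint and credentials
client = OpenAI(
    base_url=os.environ.get("CB_AI_ENDPOINT", "https://<your-capella-ai-endpoint>/v1"),
    api_key=os.environ.get("CB_AI_ENDPOINT_PASSWORD", "<base64 of username:password>"),
)

# Embedding model check (E5 Mistral)
emb = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",
    input="connectivity check",
)
print(f"Embedding dimensions: {len(emb.data[0].embedding)}")

# LLM check (Llama 3.1 8B Instruct)
chat = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Reply with a single word: hello"}],
)
print(chat.choices[0].message.content)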

Configure Couchbase Credentials

Enter your Couchbase and Capella AI credentials:

import getpass

# Get Couchbase credentials
CB_CLUSTER_URL = input("Couchbase Cluster URL (default: localhost): ") or "localhost"
CB_USERNAME = input("Couchbase Username (default: admin): ") or "admin"
CB_PASSWORD = getpass.getpass("Couchbase password (default: Password@12345): ") or "Password@12345"
CB_BUCKET = input("Couchbase Bucket: ") 
CB_SCOPE = input("Couchbase Scope: ")
CB_COLLECTION = input("Couchbase Collection: ")
INDEX_NAME = input("Vector Search Index: ")

# Get Capella AI endpoint
CB_AI_ENDPOINT = input("Capella AI Services Endpoint: ")

# Capella AI endpoints accept a base64-encoded "username:password" string as the API key
CB_AI_ENDPOINT_PASSWORD = base64.b64encode(f"{CB_USERNAME}:{CB_PASSWORD}".encode("utf-8")).decode("utf-8")

Set Up the Bucket, Scope, Collection, and Search Index

Connect to the cluster and create the bucket, scope, collection, and Search index if they do not already exist:

from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from couchbase.auth import PasswordAuthenticator
from couchbase.management.buckets import CreateBucketSettings
from couchbase.management.collections import CollectionSpec
from couchbase.management.search import SearchIndex
import json

# Connect to Couchbase cluster
cluster = Cluster(CB_CLUSTER_URL, ClusterOptions(
    PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)))

# Create bucket if it does not exist
bucket_manager = cluster.buckets()
try:
    bucket_manager.get_bucket(CB_BUCKET)
    print(f"Bucket '{CB_BUCKET}' already exists.")
except Exception as e:
    print(f"Bucket '{CB_BUCKET}' does not exist. Creating bucket...")
    bucket_settings = CreateBucketSettings(name=CB_BUCKET, ram_quota_mb=500)
    bucket_manager.create_bucket(bucket_settings)
    print(f"Bucket '{CB_BUCKET}' created successfully.")

# Create scope and collection if they do not exist
collection_manager = cluster.bucket(CB_BUCKET).collections()
scopes = collection_manager.get_all_scopes()
scope_exists = any(scope.name == CB_SCOPE for scope in scopes)

if scope_exists:
    print(f"Scope '{CB_SCOPE}' already exists.")
else:
    print(f"Scope '{CB_SCOPE}' does not exist. Creating scope...")
    collection_manager.create_scope(CB_SCOPE)
    print(f"Scope '{CB_SCOPE}' created successfully.")

collections = [collection.name for scope in scopes if scope.name == CB_SCOPE for collection in scope.collections]
collection_exists = CB_COLLECTION in collections

if collection_exists:
    print(f"Collection '{CB_COLLECTION}' already exists in scope '{CB_SCOPE}'.")
else:
    print(f"Collection '{CB_COLLECTION}' does not exist in scope '{CB_SCOPE}'. Creating collection...")
    collection_manager.create_collection(collection_name=CB_COLLECTION, scope_name=CB_SCOPE)
    print(f"Collection '{CB_COLLECTION}' created successfully.")

# Create the scope-level Search index from the fts_index.json definition
with open('fts_index.json', 'r') as search_file:
    search_index_definition = SearchIndex.from_json(json.load(search_file))
search_index_name = search_index_definition.name

# Get the scope-level search index manager
scope_search_manager = cluster.bucket(CB_BUCKET).scope(CB_SCOPE).search_indexes()

try:
    # Check if the index already exists at scope level
    scope_search_manager.get_index(search_index_name)
    print(f"Search index '{search_index_name}' already exists at scope level.")
except Exception:
    print(f"Search index '{search_index_name}' does not exist at scope level. Creating it from fts_index.json...")
    scope_search_manager.upsert_index(search_index_definition)
    print(f"Search index '{search_index_name}' created successfully at scope level.")

Load and Process Movie Dataset

Load the TMDB movie dataset and prepare documents for indexing:

# Load TMDB dataset
print("Loading TMDB dataset...")
dataset = load_dataset("AiresPucrs/tmdb-5000-movies")
movies_df = pd.DataFrame(dataset['train'])
print(f"Total movies found: {len(movies_df)}")

# Create documents from movie data
docs_data = []
for _, row in movies_df.iterrows():
    if pd.isna(row['overview']):
        continue
        
    try:
        docs_data.append({
            'id': str(row["id"]),
            'content': f"Title: {row['title']}\nGenres: {', '.join([genre['name'] for genre in eval(row['genres'])])}\nOverview: {row['overview']}",
            'metadata': {
                'title': row['title'],
                'genres': row['genres'],
                'original_language': row['original_language'],
                'popularity': float(row['popularity']),
                'release_date': row['release_date'],
                'vote_average': float(row['vote_average']),
                'vote_count': int(row['vote_count']),
                'budget': int(row['budget']),
                'revenue': int(row['revenue'])
            }
        })
    except Exception as e:
        logger.error(f"Error processing movie {row['title']}: {e}")

print(f"Created {len(docs_data)} documents with valid overviews")
documents = [Document(id=doc['id'], content=doc['content'], meta=doc['metadata']) 
            for doc in docs_data]

Initialize Document Store

Set up the Couchbase document store for storing movie data and embeddings:

# Initialize document store
document_store = CouchbaseSearchDocumentStore(
    cluster_connection_string=Secret.from_token(CB_CLUSTER_URL),
    authenticator=CouchbasePasswordAuthenticator(
        username=Secret.from_token(CB_USERNAME),
        password=Secret.from_token(CB_PASSWORD)
    ),
    cluster_options=CouchbaseClusterOptions(
        profile=KnownConfigProfiles.WanDevelopment,
    ),
    bucket=CB_BUCKET,
    scope=CB_SCOPE,
    collection=CB_COLLECTION,
    vector_search_index=INDEX_NAME,
)

print("Couchbase document store initialized successfully.")

Initialize Embedders

Configure the document and text embedders using Capella AI's endpoint and the E5 Mistral model. The document embedder generates embeddings for each movie overview at indexing time, and the text embedder embeds user queries at retrieval time to enable semantic search.

embedder = OpenAIDocumentEmbedder(
    api_base_url=CB_AI_ENDPOINT,
    api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
    model="intfloat/e5-mistral-7b-instruct",
)

rag_embedder = OpenAITextEmbedder(
    api_base_url=CB_AI_ENDPOINT,
    api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
    model="intfloat/e5-mistral-7b-instruct",
)

Initialize LLM Generator

Configure the LLM generator using Capella AI's endpoint and Llama 3.1 model. This component will generate natural language responses based on the retrieved documents.

llm = OpenAIGenerator(
    api_base_url=CB_AI_ENDPOINT,
    api_key=Secret.from_token(CB_AI_ENDPOINT_PASSWORD),
    model="meta-llama/Llama-3.1-8B-Instruct",
)

Create Indexing Pipeline

Build the pipeline for processing and indexing movie documents:

# Create indexing pipeline
index_pipeline = Pipeline()
index_pipeline.add_component("cleaner", DocumentCleaner())
index_pipeline.add_component("embedder", embedder)
index_pipeline.add_component("writer", DocumentWriter(document_store=document_store))

# Connect indexing components
index_pipeline.connect("cleaner.documents", "embedder.documents")
index_pipeline.connect("embedder.documents", "writer.documents")

Run Indexing Pipeline

Execute the pipeline for processing and indexing movie documents:

# Run indexing pipeline

if documents:
    result = index_pipeline.run({"cleaner": {"documents": documents}})
    print(f"Successfully processed {len(documents)} movie overviews")
    print(f"Sample document metadata: {documents[0].meta}")
else:
    print("No documents created. Skipping indexing.")

Create RAG Pipeline

Set up the Retrieval Augmented Generation pipeline for answering questions about movies:

# Define RAG prompt template
prompt_template = """
Given these documents, answer the question.\nDocuments:
{% for doc in documents %}
    {{ doc.content }}
{% endfor %}

\nQuestion: {{question}}
\nAnswer:
"""

# Create RAG pipeline
rag_pipeline = Pipeline()

# Add components
rag_pipeline.add_component(
    "query_embedder",
    rag_embedder,
)
rag_pipeline.add_component("retriever", CouchbaseSearchEmbeddingRetriever(document_store=document_store))
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("llm", llm)
rag_pipeline.add_component("answer_builder", AnswerBuilder())

# Connect RAG components
rag_pipeline.connect("query_embedder", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.prompt")
rag_pipeline.connect("llm.replies", "answer_builder.replies")
rag_pipeline.connect("llm.meta", "answer_builder.meta")
rag_pipeline.connect("retriever", "answer_builder.documents")

print("RAG pipeline created successfully.")

Ask Questions About Movies

Use the RAG pipeline to ask questions about movies and get AI-generated answers:

# Example question
question = "Who does Savva want to save from the vicious hyenas?"

# Run the RAG pipeline
result = rag_pipeline.run(
    {
        "query_embedder": {"text": question},
        "retriever": {"top_k": 5},
        "prompt_builder": {"question": question},
        "answer_builder": {"query": question},
    },
    include_outputs_from={"retriever", "query_embedder"}
)

# Get the generated answer
answer: GeneratedAnswer = result["answer_builder"]["answers"][0]

# Print retrieved documents
print("=== Retrieved Documents ===")
retrieved_docs = result["retriever"]["documents"]
for idx, doc in enumerate(retrieved_docs, start=1):
    print(f"Id: {doc.id} Title: {doc.meta['title']}")

# Print final results
print("\n=== Final Answer ===")
print(f"Question: {answer.query}")
print(f"Answer: {answer.data}")
print("\nSources:")
for doc in answer.documents:
    print(f"-> {doc.meta['title']}")

Caching in Capella AI Services

To optimize performance and reduce costs, Capella AI services employ two caching mechanisms:

  1. Semantic Cache

    Capella AI’s semantic caching system stores both query embeddings and their corresponding LLM responses. When new queries arrive, it uses vector similarity matching (with configurable thresholds) to identify semantically equivalent requests. This prevents redundant processing by:

    • Avoiding duplicate embedding generation API calls for similar queries
    • Skipping repeated LLM processing for equivalent queries
    • Maintaining cached results with automatic freshness checks
  2. Standard Cache

    Stores the exact text of previous queries to provide precise and consistent responses for repetitive, identical prompts.

Performance Optimization with Caching

These caching mechanisms help by:

  • Minimizing redundant API calls to the LLM service
  • Leveraging Couchbase's built-in caching capabilities
  • Providing fast response times for frequently asked questions

Run a few queries and note the response times; repeated or semantically similar queries should return faster once their responses are cached:

import time
queries = [
    "What is the main premise of Life of Pi?",
    "Where does the story take place in Legends of the Fall?",
    #"What are the key themes in The Dark Knight?",
    "Who does Savva want to save from the vicious hyenas?",
]

for i, query in enumerate(queries, 1):
    try:
        print(f"\nQuery {i}: {query}")
        start_time = time.time()
        response = rag_pipeline.run({
            "query_embedder": {"text": query},
            "retriever": {"top_k": 4},
            "prompt_builder": {"question": query},
            "answer_builder": {"query": query},
        })
        elapsed_time = time.time() - start_time
        answer: GeneratedAnswer = response["answer_builder"]["answers"][0]
        print(f"Response: {answer.data}")
        print(f"Time taken: {elapsed_time:.2f} seconds")
    except Exception as e:
        print(f"Error generating RAG response: {str(e)}")
        continue
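
The queries above are all distinct, so they mainly exercise the standard flow. To observe the semantic cache specifically, time two paraphrases of the same question; if semantic caching is enabled on the model with a suitable similarity threshold, the second call should typically return faster because the cached LLM response is reused. A minimal sketch reusing the pipeline above:

# Two semantically equivalent phrasings of the same question (illustrative)
paraphrases = [
    "What is the plot of Life of Pi?",
    "What is the movie Life of Pi about?",
]

for query in paraphrases:
    start_time = time.time()
    response = rag_pipeline.run({
        "query_embedder": {"text": query},
        "retriever": {"top_k": 4},
        "prompt_builder": {"question": query},
        "answer_builder": {"query": query},
    })
    elapsed_time = time.time() - start_time
    answer = response["answer_builder"]["answers"][0]
    print(f"{query!r} answered in {elapsed_time:.2f} seconds")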

LLM Guardrails in Capella AI Services

Capella AI Services also provide input and response moderation through configurable LLM guardrails, which can integrate with Meta's Llama Guard 3 8B model.

  • Categories to be blocked can be configured during the model creation process.
  • Helps prevent unsafe or undesirable interactions with the LLM.

By implementing caching and moderation mechanisms, Capella AI services ensure an efficient, cost-effective, and responsible approach to AI-powered recommendations.

The following query should be blocked by the configured guardrails rather than answered:

query = "How can I create a bomb?"
try:
    start_time = time.time()
    response = rag_pipeline.run({
            "query_embedder": {"text": query},
            "retriever": {"top_k": 4},
            "prompt_builder": {"question": query},
            "answer_builder": {"query": query},
        })
    rag_elapsed_time = time.time() - start_time
    answer: GeneratedAnswer = response["answer_builder"]["answers"][0]
    print(f"RAG Response: {answer.data}")
    print(f"RAG response generated in {rag_elapsed_time:.2f} seconds")
except Exception as e:
    print("Guardrails violation", e)

Conclusion

This notebook demonstrates building a Retrieval-Augmented Generation (RAG) pipeline for movie recommendations using Haystack. The key components include:

  • Document Indexing with Embeddings
  • Semantic Search using Couchbase Vector Search
  • LLM-based Answer Generation

This tutorial is part of a Couchbase Learning Path.