In this guide, we will walk you through building a powerful semantic search engine using Couchbase as the backend database, OpenAI as the embedding provider, and Anthropic (Claude) as the language model provider. Semantic search goes beyond simple keyword matching by understanding the context and meaning behind the words in a query, making it an essential tool for applications that require intelligent information retrieval. This tutorial is designed to be beginner-friendly, with clear, step-by-step instructions that will equip you with the knowledge to create a fully functional semantic search system from scratch.
This tutorial is available as a Jupyter Notebook (.ipynb file) that you can run interactively. You can access the original notebook here.
You can either download the notebook file and run it on Google Colab or run it on your system by setting up the Python environment.
To get started with Couchbase Capella, create an account and use it to deploy a forever free tier operational cluster. This account provides you with an environment where you can explore and learn about Capella with no time constraint.
To learn more, please follow the instructions.
When running Couchbase using Capella, the following prerequisites need to be met.
To build our semantic search engine, we need a robust set of tools. The libraries we install handle everything from connecting to databases to performing complex machine learning tasks. Each library has a specific role: the Couchbase libraries manage database operations, LangChain handles the AI model integrations, OpenAI provides the model for generating embeddings, and Claude (by Anthropic) serves as the language model for understanding and answering natural language queries. By setting up these libraries, we ensure our environment is equipped to handle the data-intensive and computationally complex tasks required for semantic search.
!pip install datasets langchain-couchbase langchain-anthropic langchain-openai
[Output too long, omitted for brevity]
The script starts by importing a series of libraries required for various tasks, including handling JSON, logging, time tracking, Couchbase connections, embedding generation, and dataset loading. These libraries provide essential functions for working with data, managing database connections, and processing machine learning models.
import json
import logging
import time
import sys
import getpass
from datetime import timedelta
from uuid import uuid4
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.exceptions import CouchbaseException, InternalServerFailureException, QueryIndexAlreadyExistsException
from couchbase.management.search import SearchIndex
from couchbase.options import ClusterOptions
from datasets import load_dataset
from langchain_anthropic import ChatAnthropic
from langchain_core.documents import Document
from langchain_core.globals import set_llm_cache
from langchain_core.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_couchbase.cache import CouchbaseCache
from langchain_couchbase.vectorstores import CouchbaseVectorStore
from langchain_openai import OpenAIEmbeddings
from tqdm import tqdm
Logging is configured to track the progress of the script and capture any errors or warnings. This is crucial for debugging and understanding the flow of execution. The logging output includes timestamps, log levels (e.g., INFO, ERROR), and messages that describe what is happening in the script.
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', force=True)
In this section, we prompt the user to input the essential configuration settings needed for this setup. These settings include sensitive information like API keys, database credentials, and specific configuration names. Instead of hardcoding these details into the script, we request the user to provide them at runtime, ensuring flexibility and security.
The script also validates that all required inputs are provided, raising an error if any crucial information is missing. This approach ensures that your integration is both secure and correctly configured without hardcoding sensitive information, enhancing the overall security and maintainability of your code.
ANTHROPIC_API_KEY = getpass.getpass('Enter your Anthropic API key: ')
OPENAI_API_KEY = getpass.getpass('Enter your OpenAI API key: ')
CB_HOST = input('Enter your Couchbase host (default: couchbase://localhost): ') or 'couchbase://localhost'
CB_USERNAME = input('Enter your Couchbase username (default: Administrator): ') or 'Administrator'
CB_PASSWORD = getpass.getpass('Enter your Couchbase password (default: password): ') or 'password'
CB_BUCKET_NAME = input('Enter your Couchbase bucket name (default: vector-search-testing): ') or 'vector-search-testing'
INDEX_NAME = input('Enter your index name (default: vector_search_claude): ') or 'vector_search_claude'
SCOPE_NAME = input('Enter your scope name (default: shared): ') or 'shared'
COLLECTION_NAME = input('Enter your collection name (default: claude): ') or 'claude'
CACHE_COLLECTION = input('Enter your cache collection name (default: cache): ') or 'cache'
# Check if the variables are correctly loaded
if not ANTHROPIC_API_KEY:
    raise ValueError("ANTHROPIC_API_KEY is not set in the environment.")
if not OPENAI_API_KEY:
    raise ValueError("OPENAI_API_KEY is not set in the environment.")
Enter your Anthropic API key: ··········
Enter your OpenAI API key: ··········
Enter your Couchbase host (default: couchbase://localhost): couchbases://cb.hlcup4o4jmjr55yf.cloud.couchbase.com
Enter your Couchbase username (default: Administrator): vector-search-rag-demos
Enter your Couchbase password (default: password): ··········
Enter your Couchbase bucket name (default: vector-search-testing):
Enter your index name (default: vector_search_claude):
Enter your scope name (default: shared):
Enter your collection name (default: claude):
Enter your cache collection name (default: cache):
Connecting to a Couchbase cluster is the foundation of our project. Couchbase will serve as our primary data store, handling all the storage and retrieval operations required for our semantic search engine. By establishing this connection, we enable our application to interact with the database, allowing us to perform operations such as storing embeddings, querying data, and managing collections. This connection is the gateway through which all data will flow, so ensuring it's set up correctly is paramount.
try:
    auth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)
    options = ClusterOptions(auth)
    cluster = Cluster(CB_HOST, options)
    cluster.wait_until_ready(timedelta(seconds=5))
    logging.info("Successfully connected to Couchbase")
except Exception as e:
    raise ConnectionError(f"Failed to connect to Couchbase: {str(e)}")
2024-08-29 13:33:54,574 - INFO - Successfully connected to Couchbase
In Couchbase, data is organized in buckets, which can be further divided into scopes and collections. Think of a collection as a table in a traditional SQL database. Before we can store any data, we need to ensure that our collections exist. If they don't, we must create them. This step is important because it prepares the database to handle the specific types of data our application will process. By setting up collections, we define the structure of our data storage, which is essential for efficient data retrieval and management.
Moreover, setting up collections allows us to isolate different types of data within the same bucket, providing a more organized and scalable data structure. This is particularly useful when dealing with large datasets, as it ensures that related data is stored together, making it easier to manage and query.
def setup_collection(cluster, bucket_name, scope_name, collection_name):
    try:
        bucket = cluster.bucket(bucket_name)
        bucket_manager = bucket.collections()

        # Check if collection exists, create if it doesn't
        collections = bucket_manager.get_all_scopes()
        collection_exists = any(
            scope.name == scope_name and collection_name in [col.name for col in scope.collections]
            for scope in collections
        )

        if not collection_exists:
            logging.info(f"Collection '{collection_name}' does not exist. Creating it...")
            bucket_manager.create_collection(scope_name, collection_name)
            logging.info(f"Collection '{collection_name}' created successfully.")
        else:
            logging.info(f"Collection '{collection_name}' already exists.Skipping creation.")

        collection = bucket.scope(scope_name).collection(collection_name)

        # Ensure primary index exists
        try:
            cluster.query(f"CREATE PRIMARY INDEX IF NOT EXISTS ON `{bucket_name}`.`{scope_name}`.`{collection_name}`").execute()
            logging.info("Primary index present or created successfully.")
        except Exception as e:
            logging.warning(f"Error creating primary index: {str(e)}")

        # Clear all documents in the collection
        try:
            query = f"DELETE FROM `{bucket_name}`.`{scope_name}`.`{collection_name}`"
            cluster.query(query).execute()
            logging.info("All documents cleared from the collection.")
        except Exception as e:
            logging.warning(f"Error while clearing documents: {str(e)}. The collection might be empty.")

        return collection
    except Exception as e:
        raise RuntimeError(f"Error setting up collection: {str(e)}")
setup_collection(cluster, CB_BUCKET_NAME, SCOPE_NAME, COLLECTION_NAME)
setup_collection(cluster, CB_BUCKET_NAME, SCOPE_NAME, CACHE_COLLECTION)
2024-08-29 13:33:54,801 - INFO - Collection 'claude' already exists.Skipping creation.
2024-08-29 13:33:54,836 - INFO - Primary index present or created successfully.
2024-08-29 13:33:55,491 - INFO - All documents cleared from the collection.
2024-08-29 13:33:55,529 - INFO - Collection 'cache' already exists.Skipping creation.
2024-08-29 13:33:55,567 - INFO - Primary index present or created successfully.
2024-08-29 13:33:55,605 - INFO - All documents cleared from the collection.
<couchbase.collection.Collection at 0x7c1c3a201d20>
Semantic search requires an efficient way to retrieve relevant documents based on a user's query. This is where the Couchbase Vector Search Index comes into play. In this step, we load the Vector Search Index definition from a JSON file, which specifies how the index should be structured. This includes the fields to be indexed, the dimensions of the vectors, and other parameters that determine how the search engine processes queries based on vector similarity.
For more information on creating a vector search index, please follow the instructions.
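If you don't yet have an index definition file, the sketch below shows roughly what one could contain for this tutorial's defaults. Treat it as an illustrative example rather than the exact file used here: it assumes the bucket 'vector-search-testing', scope 'shared', collection 'claude', an embedding field named 'embedding' with 1536 dimensions (matching OpenAI's text-embedding-ada-002), and a 'text' field holding the document content. Refer to the Couchbase documentation linked above for the authoritative structure.
{
  "name": "vector_search_claude",
  "type": "fulltext-index",
  "sourceType": "gocbcore",
  "sourceName": "vector-search-testing",
  "params": {
    "doc_config": {
      "mode": "scope.collection.type_field",
      "type_field": "type"
    },
    "mapping": {
      "default_mapping": { "enabled": false },
      "types": {
        "shared.claude": {
          "enabled": true,
          "dynamic": true,
          "properties": {
            "embedding": {
              "enabled": true,
              "fields": [
                { "name": "embedding", "type": "vector", "dims": 1536, "similarity": "dot_product", "index": true }
              ]
            },
            "text": {
              "enabled": true,
              "fields": [
                { "name": "text", "type": "text", "store": true, "index": true }
              ]
            }
          }
        }
      }
    }
  }
}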
# If you are running this script locally (not in Google Colab), uncomment the following line
# and provide the path to your index definition file.
# index_definition_path = '/path_to_your_index_file/claude_index.json' # Local setup: specify your file path here
# If you are running in Google Colab, use the following code to upload the index definition file
from google.colab import files
print("Upload your index definition file")
uploaded = files.upload()
index_definition_path = list(uploaded.keys())[0]
try:
    with open(index_definition_path, 'r') as file:
        index_definition = json.load(file)
except Exception as e:
    raise ValueError(f"Error loading index definition from {index_definition_path}: {str(e)}")
Upload your index definition file
Saving claude_index.json to claude_index.json
With the index definition loaded, the next step is to create or update the Vector Search Index in Couchbase. This step is crucial because it optimizes our database for vector similarity search operations, allowing us to perform searches based on the semantic content of documents rather than just keywords. By creating or updating a Vector Search Index, we enable our search engine to handle complex queries that involve finding semantically similar documents using vector embeddings, which is essential for a robust semantic search engine.
try:
    scope_index_manager = cluster.bucket(CB_BUCKET_NAME).scope(SCOPE_NAME).search_indexes()

    # Check if index already exists
    existing_indexes = scope_index_manager.get_all_indexes()
    index_name = index_definition["name"]

    if index_name in [index.name for index in existing_indexes]:
        logging.info(f"Index '{index_name}' found")
    else:
        logging.info(f"Creating new index '{index_name}'...")

    # Create SearchIndex object from JSON definition
    search_index = SearchIndex.from_json(index_definition)

    # Upsert the index (create if not exists, update if exists)
    scope_index_manager.upsert_index(search_index)
    logging.info(f"Index '{index_name}' successfully created/updated.")

except QueryIndexAlreadyExistsException:
    logging.info(f"Index '{index_name}' already exists. Skipping creation/update.")

except InternalServerFailureException as e:
    error_message = str(e)
    logging.error(f"InternalServerFailureException raised: {error_message}")

    try:
        # Accessing the response_body attribute from the context
        error_context = e.context
        response_body = error_context.response_body
        if response_body:
            error_details = json.loads(response_body)
            error_message = error_details.get('error', '')

            if "collection: 'claude' doesn't belong to scope: 'shared'" in error_message:
                raise ValueError("Collection 'claude' does not belong to scope 'shared'. Please check the collection and scope names.")
    except ValueError as ve:
        logging.error(str(ve))
        raise
    except Exception as json_error:
        logging.error(f"Failed to parse the error message: {json_error}")

    raise RuntimeError(f"Internal server error while creating/updating search index: {error_message}")
2024-08-29 13:35:01,109 - INFO - Index 'vector_search_claude' found
2024-08-29 13:35:01,317 - INFO - Index 'vector_search_claude' already exists. Skipping creation/update.
To build a search engine, we need data to search through. We use the TREC dataset, a well-known benchmark in the field of information retrieval. This dataset contains a wide variety of question text that we'll index and search over. Loading the dataset is a crucial step because it provides the raw material our search engine will work with. The quality and diversity of the data in the TREC dataset make it an excellent choice for testing and refining our search engine, ensuring that it can handle a wide range of queries effectively.
The TREC dataset's rich content allows us to simulate real-world scenarios where users ask complex questions, enabling us to fine-tune our search engine's ability to understand and respond to various types of queries.
try:
    trec = load_dataset('trec', split='train[:1000]')
    logging.info(f"Successfully loaded TREC dataset with {len(trec)} samples")
except Exception as e:
    raise ValueError(f"Error loading TREC dataset: {str(e)}")
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:89: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
Downloading builder script: 0%| | 0.00/5.09k [00:00<?, ?B/s]
Downloading readme: 0%| | 0.00/10.6k [00:00<?, ?B/s]
The repository for trec contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/trec.
You can avoid this prompt in future by passing the argument `trust_remote_code=True`.
Do you wish to run the custom code? [y/N] y
Downloading data: 0%| | 0.00/336k [00:00<?, ?B/s]
Downloading data: 0%| | 0.00/23.4k [00:00<?, ?B/s]
Generating train split: 0%| | 0/5452 [00:00<?, ? examples/s]
Generating test split: 0%| | 0/500 [00:00<?, ? examples/s]
2024-08-29 13:35:10,138 - INFO - Successfully loaded TREC dataset with 1000 samples
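Before moving on, it can help to peek at what was actually loaded. The optional check below assumes the Hugging Face TREC dataset's usual fields ('text' plus the label columns); only the 'text' field is used for embeddings later in this tutorial.
# Optional sanity check: inspect the first record and the first few question texts.
print(trec[0])            # a single record as a dictionary
print(trec['text'][:3])   # the raw question text that will be embedded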
Embeddings are at the heart of semantic search. They are numerical representations of text that capture the semantic meaning of the words and phrases. Unlike traditional keyword-based search, which looks for exact matches, embeddings allow our search engine to understand the context and nuances of language, enabling it to retrieve documents that are semantically similar to the query, even if they don't contain the exact keywords. By creating embeddings using OpenAI, we equip our search engine with the ability to understand and process natural language in a way that's much closer to how humans understand language. This step transforms our raw text data into a format that the search engine can use to find and rank relevant documents.
try:
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model='text-embedding-ada-002')
    logging.info("Successfully created OpenAIEmbeddings")
except Exception as e:
    raise ValueError(f"Error creating OpenAIEmbeddings: {str(e)}")
2024-08-29 13:35:10,317 - INFO - Successfully created OpenAIEmbeddings
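As an optional check, you can confirm that the embedding dimensionality matches the dims value in your vector search index definition (1536 for text-embedding-ada-002). This sketch uses LangChain's standard embed_query method and makes a single API call.
# Optional: verify the embedding size matches the "dims" value in the index definition.
sample_vector = embeddings.embed_query("What caused the 1929 Great Depression?")
print(f"Embedding dimensions: {len(sample_vector)}")  # expected: 1536 for text-embedding-ada-002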
A vector store is where we'll keep our embeddings. Unlike the FTS index, which is used for text-based search, the vector store is specifically designed to handle embeddings and perform similarity searches. When a user inputs a query, the search engine converts the query into an embedding and compares it against the embeddings stored in the vector store. This allows the engine to find documents that are semantically similar to the query, even if they don't contain the exact same words. By setting up the vector store in Couchbase, we create a powerful tool that enables our search engine to understand and retrieve information based on the meaning and context of the query, rather than just the specific words used.
try:
    vector_store = CouchbaseVectorStore(
        cluster=cluster,
        bucket_name=CB_BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=COLLECTION_NAME,
        embedding=embeddings,
        index_name=INDEX_NAME,
    )
    logging.info("Successfully created vector store")
except Exception as e:
    raise ValueError(f"Failed to create vector store: {str(e)}")
2024-08-29 13:35:10,989 - INFO - Successfully created vector store
With the vector store set up, the next step is to populate it with data. We save the TREC dataset to the vector store in batches. This method is efficient and ensures that our search engine can handle large datasets without running into performance issues. By saving the data in this way, we prepare our search engine to quickly and accurately respond to user queries. This step is essential for making the dataset searchable, transforming raw data into a format that can be easily queried by our search engine.
Batch processing is particularly important when dealing with large datasets, as it prevents memory overload and ensures that the data is stored in a structured and retrievable manner. This approach not only optimizes performance but also ensures the scalability of our system.
try:
    batch_size = 50
    logging.disable(sys.maxsize)  # Disable logging so log lines don't interrupt the tqdm progress bar

    for i in tqdm(range(0, len(trec['text']), batch_size), desc="Processing Batches"):
        batch = trec['text'][i:i + batch_size]
        documents = [Document(page_content=text) for text in batch]
        uuids = [str(uuid4()) for _ in range(len(documents))]
        vector_store.add_documents(documents=documents, ids=uuids)

    logging.disable(logging.NOTSET)  # Re-enable logging
except Exception as e:
    raise RuntimeError(f"Failed to save documents to vector store: {str(e)}")
Processing Batches: 100%|██████████| 20/20 [00:16<00:00, 1.22it/s]
To further optimize our system, we set up a Couchbase-based cache. A cache is a temporary storage layer that holds data that is frequently accessed, speeding up operations by reducing the need to repeatedly retrieve the same information from the database. In our setup, the cache will help us accelerate repetitive tasks, such as looking up similar documents. By implementing a cache, we enhance the overall performance of our search engine, ensuring that it can handle high query volumes and deliver results quickly.
Caching is particularly valuable in scenarios where users may submit similar queries multiple times or where certain pieces of information are frequently requested. By storing these in a cache, we can significantly reduce the time it takes to respond to these queries, improving the user experience.
try:
    cache = CouchbaseCache(
        cluster=cluster,
        bucket_name=CB_BUCKET_NAME,
        scope_name=SCOPE_NAME,
        collection_name=CACHE_COLLECTION,
    )
    logging.info("Successfully created cache")
    set_llm_cache(cache)
except Exception as e:
    raise ValueError(f"Failed to create cache: {str(e)}")
2024-08-29 13:35:27,788 - INFO - Successfully created cache
Language models are AI systems that are trained to understand and generate human language. We'll be using the Claude 3.5 Sonnet language model to process user queries and generate meaningful responses. This model is a key component of our semantic search engine, allowing it to go beyond simple keyword matching and truly understand the intent behind a query. By initializing this language model, we equip our search engine with the ability to interpret complex queries, understand the nuances of language, and provide more accurate and contextually relevant responses.
The language model's ability to understand context and generate coherent responses is what makes our search engine truly intelligent. It can not only find the right information but also present it in a way that is useful and understandable to the user.
try:
    llm = ChatAnthropic(temperature=0, anthropic_api_key=ANTHROPIC_API_KEY, model_name='claude-3-5-sonnet-20240620')
    logging.info("Successfully created ChatAnthropic")
except Exception as e:
    logging.error(f"Error creating ChatAnthropic: {str(e)}. Please check your API key and network connection.")
    raise
2024-08-29 13:35:27,933 - INFO - Successfully created ChatAnthropic
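If you want to confirm the Anthropic credentials work before wiring Claude into a retrieval chain, you can make a direct call to the model. This optional sketch uses LangChain's standard invoke method with an arbitrary prompt.
# Optional sanity check: call Claude directly, independent of retrieval.
test_reply = llm.invoke("Reply with a single short sentence confirming you can hear me.")
print(test_reply.content)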
Semantic search in Couchbase involves converting queries and documents into vector representations using an embeddings model. These vectors capture the semantic meaning of the text and are stored directly in Couchbase. When a query is made, Couchbase performs a similarity search by comparing the query vector against the stored document vectors. The similarity metric used for this comparison is configurable, allowing flexibility in how the relevance of documents is determined. Common metrics include cosine similarity, Euclidean distance, or dot product, but other metrics can be implemented based on specific use cases. Different embedding models like BERT, Word2Vec, or GloVe can also be used depending on the application's needs, with the vectors generated by these models stored and searched within Couchbase itself.
In the provided code, the search process begins by recording the start time, followed by executing the similarity_search_with_score method of the CouchbaseVectorStore. This method searches Couchbase for the most relevant documents based on the vector similarity to the query. The search results include the document content and a similarity score that reflects how closely each document aligns with the query in the defined semantic space. The time taken to perform this search is then calculated and logged, and the results are displayed, showing the most relevant documents along with their similarity scores. This approach leverages Couchbase as both a storage and retrieval engine for vector data, enabling efficient and scalable semantic searches. The integration of vector storage and search capabilities within Couchbase allows for sophisticated semantic search operations without relying on external services for vector storage or comparison.
query = "What caused the 1929 Great Depression?"
try:
    # Perform the semantic search
    start_time = time.time()
    search_results = vector_store.similarity_search_with_score(query, k=10)
    search_elapsed_time = time.time() - start_time
    logging.info(f"Semantic search completed in {search_elapsed_time:.2f} seconds")

    # Display search results
    print(f"\nSemantic Search Results (completed in {search_elapsed_time:.2f} seconds):")
    for doc, score in search_results:
        print(f"Distance: {score:.4f}, Text: {doc.page_content}")
except CouchbaseException as e:
    raise RuntimeError(f"Error performing semantic search: {str(e)}")
except Exception as e:
    raise RuntimeError(f"Unexpected error: {str(e)}")
2024-08-29 13:35:28,094 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-08-29 13:35:28,285 - INFO - Semantic search completed in 0.34 seconds
Semantic Search Results (completed in 0.34 seconds):
Distance: 0.9178, Text: Why did the world enter a global depression in 1929 ?
Distance: 0.8714, Text: When was `` the Great Depression '' ?
Distance: 0.8115, Text: What crop failure caused the Irish Famine ?
Distance: 0.7985, Text: What historical event happened in Dogtown in 1899 ?
Distance: 0.7917, Text: What caused the Lynmouth floods ?
Distance: 0.7912, Text: When did the Dow first reach ?
Distance: 0.7908, Text: When was the first Wall Street Journal published ?
Distance: 0.7885, Text: What were popular songs and types of songs in the 1920s ?
Distance: 0.7857, Text: When did World War I start ?
Distance: 0.7842, Text: What caused Harry Houdini 's death ?
Couchbase and LangChain can be seamlessly integrated to create RAG (Retrieval-Augmented Generation) chains, enhancing the process of generating contextually relevant responses. In this setup, Couchbase serves as the vector store, where embeddings of documents are stored. When a query is made, LangChain retrieves the most relevant documents from Couchbase by comparing the query’s embedding with the stored document embeddings. These documents, which provide contextual information, are then passed to a generative language model within LangChain.
The language model, equipped with the context from the retrieved documents, generates a response that is both informed and contextually accurate. This integration allows the RAG chain to leverage Couchbase’s efficient storage and retrieval capabilities, while LangChain handles the generation of responses based on the context provided by the retrieved documents. Together, they create a powerful system that can deliver highly relevant and accurate answers by combining the strengths of both retrieval and generation.
system_template = "You are a helpful assistant that answers questions based on the provided context."
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_template = "Context: {context}\n\nQuestion: {question}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([
    system_message_prompt,
    human_message_prompt
])

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": lambda x: format_docs(vector_store.similarity_search(x)), "question": RunnablePassthrough()}
    | chat_prompt
    | llm
)
logging.info("Successfully created RAG chain")
2024-08-29 13:35:28,305 - INFO - Successfully created RAG chain
# Get responses
logging.disable(sys.maxsize)  # Temporarily disable logging to keep the printed output clean
start_time = time.time()
rag_response = rag_chain.invoke(query)
rag_elapsed_time = time.time() - start_time
print(f"RAG Response: {rag_response.content}")
print(f"RAG response generated in {rag_elapsed_time:.2f} seconds")
RAG Response: Based on the context provided, the world entered a global depression in 1929. This event is commonly known as "the Great Depression." While the context doesn't provide specific causes for the Great Depression, it's generally understood that it was triggered by a combination of factors, including:
1. The stock market crash of 1929
2. Bank failures
3. A decline in consumer spending and investment
4. International economic issues, such as the gold standard and trade policies
It's important to note that the exact causes of the Great Depression are complex and still debated by historians and economists. The context doesn't provide detailed information about the specific triggers, but it does confirm that 1929 was the year when the global depression began.
RAG response generated in 3.16 seconds
Couchbase can be effectively used as a caching mechanism for RAG (Retrieval-Augmented Generation) responses by storing and retrieving precomputed results for specific queries. This approach enhances the system's efficiency and speed, particularly when dealing with repeated or similar queries. When a query is first processed, the RAG chain retrieves relevant documents, generates a response using the language model, and then stores this response in Couchbase, keyed on the full prompt sent to the model (which is built from the query and its retrieved context).
For subsequent requests with the same query, the system checks Couchbase first. If a cached response is found, it is retrieved directly from Couchbase, bypassing the need to re-run the entire RAG process. This significantly reduces response time because the computationally expensive steps of document retrieval and response generation are skipped. Couchbase's role in this setup is to provide a fast and scalable storage solution for caching these responses, ensuring that frequently asked queries can be answered more quickly and efficiently.
try:
    queries = [
        "Why do heavier objects travel downhill faster?",
        "What caused the 1929 Great Depression?",         # Repeated query
        "Why do heavier objects travel downhill faster?",  # Repeated query
    ]

    for i, query in enumerate(queries, 1):
        print(f"\nQuery {i}: {query}")
        start_time = time.time()
        response = rag_chain.invoke(query)
        elapsed_time = time.time() - start_time

        print(f"Response: {response.content}")
        print(f"Time taken: {elapsed_time:.2f} seconds")
except Exception as e:
    raise ValueError(f"Error generating RAG response: {str(e)}")
Query 1: Why do heavier objects travel downhill faster?
Response: Based on the provided context, I can answer the question about why heavier objects travel downhill faster.
Heavier objects generally travel downhill faster due to the relationship between mass, gravity, and friction. Here's a more detailed explanation:
1. Gravitational force: The force of gravity acting on an object is proportional to its mass. Heavier objects have more mass, so they experience a stronger gravitational pull.
2. Friction: While friction does act against the motion of objects rolling downhill, it doesn't increase proportionally with mass. This means that the ratio of gravitational force to friction is generally more favorable for heavier objects.
3. Inertia: Heavier objects have more inertia, which means they resist changes in motion more than lighter objects. Once they start moving, they tend to keep moving more easily.
4. Air resistance: While air resistance does affect objects moving downhill, its effect is relatively smaller on heavier objects compared to lighter ones, as the ratio of air resistance to mass is lower for heavier objects.
As a result of these factors, heavier objects typically accelerate faster and reach higher speeds when traveling downhill compared to lighter objects of similar shape and size.
It's important to note that in a vacuum (where there's no air resistance), all objects would fall at the same rate regardless of their mass. However, in real-world conditions with air resistance and friction, heavier objects generally move faster downhill.
Time taken: 5.61 seconds
Query 2: What caused the 1929 Great Depression?
Response: Based on the context provided, the world entered a global depression in 1929. This event is commonly known as "the Great Depression." While the context doesn't provide specific causes for the Great Depression, it's generally understood that it was triggered by a combination of factors, including:
1. The stock market crash of 1929
2. Bank failures
3. A decline in consumer spending and investment
4. International economic issues, such as the gold standard and trade policies
It's important to note that the exact causes of the Great Depression are complex and still debated by historians and economists. The context doesn't provide detailed information about the specific triggers, but it does confirm that 1929 was the year when the global depression began.
Time taken: 2.22 seconds
Query 3: Why do heavier objects travel downhill faster?
Response: Based on the provided context, I can answer the question about why heavier objects travel downhill faster.
Heavier objects generally travel downhill faster due to the relationship between mass, gravity, and friction. Here's a more detailed explanation:
1. Gravitational force: The force of gravity acting on an object is proportional to its mass. Heavier objects have more mass, so they experience a stronger gravitational pull.
2. Friction: While friction does act against the motion of objects rolling downhill, it doesn't increase proportionally with mass. This means that the ratio of gravitational force to friction is generally more favorable for heavier objects.
3. Inertia: Heavier objects have more inertia, which means they resist changes in motion more than lighter objects. Once they start moving, they tend to keep moving more easily.
4. Air resistance: While air resistance does affect objects moving downhill, its effect is relatively smaller on heavier objects compared to lighter ones, as the ratio of air resistance to mass is lower for heavier objects.
As a result of these factors, heavier objects typically accelerate faster and reach higher speeds when traveling downhill compared to lighter objects of similar shape and size.
It's important to note that in a vacuum (where there's no air resistance), all objects would fall at the same rate regardless of their mass. However, in real-world conditions with air resistance and friction, heavier objects generally move faster downhill.
Time taken: 0.35 seconds
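If you want to confirm that responses are actually being persisted in Couchbase rather than held only in memory, you can inspect the cache collection directly. The snippet below is a minimal sketch that assumes the bucket, scope, and cache collection names configured earlier in this tutorial.
# Optional: count how many prompt/response pairs the cache collection currently holds.
count_query = f"SELECT COUNT(*) AS cached_entries FROM `{CB_BUCKET_NAME}`.`{SCOPE_NAME}`.`{CACHE_COLLECTION}`"
for row in cluster.query(count_query):
    print(row)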
By following these steps, you’ll have a fully functional semantic search engine that leverages the strengths of Couchbase and Claude (by Anthropic). This guide is designed not just to show you how to build the system, but also to explain why each step is necessary, giving you a deeper understanding of the principles behind semantic search and how to implement it effectively. Whether you’re a newcomer to software development or an experienced developer looking to expand your skills, this guide will provide you with the knowledge and tools you need to create a powerful, AI-driven search engine.