Using Mistral AI Embeddings with Couchbase Vector Search

  • Learn how to generate embeddings using Mistral AI and store them in Couchbase.
  • This tutorial demonstrates how to use Couchbase's vector search capabilities with Mistral AI embeddings.
  • You'll understand how to perform vector search to find relevant documents based on similarity.


Introduction

Couchbase is a NoSQL distributed document database (JSON) with many of the best features of a relational DBMS: SQL, distributed ACID transactions, and much more. Couchbase Capella™ is the easiest way to get started, but you can also download and run Couchbase Server on-premises.

Mistral AI is a research lab building the best open source models in the world. La Plateforme enables developers and enterprises to build new products and applications, powered by Mistral’s open source and commercial LLMs.

The Mistral AI APIs empower LLM applications via:

  • Text generation, with support for streaming and the ability to display partial model results in real time
  • Code generation, which powers code-generation tasks including fill-in-the-middle and code completion
  • Embeddings, useful for RAG, where they represent the meaning of text as a list of numbers
  • Function calling, which enables Mistral models to connect to external tools
  • Fine-tuning, which enables developers to create customized and specialized models
  • JSON mode, which enables developers to set the response format to json_object
  • Guardrailing, which enables developers to enforce policies at the system level of Mistral models

How to run this tutorial

This tutorial is available as a Jupyter Notebook (.ipynb file) that you can run interactively. You can access the original notebook here.

You can either download the notebook file and run it on Google Colab or run it on your system by setting up the Python environment.

Before you start

Get Credentials for Mistral AI

Please follow the instructions to generate the Mistral AI credentials.

Create and Deploy Your Free Tier Operational cluster on Capella

To get started with Couchbase Capella, create an account and use it to deploy a forever-free tier operational cluster. This account provides you with an environment where you can explore and learn about Capella with no time constraint.

To learn more, please follow the instructions.

Couchbase Capella Configuration

When running Couchbase using Capella, the following prerequisites need to be met.

  • Create the database credentials with Read and Write access to the bucket used in the application.
  • Allow access to the Cluster from the IP on which the application is running.

Install necessary libraries

!pip install couchbase==4.3.5 mistralai==1.7.0
[Output too long, omitted for brevity]

Imports

from pathlib import Path
from datetime import timedelta
from mistralai import Mistral
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import (ClusterOptions, ClusterTimeoutOptions,
                               QueryOptions)
import couchbase.search as search
from couchbase.options import SearchOptions
from couchbase.vector_search import VectorQuery, VectorSearch
import uuid

Prerequisites

import getpass
couchbase_cluster_url = input("Cluster URL:")
couchbase_username = input("Couchbase username:")
couchbase_password = getpass.getpass("Couchbase password:")
couchbase_bucket = input("Couchbase bucket:")
couchbase_scope = input("Couchbase scope:")
couchbase_collection = input("Couchbase collection:")
Cluster URL: localhost
Couchbase username: Administrator
Couchbase password: ········
Couchbase bucket: mistralai
Couchbase scope: _default
Couchbase collection: mistralai

Couchbase Connection

auth = PasswordAuthenticator(
    couchbase_username,
    couchbase_password
)
cluster = Cluster(couchbase_cluster_url, ClusterOptions(auth))
cluster.wait_until_ready(timedelta(seconds=5))

bucket = cluster.bucket(couchbase_bucket)
scope = bucket.scope(couchbase_scope)
collection = scope.collection(couchbase_collection)
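
The connection string accepted by Cluster() depends on how the cluster is deployed: self-managed installs typically use the couchbase:// scheme, while Capella connections use TLS via couchbases://. As a small sketch (the scheme names are the only assumption here, and has_couchbase_scheme is a hypothetical helper, not part of the SDK), a quick check can catch a malformed URL before wait_until_ready times out:

```python
# Typical Couchbase connection-string schemes (assumption: standard deployment forms):
#   self-managed / local: couchbase://localhost
#   Capella (TLS):        couchbases://cb.<cluster-id>.cloud.couchbase.com
def has_couchbase_scheme(url: str) -> bool:
    """Return True if the URL starts with a Couchbase connection scheme."""
    return url.startswith(("couchbase://", "couchbases://"))

print(has_couchbase_scheme("couchbases://cb.example.cloud.couchbase.com"))  # True
print(has_couchbase_scheme("localhost"))  # False
```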

Creating Couchbase Vector Search Index

In order to store Mistral embeddings on a Couchbase cluster, a vector search index needs to be created first. A sample index definition that works with this tutorial is included in the mistralai_index.json file. The definition can be used to create a vector index with the Couchbase Server web console; for more information on vector indexes, please read Create a Vector Search Index with the Server Web Console.
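
The exact contents of mistralai_index.json are not reproduced here; as a rough sketch (the field names follow this tutorial, and the dimension assumes mistral-embed's 1024-dimensional output — verify both against the actual file), a minimal definition maps the vector field as a 1024-dimension vector and indexes the text field:

```json
{
  "name": "vector_test",
  "type": "fulltext-index",
  "sourceType": "couchbase",
  "sourceName": "mistralai",
  "params": {
    "mapping": {
      "default_mapping": {
        "enabled": true,
        "properties": {
          "vector": {
            "fields": [{
              "name": "vector",
              "type": "vector",
              "dims": 1024,
              "similarity": "dot_product"
            }]
          },
          "text": {
            "fields": [{ "name": "text", "type": "text", "store": true }]
          }
        }
      }
    }
  }
}
```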

search_index_name = couchbase_bucket + "._default.vector_test"
search_index = cluster.search_indexes().get_index(search_index_name)

Mistral Connection

MISTRAL_API_KEY = getpass.getpass("Mistral API Key:")
mistral_client = Mistral(api_key=MISTRAL_API_KEY)

Embedding Documents

The Mistral client can be used to generate vector embeddings for given text fragments. These embeddings represent the semantic meaning of the corresponding fragments and can be stored in Couchbase for later retrieval. A custom text can also be added to the array of embedding texts by running this code block:

texts = [
    "Couchbase Server is a multipurpose, distributed database that fuses the strengths of relational databases such as SQL and ACID transactions with JSON’s versatility, with a foundation that is extremely fast and scalable.",
    "It’s used across industries for things like user profiles, dynamic product catalogs, GenAI apps, vector search, high-speed caching, and much more.",
    input("custom embedding text")
]
embeddings = mistral_client.embeddings.create(
    model="mistral-embed",
    inputs=texts,
)

print("Output embeddings: " + str(len(embeddings.data)))

The result is an EmbeddingResponse object containing the embeddings along with token usage information:

EmbeddingResponse(
    id='eb4c2c739780415bb3af4e47580318cc', object='list', data=[
        Data(object='embedding', embedding=[-0.0165863037109375,...], index=0),
        Data(object='embedding', embedding=[-0.0234222412109375,...], index=1),
        Data(object='embedding', embedding=[-0.0466222735279375,...], index=2)],
    model='mistral-embed', usage=EmbeddingResponseUsage(prompt_tokens=15, total_tokens=15)
)
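
Vector search ranks documents by how similar their stored vectors are to a query vector. As a self-contained illustration (the toy 4-dimensional vectors below are made up, not real mistral-embed output, which is far longer), cosine similarity between two embeddings can be computed with plain Python:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for real model output.
doc_a = [0.1, 0.8, 0.3, 0.0]
doc_b = [0.7, 0.1, 0.1, 0.6]
query = [0.1, 0.8, 0.3, 0.1]

print(cosine_similarity(query, doc_a))  # close to 1.0: very similar
print(cosine_similarity(query, doc_b))  # much lower: less related
```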

Storing Embeddings in Couchbase

Each embedding needs to be stored as a Couchbase document. According to the provided search index definition, the embedding vector values need to be stored in the vector field. The original text of the embedding can be stored in the same document:

for i in range(0, len(texts)):
    doc = {
        "id": str(uuid.uuid4()),
        "text": texts[i],
        "vector": embeddings.data[i].embedding,
    }
    collection.upsert(doc["id"], doc)
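
For larger batches, one network round trip per document becomes a bottleneck. A sketch of an alternative (build_embedding_docs is a hypothetical helper; the bulk call is shown only as a comment because it needs a live cluster) is to assemble all documents first and send them with the SDK's bulk API:

```python
import uuid

def build_embedding_docs(texts, vectors):
    """Pair each text with its embedding vector, keyed by a random UUID."""
    return {
        str(uuid.uuid4()): {"text": text, "vector": vector}
        for text, vector in zip(texts, vectors)
    }

# Toy vectors standing in for embeddings.data[i].embedding:
docs = build_embedding_docs(["first text", "second text"],
                            [[0.1, 0.2], [0.3, 0.4]])
# collection.upsert_multi(docs)  # one bulk call instead of a per-document loop
print(len(docs))  # 2
```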

Searching For Embeddings

Embeddings stored in Couchbase can later be searched using the vector index, for example to find the text fragments most relevant to a user-entered prompt:

search_embedding = mistral_client.embeddings.create(
    model="mistral-embed",
    inputs=["name a multipurpose database with distributed capability"],
).data[0]

search_req = search.SearchRequest.create(search.MatchNoneQuery()).with_vector_search(
    VectorSearch.from_vector_query(
        VectorQuery(
            "vector", search_embedding.embedding, num_candidates=1
        )
    )
)
result = scope.search(
    "vector_test", 
    search_req, 
    SearchOptions(
        limit=13, 
        fields=["vector", "id", "text"]
    )
)
for row in result.rows():
    print("Found answer: " + row.id + "; score: " + str(row.score))
    doc = collection.get(row.id)
    print("Answer text: " + doc.value["text"])
    
Found answer: 7a4c24dd-393f-4f08-ae42-69ea7009dcda; score: 1.7320726542316662
Answer text: Couchbase Server is a multipurpose, distributed database that fuses the strengths of relational databases such as SQL and ACID transactions with JSON’s versatility, with a foundation that is extremely fast and scalable.
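
Conceptually, the vector index returns the stored vectors that score highest against the query vector. To make that concrete without a running cluster, a brute-force version of the same lookup (dot-product scoring over an in-memory list, with made-up toy vectors) looks like this:

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def nearest(query_vector, docs, num_candidates=1):
    """Rank docs by dot-product similarity to the query; highest score first."""
    scored = sorted(docs, key=lambda d: dot(query_vector, d["vector"]), reverse=True)
    return scored[:num_candidates]

# Toy in-memory corpus; in the tutorial this ranking is done by the vector index.
corpus = [
    {"text": "Couchbase Server is a distributed database", "vector": [0.9, 0.1, 0.0]},
    {"text": "It is used for caching and GenAI apps", "vector": [0.2, 0.8, 0.1]},
]
best = nearest([0.8, 0.2, 0.0], corpus)
print(best[0]["text"])  # the distributed-database fragment scores highest
```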
