This demo showcases the Semantic Kernel Couchbase connector - a .NET library that bridges Microsoft's Semantic Kernel framework with Couchbase's vector search capabilities. The connector provides a seamless integration that allows developers to build AI-powered applications using familiar Semantic Kernel abstractions while leveraging Couchbase's vector indexing for high-performance semantic search.
The connector supports three index types:

- Hyperscale Vector indexes
- Composite Vector indexes
- Search Vector indexes
This makes the connector ideal for RAG (Retrieval-Augmented Generation) applications, semantic search engines, hybrid search, and recommendation systems.
You'll need an OpenAI API key with access to the text-embedding-3-small model.

```bash
git clone https://github.com/couchbase-examples/couchbase-semantic-kernel-quickstart.git
cd couchbase-semantic-kernel-quickstart/CouchbaseVectorSearchDemo
dotnet restore
```

Update appsettings.Development.json with your credentials:

```json
{
  "OpenAI": {
    "ApiKey": "your-openai-api-key-here",
    "EmbeddingModel": "text-embedding-3-small"
  },
  "Couchbase": {
    "ConnectionString": "couchbase://localhost",
    "Username": "Administrator",
    "Password": "your-password",
    "BucketName": "demo",
    "ScopeName": "semantic-kernel",
    "CollectionName": "glossary"
  }
}
```

Note: The `BucketName`, `ScopeName`, and `CollectionName` values can be changed to match your Couchbase setup, but you'll need to update the corresponding code references in the demo application accordingly.
The demo uses a Glossary class that demonstrates Semantic Kernel's vector store data model. The model uses attributes to define how properties are stored and indexed in the vector database.
For a comprehensive guide on data modeling in Semantic Kernel, refer to Defining your data model in the official documentation.
```csharp
internal sealed class Glossary
{
    [VectorStoreKey]
    public string Key { get; set; }

    [VectorStoreData(IsIndexed = true)]
    public string Category { get; set; }

    [VectorStoreData]
    public string Term { get; set; }

    [VectorStoreData]
    public string Definition { get; set; }

    [VectorStoreVector(Dimensions: 1536)]
    public ReadOnlyMemory<float> DefinitionEmbedding { get; set; }
}
```

Ensure you have the bucket, scope, and collection ready in Couchbase:

- Bucket: `demo`
- Scope: `semantic-kernel`
- Collection: `glossary`

This step demonstrates how the connector works with Semantic Kernel's vector store abstractions:
Getting the Collection - The demo uses CouchbaseVectorStore.GetCollection<TKey, TRecord>() to obtain a collection reference configured for a Hyperscale index:
```csharp
var vectorStore = new CouchbaseVectorStore(scope);
var collection = vectorStore.GetCollection<string, Glossary>(
    "glossary",
    new CouchbaseQueryCollectionOptions
    {
        SimilarityMetric = "cosine"
    }
);
```

The `CouchbaseQueryCollectionOptions` works with both Hyperscale and Composite indexes - simply specify the appropriate index name. For Search Vector indexes, use `CouchbaseSearchCollection` with `CouchbaseSearchCollectionOptions` instead.
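For comparison, here is a hedged sketch of what the Search-index path might look like. The constructor shape and the `IndexName` option are assumptions for illustration; only the `CouchbaseSearchCollection` and `CouchbaseSearchCollectionOptions` class names come from the connector.

```csharp
// Sketch only: exact constructor and option names may differ in the connector.
var searchCollection = new CouchbaseSearchCollection<string, Glossary>(
    scope,
    "glossary",
    new CouchbaseSearchCollectionOptions
    {
        IndexName = "glossary_search_index" // hypothetical Search Vector index name
    }
);
```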
Automatic Embedding Generation - The connector integrates with Semantic Kernel's IEmbeddingGenerator interface to automatically generate embeddings from text. When you provide an embedding generator (in this case, OpenAI's text-embedding-3-small), the text is automatically converted to vectors:
```csharp
// Generate embedding from text
var embedding = await embeddingGenerator.GenerateAsync(glossary.Definition);
glossary.DefinitionEmbedding = embedding.Vector;
```

For more details on embedding generation in Semantic Kernel, see Embedding Generation Documentation.
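One way to construct such a generator is via the Microsoft.Extensions.AI abstractions over the OpenAI SDK. This is a sketch; the exact extension-method names vary by package version, so treat the adapter call below as an assumption and check your installed packages.

```csharp
using Microsoft.Extensions.AI;
using OpenAI;

// Adapt the OpenAI embedding client to Semantic Kernel's IEmbeddingGenerator
// (adapter method name is version-dependent; verify against your package).
IEmbeddingGenerator<string, Embedding<float>> embeddingGenerator =
    new OpenAIClient(apiKey)
        .GetEmbeddingClient("text-embedding-3-small")
        .AsIEmbeddingGenerator();
```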
Upserting Records - The demo uses the connector's UpsertAsync() method to insert or update records in the collection:
```csharp
await collection.UpsertAsync(glossaryEntries);
```

This creates 6 sample glossary entries with technical terms, generates embeddings for each definition, and stores them in Couchbase with the following structure:

Document ID: "1" (from Key field)

Document Content:

```json
{
  "Category": "Software",
  "Term": "API",
  "Definition": "Application Programming Interface. A set of rules...",
  "DefinitionEmbedding": [0.123, -0.456, 0.789, ...] // 1536 floats
}
```

While the application works without creating indexes manually, you can optionally create a vector index for better performance.
This demo uses a Hyperscale Vector Index - optimized for pure vector searches without heavy scalar filtering.
After documents are inserted, the demo creates the Hyperscale index:
```sql
CREATE VECTOR INDEX `hyperscale_glossary_index`
ON `demo`.`semantic-kernel`.`glossary` (DefinitionEmbedding VECTOR)
INCLUDE (Category, Term, Definition)
USING GSI WITH {
  "dimension": 1536,
  "similarity": "cosine",
  "description": "IVF,SQ8"
}
```

Hyperscale Index Configuration:

- Vector field: DefinitionEmbedding (1536 dimensions)
- Similarity: cosine (optimal for OpenAI embeddings)
- Description: IVF,SQ8 (Inverted File with 8-bit scalar quantization)

Note: Composite vector indexes can be created similarly by adding scalar fields to the index definition. Use Composite indexes when your queries frequently filter on scalar values before vector comparison. For this demo, we use Hyperscale since we are demonstrating pure semantic search capabilities.
The demo performs two types of searches using the connector's SearchAsync() method with the Hyperscale index:
Using the connector's search API:
```csharp
// Generate embedding from search query
var searchVector = (await embeddingGenerator.GenerateAsync(
    "What is an Application Programming Interface?")).Vector;

// Search using the connector
var results = await collection.SearchAsync(searchVector, top: 1)
    .ToListAsync();
```

Behind the scenes, this executes a SQL++ query with ANN_DISTANCE:
```sql
SELECT META().id AS _id, Category, Term, Definition,
       ANN_DISTANCE(DefinitionEmbedding, [0.1,0.2,...], 'cosine') AS _distance
FROM `demo`.`semantic-kernel`.`glossary`
ORDER BY _distance ASC
LIMIT 1
```

Note: The distance metric (`'cosine'` in this example) comes from the `SimilarityMetric` property configured when creating the collection.
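For reference, that setting was supplied in the collection options during setup (the same snippet shown earlier in the demo):

```csharp
var collection = vectorStore.GetCollection<string, Glossary>(
    "glossary",
    new CouchbaseQueryCollectionOptions
    {
        // This value becomes the metric argument passed to ANN_DISTANCE
        SimilarityMetric = "cosine"
    }
);
```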
Expected Result: Finds "API" entry with high similarity
Even with a Hyperscale index (designed for pure vector search), the connector supports filtering using LINQ expressions with VectorSearchOptions:
```csharp
// Search with scalar filter
var results = await collection.SearchAsync(
    searchVector,
    top: 1,
    new VectorSearchOptions<Glossary>
    {
        Filter = g => g.Category == "AI"
    }).ToListAsync();
```

This translates to SQL++ with a WHERE clause:
```sql
SELECT META().id AS _id, Category, Term, Definition,
       ANN_DISTANCE(DefinitionEmbedding, [0.1,0.2,...], 'cosine') AS _distance
FROM `demo`.`semantic-kernel`.`glossary`
WHERE Category = 'AI'
ORDER BY _distance ASC
LIMIT 1
```

Query: "How do I provide additional context to an LLM?"
Expected Result: Finds "RAG" entry within AI category
Note: While Hyperscale indexes support filtering as shown above, for scenarios where you frequently filter on scalar values with highly selective filters, consider using a Composite vector index instead. The index creation syntax is similar - just add the scalar fields to the index definition. The connector's `SearchAsync()` method works identically with both index types.
Couchbase offers three types of vector indexes optimized for different use cases:
1. Hyperscale Vector Indexes ← This demo uses Hyperscale
   - Collection class: `CouchbaseQueryCollection`
   - Created with `CREATE VECTOR INDEX` as shown in Step 3
2. Composite Vector Indexes
   - Collection class: `CouchbaseQueryCollection`
3. Search Vector Indexes
   - Collection class: `CouchbaseSearchCollection`

All three index types work with the same Semantic Kernel abstractions (`SearchAsync()`, `UpsertAsync()`, etc.). The main difference is which collection class you instantiate and the underlying query engine.
Choosing the Right Type:

- Hyperscale: pure vector searches without heavy scalar filtering (this demo)
- Composite: queries that frequently filter on scalar values before vector comparison
- Search: hybrid searches combining vector similarity with full-text queries
For more details, see the Couchbase Vector Index Documentation.
The description parameter in the index definition controls vector storage optimization through centroids and quantization:
Format: `IVF[<centroids>],{PQ|SQ}<settings>`

Centroids (IVF - Inverted File)

- If the centroid count is omitted (e.g., `IVF,SQ8`), Couchbase auto-selects based on dataset size

Quantization Options

- Scalar quantization: `SQ4`, `SQ6`, `SQ8` (4, 6, or 8 bits per dimension)
- Product quantization: `PQx` (e.g., `PQ32x8`)

Common Examples:

- `IVF,SQ8` - Auto centroids, 8-bit quantization (good default)
- `IVF1000,SQ6` - 1000 centroids, 6-bit quantization (faster, less accurate)
- `IVF,PQ32x8` - Auto centroids, product quantization (better accuracy)

For detailed configuration options, see the Quantization & Centroid Settings documentation.
```bash
dotnet build
dotnet run
```

Expected output:

```
Couchbase Hyperscale Vector Search Demo
====================================
Using OpenAI model: text-embedding-3-small

Step 1: Ingesting data into Couchbase vector store...
Data ingestion completed

Step 2: Creating Hyperscale vector index manually...
Executing Hyperscale index creation query...
Hyperscale vector index 'hyperscale_glossary_index' created successfully!

Step 3: Performing vector search...
Found: API
Definition: Application Programming Interface. A set of rules and specifications that allow software components to communicate and exchange data.
Score: 0.1847

Step 4: Performing filtered vector search...
Found (AI category only): RAG
Definition: Retrieval Augmented Generation - a term that refers to the process of retrieving additional data to provide as context to an LLM to use when generating a response (completion) to a user's question (prompt).
Score: 0.4226

Demo completed successfully!
```

The Couchbase Semantic Kernel connector provides a seamless integration between Semantic Kernel's vector store abstractions and Couchbase's vector search capabilities:
- Create a `CouchbaseVectorStore` instance using a Couchbase scope
- Call `GetCollection<TKey, TRecord>()` to get a typed collection reference
- Use `IEmbeddingGenerator` to convert text to vectors
- Call `UpsertAsync()` to insert/update records with embeddings
- Call `SearchAsync()` with optional `VectorSearchOptions` for filtered searches

Vector Store Classes:
- `CouchbaseVectorStore` - Main entry point for vector store operations
- `CouchbaseQueryCollection` - Collection class for Hyperscale and Composite indexes (SQL++)
- `CouchbaseSearchCollection` - Collection class for Search Vector indexes (Search, formerly known as Full Text service)

Common Methods (all index types):
- `GetCollection<TKey, TRecord>()` - Returns a typed collection for CRUD operations
- `UpsertAsync()` - Inserts or updates records in the collection
- `SearchAsync()` - Performs vector similarity search with optional filters
- `VectorSearchOptions` - Configures search behavior including filters and result count

Configuration Options:
- `CouchbaseQueryCollectionOptions` - For Hyperscale and Composite indexes
- `CouchbaseSearchCollectionOptions` - For Search Vector indexes

For more documentation, visit the connector repository.