Couchbase Server is a fully distributed database, making connection management and efficient communication key components of the architecture. This section provides information about client to cluster, node to node, cluster to cluster, and cluster to external products communications. It also describes the phases of establishing a connection.
## Client to Cluster Communication

Client applications communicate with Couchbase Server through a set of access points, each tuned for a category of data access such as CRUD operations, N1QL queries, and so on. Each access point supports both clear-text and encrypted (SSL) communication ports.
There are four main types of access points that drive the majority of client to server communications.
| Protocol | Port | Service | Connection Type |
|---|---|---|---|
| REST | 8091, 18091 (SSL) | Admin operations with the REST Admin API | Direct connection to a single node in the cluster to perform admin operations, monitoring, and alerting. |
| REST | 8092, 18092 (SSL) | Query with Views (View and Spatial View API) | Load-balanced connection across the nodes of the cluster that run the data service, for View queries. |
| REST | 8093, 18093 (SSL) | Query with N1QL (N1QL API) | Load-balanced connection across the nodes of the cluster that run the query service, for N1QL queries. |
| ONLINE | 11210, 11207 (SSL) | Core data operations | Stateful connections from the client application to the nodes of the cluster that run the data service, for CRUD operations. |
| REST | 8094 | Search service (Developer Preview) | Load-balanced connection across the nodes of the cluster that run the search service, for full-text search queries. |
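As an illustrative sketch only (not part of any Couchbase SDK), the port table above can be captured as a small lookup that selects the clear-text or SSL port for a given service; the service names used here are hypothetical labels:

```python
# Hypothetical port lookup built from the table above.
# "search" is a Developer Preview service with no SSL port listed.
PORTS = {
    "admin":  {"plain": 8091,  "ssl": 18091},
    "views":  {"plain": 8092,  "ssl": 18092},
    "n1ql":   {"plain": 8093,  "ssl": 18093},
    "data":   {"plain": 11210, "ssl": 11207},
    "search": {"plain": 8094},
}

def port_for(service: str, use_ssl: bool = False) -> int:
    """Return the access-point port for a service, preferring SSL on request."""
    entry = PORTS[service]
    if use_ssl:
        if "ssl" not in entry:
            raise ValueError(f"no SSL port listed for {service!r}")
        return entry["ssl"]
    return entry["plain"]
```

For example, a client configured for encrypted key-value traffic would target `port_for("data", use_ssl=True)`, i.e. port 11207.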
## Node to Node Communication

Nodes of the cluster communicate with each other to replicate data, maintain indexes, check the health of nodes, communicate changes to the cluster configuration, and much more.
Node to node communication is optimized for high-efficiency operation and may not go through all of the connectivity phases (authentication, discovery, and service connection) described later in this section.
## Cluster to Cluster Communication

Couchbase Server clusters can communicate with each other using the Cross Datacenter Replication (XDCR) capability.
XDCR communication is set up from a source cluster to a destination cluster. For more information, see Cross Datacenter Replication.
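To make the source-to-destination setup concrete, the following sketch builds the two REST requests involved: registering the destination cluster on the source, then starting a replication. The endpoint paths follow the Couchbase REST API, but treat the parameter details as assumptions to verify against your server version; nothing is sent over the network here:

```python
# Hedged sketch of XDCR setup requests; verify fields against your
# Couchbase version's REST API reference before use.

def remote_cluster_request(name, hostname, username, password):
    """Request that registers the destination cluster on the source cluster."""
    return ("POST", "/pools/default/remoteClusters", {
        "name": name,          # label the source uses for the destination
        "hostname": hostname,  # host:port of a node in the destination cluster
        "username": username,
        "password": password,
    })

def create_replication_request(from_bucket, to_cluster, to_bucket):
    """Request that starts a continuous replication to the destination."""
    return ("POST", "/controller/createReplication", {
        "fromBucket": from_bucket,
        "toCluster": to_cluster,  # name registered by remote_cluster_request
        "toBucket": to_bucket,
        "replicationType": "continuous",
    })
```

A real client would send each request to port 8091 (or 18091 for SSL) on the source cluster with admin credentials.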
## External Connector Communication

Couchbase Server also communicates with external products through connectors.
Couchbase has built and supports connectors for Spark, Kafka, Elasticsearch, SOLR, and so on.
The community and other companies have also built additional connectors, such as ODBC and JDBC drivers and the Flume, Storm, and Nagios connectors for Couchbase. External connectors are typically built using the existing client SDKs, using the service or admin APIs listed in the Client to Cluster Communication section, or by feeding directly from internal APIs such as the Database Change Protocol (DCP) API. For more information about the Database Change Protocol, see Intra-cluster Replication.
## Connectivity Phases

When a connection request comes in from the client side, the connection is established in three phases: authentication, discovery, and service connection.
- Authentication: In the first phase, the connection to a bucket is authenticated based on the credentials provided by the client. In the case of the Admin REST API, admin users are authenticated for the entire cluster, not just a single bucket.
- Discovery: In the second phase, the connection receives a cluster map that represents the topology of the cluster, including the list of nodes, how data is distributed across those nodes, and the services that run on them. Client applications using the SDKs only need to know the URL or address of one node in the cluster; with the cluster map, they discover all the other nodes and the entire topology of the cluster.
- Service Connection: Armed with the cluster map, client SDKs determine which connections to establish in order to perform service-level operations through the key-value, N1QL, or View APIs. Service connections require a secondary authentication to the service, ensuring that the credentials passed to the service have access to its operations. Once authentication clears, the connection to the service is established.
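To make the discovery phase concrete, the sketch below shows how a client can route a document key to a data-service node using a cluster map. The cluster-map structure here is heavily simplified (8 vBuckets instead of a real cluster's 1024, no replicas), and while the CRC32-based hash matches the commonly documented Couchbase key-hashing scheme, treat the details as illustrative assumptions:

```python
import zlib

# Simplified, hypothetical cluster map: each vBucket maps to the index
# of the node that owns it. A real map has 1024 vBuckets plus replicas.
CLUSTER_MAP = {
    "nodes": ["10.0.0.1:11210", "10.0.0.2:11210"],
    "vbucket_map": [0, 1, 0, 1, 0, 1, 0, 1],
}

def vbucket_for(key: str, num_vbuckets: int) -> int:
    """Hash a key to a vBucket (CRC32, upper bits, as commonly documented)."""
    crc = zlib.crc32(key.encode("utf-8"))
    return ((crc >> 16) & 0x7FFF) % num_vbuckets

def node_for(key: str, cluster_map) -> str:
    """Return the data-service node responsible for a key."""
    vb = vbucket_for(key, len(cluster_map["vbucket_map"]))
    return cluster_map["nodes"][cluster_map["vbucket_map"][vb]]
```

Because every client computes the same mapping from the same cluster map, a key-value operation goes directly to the node that owns the key, with no intermediate routing hop.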
At times, the topology of the cluster may change, and a service connection may receive exceptions on its requests to the services. In such cases, client SDKs go back to the discovery phase, fetch a fresh cluster map, and retry the operation over a new connection.
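That rediscover-and-retry behavior can be sketched as follows; the `operation` and `refresh_cluster_map` callbacks are hypothetical stand-ins for what an SDK does internally, not a real SDK API:

```python
class TopologyChanged(Exception):
    """Raised when a request reaches a node that no longer owns the data."""

def run_with_rediscovery(operation, refresh_cluster_map, max_retries=3):
    """Run a service-level operation, rerunning discovery on topology changes.

    `operation` performs one service-level request; `refresh_cluster_map`
    re-fetches the cluster map (the discovery phase). Both are hypothetical
    stand-ins used only to illustrate the retry flow.
    """
    for _ in range(max_retries):
        try:
            return operation()
        except TopologyChanged:
            refresh_cluster_map()  # go back to the discovery phase
    raise RuntimeError("operation failed after rediscovery retries")
```

The key point is that the retry happens inside the SDK: the application simply sees its operation succeed after a brief delay, without handling topology changes itself.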