    Exposing the cluster

    The Operator provides several ways for client applications to access the database. In each case, the cluster is exposed through regular Kubernetes Service objects configured by the Operator.

    This document describes the usage of Custom Resource manifest options to expose the clusters deployed with the Operator.

    Using single entry point in a sharded cluster

    If Percona Server for MongoDB Sharding mode is turned on (the default behavior), the database cluster runs special mongos Pods (query routers), which act as a single entry point for client applications:

    (Diagram: client applications connect to the sharded cluster through the mongos query routers)

    If this feature is enabled, the URI looks as follows (substitute a real password obtained from the Secret, and a real namespace name for the <namespace name> placeholder):

    $ mongo "mongodb://userAdmin:userAdminPassword@my-cluster-name-mongos.<namespace name>.svc.cluster.local/admin?ssl=false"
    
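    For example, you can look up the userAdmin password like this (a sketch, assuming the default Secret name my-cluster-name-secrets and the key names used by the Operator):

    $ kubectl get secret my-cluster-name-secrets -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 --decode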

    You can find more on sharding in the official MongoDB documentation.

    Accessing replica set Pods

    If Percona Server for MongoDB Sharding mode is turned off, the application needs access to all MongoDB Pods of the replica set:

    (Diagram: client applications connect directly to the MongoDB Pods of the replica set)

    When Kubernetes creates Pods, each Pod gets an IP address in the internal virtual network of the cluster. Because Pods are created and destroyed dynamically (as a result of cluster scaling, maintenance, etc.), binding communication to specific Pod IP addresses would break over time. Instead, you should connect to Percona Server for MongoDB via Kubernetes internal DNS names in the URI (e.g. using mongodb+srv://userAdmin:userAdmin123456@<cluster-name>-rs0.<namespace>.svc.cluster.local/admin?replicaSet=rs0&ssl=false to access the replica set Pods).

    In this case, the URI looks as follows (substitute a real password obtained from the Secret, and a real namespace name for the <namespace name> placeholder):

    $ mongo "mongodb://databaseAdmin:databaseAdminPassword@my-cluster-name-rs0.<namespace name>.svc.cluster.local/admin?replicaSet=rs0&ssl=false"
    

    Service per Pod

    URI-based access through internal DNS names, as described above, is the recommended way to connect to the cluster.

    Still, there are cases when the Pods cannot be reached via the Kubernetes internal DNS names. To make the replica set Pods accessible in such cases, Percona Operator for MongoDB can assign a dedicated Kubernetes Service to each Pod.

    This feature can be configured in the replsets (for MongoDB instance Pods) and sharding (for mongos Pods) sections of the deploy/cr.yaml file (see the configuration sketch after this list):

    • set the expose.enabled option to true to allow exposing the Pods via Services,
    • set the expose.exposeType option to the Service type to be used:
      • ClusterIP - expose the Pod’s Service with an internal static IP address. This variant makes the MongoDB Pod reachable only from within the Kubernetes cluster.
      • NodePort - expose the Pod’s Service on each Kubernetes node’s IP address at a static port. A ClusterIP Service, to which the node port routes, is created automatically in this variant. As an advantage, the Service is reachable from outside the cluster by node address and port number, but the address is bound to a specific Kubernetes node.
      • LoadBalancer - expose the Pod’s Service externally using a cloud provider’s load balancer. Both ClusterIP and NodePort Services are created automatically in this variant.
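
    For example, the following deploy/cr.yaml fragment exposes the replica set Pods through LoadBalancer Services (a minimal sketch, using the default cluster layout from this page):

    spec:
      replsets:
        - name: rs0
          size: 3
          expose:
            enabled: true
            exposeType: LoadBalancer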

    If this feature is enabled, the URI looks like mongodb://databaseAdmin:databaseAdminPassword@<ip1>:<port1>,<ip2>:<port2>,<ip3>:<port3>/admin?replicaSet=rs0&ssl=false. All IP addresses should be directly reachable by the application.
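
    You can look up the addresses and ports of the resulting per-Pod Services with kubectl, for example (a sketch, assuming the Operator’s standard app.kubernetes.io/instance label and the default cluster name):

    $ kubectl get svc -l app.kubernetes.io/instance=my-cluster-name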

    Controlling hostnames in replset configuration

    Starting from v1.14, the Operator configures replica set members using local fully-qualified domain names (FQDN), which are resolvable and available only from inside the Kubernetes cluster. Exposing the replica set using the options described above will not affect hostname usage in the replica set configuration.

    Note

    Before v1.14, the Operator used the exposed IP addresses in the replica set configuration if the replica set was exposed.

    It is still possible to restore the old behavior. For example, it may be useful to have the replica set configured with external IP addresses for multi-cluster deployments. The clusterServiceDNSMode field in the Custom Resource controls this Operator behavior. You can set clusterServiceDNSMode to one of the following values (see the configuration sketch after the list):

    1. Internal: Use local FQDNs (e.g., cluster1-rs0-0.cluster1-rs0.psmdb.svc.cluster.local) in the replica set configuration even if the replica set is exposed. This is the default value.
    2. ServiceMesh: Use a special FQDN based on the Pod name (e.g., cluster1-rs0-0.psmdb.svc.cluster.local), assuming it is resolvable and available in all clusters.
    3. External: Use the exposed IP addresses in the replica set configuration if the replica set is exposed; otherwise, use local FQDNs. This copies the behavior of the Operator v1.13.
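
    For example, the following deploy/cr.yaml fragment sets the mode explicitly (a minimal sketch; the field sits at the top level of the Custom Resource spec):

    spec:
      clusterServiceDNSMode: Internal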

    If backups are enabled in your cluster, you need to restart the replset and config servers after changing clusterServiceDNSMode. This option changes the hostnames inside the replset configuration, and running pbm-agents don’t discover the change until they are restarted. You may see errors in backup-agent container logs, and your backups may not work until you restart the agents.

    Restart can be done manually with the kubectl rollout restart sts <clusterName>-<replsetName> command executed for each replica set in spec.replsets; also, if sharding is enabled, do the same for config servers with kubectl rollout restart sts <clusterName>-cfg. Alternatively, you can simply restart your cluster.
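
    For example, for a cluster named my-cluster-name with a single replica set rs0 and sharding enabled, the commands would be (a sketch using the default names from this page):

    $ kubectl rollout restart sts my-cluster-name-rs0
    $ kubectl rollout restart sts my-cluster-name-cfg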

    Warning

    You should be careful with the clusterServiceDNSMode=External variant. Using IP addresses instead of DNS hostnames is discouraged in MongoDB. IP addresses make configuration changes and recovery more complicated, and they are particularly problematic in scenarios where IP addresses change (e.g., deleting and recreating the cluster).
