
Install Percona Server for MongoDB on Google Kubernetes Engine (GKE)

This guide shows you how to deploy Percona Operator for MongoDB on Google Kubernetes Engine (GKE). The document assumes some experience with the platform. For more information on GKE, see the Kubernetes Engine Quickstart.

Prerequisites

All commands from this guide can be run either in the Google Cloud shell or in your local shell.

To use Google Cloud shell, you need nothing but a modern web browser.

If you would like to use your local shell, install the following:

  1. gcloud. This tool is part of the Google Cloud SDK. To install it, select your operating system on the official Google Cloud SDK documentation page and then follow the instructions.

  2. kubectl. It is the Kubernetes command-line tool you will use to manage and deploy applications. To install the tool, run the following commands:

$ gcloud auth login
$ gcloud components install kubectl
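
To verify that both tools are available before proceeding, you can check their versions (an optional sanity check):

$ gcloud version
$ kubectl version --client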

Create and configure the GKE cluster

You can configure the settings using the gcloud tool. You can run it either in the Cloud Shell or in your local shell (if you have installed Google Cloud SDK locally in the previous step). The following command will create a cluster named my-cluster-name:

$ gcloud container clusters create my-cluster-name --project <project ID> --zone us-central1-a --cluster-version 1.30 --machine-type n1-standard-4 --num-nodes=3

Note

You must edit the following command and other command-line statements to replace the <project ID> placeholder with your project ID (you can see available projects with the gcloud projects list command). You may also need to edit the zone location, which is set to us-central1-a in the above example. Other parameters specify that we are creating a cluster with 3 nodes and a machine type with 4 x86_64 vCPUs. If you need ARM64, use a different --machine-type value, for example, t2a-standard-4.
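
For example, a similar ARM64 cluster could be created as follows (a sketch; T2A machine types are available only in certain zones, so check availability for your zone first):

$ gcloud container clusters create my-cluster-name --project <project ID> --zone us-central1-a --cluster-version 1.30 --machine-type t2a-standard-4 --num-nodes=3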

You may need to wait a few minutes for the cluster to be created.

When the process is over, you can see the cluster listed in the Google Cloud console.

Select Kubernetes Engine → Clusters in the left menu panel:

[image: the list of Kubernetes clusters in the Google Cloud console]

Now you should configure command-line access to your newly created cluster so that kubectl can use it.

In the Google Cloud Console, select your cluster and then click the Connect button shown in the above image. You will see the connection statement which configures command-line access. After you have edited the statement, you can run the command in your local shell:

$ gcloud container clusters get-credentials my-cluster-name --zone us-central1-a --project <project ID>
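
To verify that kubectl can now reach the cluster, you can list the nodes; the standard -L flag additionally shows each node's CPU architecture, which is useful if you plan an ARM64 deployment:

$ kubectl get nodes -L kubernetes.io/arch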

Finally, use Cloud Identity and Access Management (Cloud IAM) to control access to the cluster. The following command will give you the ability to create Roles and RoleBindings:

$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)
Expected output
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
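
You can verify the new permissions with kubectl auth can-i, which should now answer yes:

$ kubectl auth can-i create roles --all-namespaces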

Install the Operator and deploy your MongoDB cluster

  1. Deploy the Operator. By default, the Operator is deployed into the default namespace. If that’s not the desired one, you can create a new namespace and/or set the context for the namespace as follows (replace the <namespace name> placeholder with some descriptive name):

    $ kubectl create namespace <namespace name>
    $ kubectl config set-context $(kubectl config current-context) --namespace=<namespace name>
    

    On success, you will see a message that namespace/<namespace name> was created and that the context (gke_<project name>_<zone location>_<cluster name>) was modified.
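
    To double-check which namespace your current context now points to, you can print it with a standard kubectl invocation:

    $ kubectl config view --minify --output 'jsonpath={..namespace}'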

    Deploy the Operator by applying the deploy/bundle.yaml manifest from the Operator source tree.

    For x86_64 clusters, you can apply it without downloading, using the following command:

    $ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.18.0/deploy/bundle.yaml
    
    Expected output
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
    role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
    serviceaccount/percona-server-mongodb-operator serverside-applied
    rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
    deployment.apps/percona-server-mongodb-operator serverside-applied
    

    For ARM64 clusters, clone the repository with all manifests and source code by executing the following command:

    $ git clone -b v1.18.0 https://github.com/percona/percona-server-mongodb-operator
    

    Edit the deploy/bundle.yaml file: add the following affinity rules to the spec part of the percona-server-mongodb-operator Deployment:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: percona-server-mongodb-operator
        spec:
          replicas: 1
          selector:
            matchLabels:
              name: percona-server-mongodb-operator
          template:
            metadata:
              labels:
                name: percona-server-mongodb-operator
            spec:
              affinity:
                nodeAffinity:
                  requiredDuringSchedulingIgnoredDuringExecution:
                    nodeSelectorTerms:
                      - matchExpressions:
                        - key: kubernetes.io/arch
                          operator: In
                          values:
                            - arm64
    

    After editing, apply your modified deploy/bundle.yaml file as follows:

    $ kubectl apply --server-side -f deploy/bundle.yaml
    
    Expected output
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
    role.rbac.authorization.k8s.io/percona-server-mongodb-operator serverside-applied
    serviceaccount/percona-server-mongodb-operator serverside-applied
    rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator serverside-applied
    deployment.apps/percona-server-mongodb-operator serverside-applied
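
    Whichever architecture you deployed for, you can check that the Operator is running by waiting for its Deployment (named percona-server-mongodb-operator in the bundle.yaml manifest) to roll out:

    $ kubectl rollout status deployment/percona-server-mongodb-operator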
    
  2. The Operator has been started, and you can deploy your MongoDB cluster:

    $ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.18.0/deploy/cr.yaml
    
    Expected output
    perconaservermongodb.psmdb.percona.com/my-cluster-name created
    

    Note

    This deploys the default MongoDB cluster configuration: three mongod, three mongos, and three config server instances. Please see deploy/cr.yaml and Custom Resource Options for the configuration options. You can clone the repository with all manifests and source code by executing the following command:

    $ git clone -b v1.18.0 https://github.com/percona/percona-server-mongodb-operator
    

    After editing the needed options, apply your modified deploy/cr.yaml file as follows:

    $ kubectl apply -f deploy/cr.yaml
    

    For ARM64 clusters, edit the deploy/cr.yaml file: set the following affinity rules in all affinity subsections:

    ...
    affinity:
      advanced:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
    

    Also, set image and backup.image Custom Resource options to special multi-architecture image versions by adding a -multi suffix to their tags:

    ...
    image: percona/percona-server-mongodb:7.0.14-8-multi
    ...
    backup:
      ...
      image: percona/percona-backup-mongodb:2.7.0-multi
    

    Please note that monitoring with PMM is currently not supported on ARM64 configurations.

    After editing, apply your modified deploy/cr.yaml file as follows:

    $ kubectl apply -f deploy/cr.yaml
    
    Expected output
    perconaservermongodb.psmdb.percona.com/my-cluster-name created
    

    The creation process may take some time. When the process is over, your cluster will reach the ready status. You can check it with the following command:

    $ kubectl get psmdb
    
    Expected output
    NAME              ENDPOINT                                           STATUS   AGE
    my-cluster-name   my-cluster-name-mongos.default.svc.cluster.local   ready    5m26s
    
You can also track the creation process in the Google Cloud console via the Object Browser.

When the creation process is finished, it will look as follows:

[image: the cluster objects listed in the Google Cloud console Object Browser]

Verifying the cluster operation

It may take ten minutes to get the cluster started. When the kubectl get psmdb command finally shows the cluster status as ready, you can try to connect to the cluster.

To connect to Percona Server for MongoDB, you need to construct the MongoDB connection URI string. It includes the credentials of the admin user, which are stored in the Secrets object.

  1. List the Secrets objects:

    $ kubectl get secrets -n <namespace>
    

    The Secrets object you are interested in has the my-cluster-name-secrets name by default.

  2. View the Secret contents to retrieve the admin user credentials.

    $ kubectl get secret my-cluster-name-secrets -o yaml
    
    The command returns the YAML file with generated Secrets, including the MONGODB_DATABASE_ADMIN_USER and MONGODB_DATABASE_ADMIN_PASSWORD strings, which should look as follows:

    Sample output
    ...
    data:
      ...
      MONGODB_DATABASE_ADMIN_PASSWORD: aDAzQ0pCY3NSWEZ2ZUIzS1I=
      MONGODB_DATABASE_ADMIN_USER: ZGF0YWJhc2VBZG1pbg==
    

    The actual login name and password in the output are base64-encoded. To bring them back to a human-readable form, decode the retrieved values (shown here with the strings from the sample output above):

    $ echo 'ZGF0YWJhc2VBZG1pbg==' | base64 --decode
    $ echo 'aDAzQ0pCY3NSWEZ2ZUIzS1I=' | base64 --decode
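
    As a shortcut, you can extract and decode a value in one step with kubectl's jsonpath output:

    $ kubectl get secret my-cluster-name-secrets -o jsonpath='{.data.MONGODB_DATABASE_ADMIN_USER}' | base64 --decode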
    
  3. Run a container with a MongoDB client and connect its console output to your terminal. The following command does this, naming the new Pod percona-client:

    $ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:7.0.14-8 --restart=Never -- bash -il
    

    Executing it may take some time while the corresponding Pod is deployed.

  4. Now run the mongosh tool inside the percona-client command shell, using the admin user credentials you obtained from the Secret and a proper namespace name instead of the <namespace name> placeholder. The command will look different depending on whether sharding is on (the default behavior) or off.

    With sharding on, connect through the mongos service:

    $ mongosh "mongodb://databaseAdmin:databaseAdminPassword@my-cluster-name-mongos.<namespace name>.svc.cluster.local/admin?ssl=false"

    With sharding off, connect to the replica set directly:

    $ mongosh "mongodb+srv://databaseAdmin:databaseAdminPassword@my-cluster-name-rs0.<namespace name>.svc.cluster.local/admin?replicaSet=rs0&ssl=false"

    Note

    If you are using MongoDB versions earlier than 6.x (such as 5.0.29-25 instead of the default 7.0.14-8 variant), substitute the mongosh command with mongo in the above examples.

Troubleshooting

If the kubectl get psmdb command doesn’t show the ready status for too long, you can check the creation process with the kubectl get pods command:

$ kubectl get pods
Expected output
NAME                                               READY   STATUS    RESTARTS   AGE
my-cluster-name-cfg-0                              2/2     Running   0          11m
my-cluster-name-cfg-1                              2/2     Running   1          10m
my-cluster-name-cfg-2                              2/2     Running   1          9m
my-cluster-name-mongos-0                           1/1     Running   0          11m
my-cluster-name-mongos-1                           1/1     Running   0          11m
my-cluster-name-mongos-2                           1/1     Running   0          11m
my-cluster-name-rs0-0                              2/2     Running   0          11m
my-cluster-name-rs0-1                              2/2     Running   0          10m
my-cluster-name-rs0-2                              2/2     Running   0          9m
percona-server-mongodb-operator-665cd69f9b-xg5dl   1/1     Running   0          37m

If the command output shows errors, you can examine the problematic Pod with the kubectl describe pod <pod name> command as follows:

$ kubectl describe pod my-cluster-name-rs0-2

Review the detailed information for Warning statements and then correct the configuration. An example of a warning is as follows:

Warning FailedScheduling 68s (x4 over 2m22s) default-scheduler 0/1 nodes are available: 1 node(s) didn’t match pod affinity/anti-affinity, 1 node(s) didn’t satisfy existing pods anti-affinity rules.
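
Pod logs can provide further detail. For example, for a replica set Pod you can read the log of its mongod container (the container name the Operator uses for the database process):

$ kubectl logs my-cluster-name-rs0-2 -c mongod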

Alternatively, you can examine your Pods via the object browser.

The errors will look as follows:

[image: Pod errors shown in the Google Cloud console object browser]

Clicking the problematic Pod will bring you to the details page with the same warning:

[image: the Pod details page showing the same scheduling warning]

Removing the GKE cluster

There are several ways that you can delete the cluster.

You can clean up the cluster with the gcloud command as follows:

$ gcloud container clusters delete <cluster name> --zone us-central1-a --project <project ID>

The command requests confirmation of the deletion. Type y to confirm.
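
If you are scripting the cleanup and want to skip the interactive confirmation, the global --quiet flag of gcloud disables the prompt:

$ gcloud container clusters delete <cluster name> --zone us-central1-a --project <project ID> --quiet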

Also, you can delete your cluster via the Google Cloud console.

Just click the Delete popup menu item in the clusters list:

[image: the Delete popup menu item in the clusters list]

The cluster deletion may take time.

Warning

After deleting the cluster, all data stored in it will be lost!

Get expert help

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. Join K8S Squad to benefit from early access to features and “ask me anything” sessions with the Experts.


Last update: 2024-11-15