
Install Percona Distribution for PostgreSQL on Google Kubernetes Engine (GKE)

The following steps will allow you to install the Operator and use it to manage Percona Distribution for PostgreSQL on Google Kubernetes Engine (GKE). The document assumes some experience with GKE. For more information, see the Kubernetes Engine Quickstart.

Prerequisites

All commands from this quickstart can be run either in the Google Cloud shell or in your local shell.

To use Google Cloud shell, you need nothing but a modern web browser.

If you would like to use your local shell, install the following:

  1. gcloud. This tool is part of the Google Cloud SDK. To install it, select your operating system on the official Google Cloud SDK documentation page and then follow the instructions.

  2. kubectl. It is the Kubernetes command-line tool you will use to manage and deploy applications. To install the tool, run the following commands:

    $ gcloud auth login
    $ gcloud components install kubectl
    
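
If you would like to double-check that both tools are available before proceeding, you can print their versions (standard gcloud and kubectl subcommands, shown here only as a sanity check):

$ gcloud version
$ kubectl version --client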

Configuring default settings for the cluster

You can configure the settings using the gcloud tool. You can run it either in the Cloud Shell or in your local shell (if you have installed the Google Cloud SDK locally in the previous step). The following command will create a cluster named cluster-1:

$ gcloud container clusters create cluster-1 --project <project name> --zone us-central1-a --cluster-version {{ gkerecommended }} --machine-type n1-standard-4 --num-nodes=3

Note

You must edit the above command and other command-line statements to replace the <project name> placeholder with your project name. You may also be required to edit the zone location, which is set to us-central1-a in the above example. Other parameters specify that we are creating a cluster with 3 nodes and a machine type of 4 vCPUs and 15 GB of memory.

It may take a few minutes for the cluster to be created. When it is ready, you will see it listed in the Google Cloud console (select Kubernetes Engine → Clusters in the left menu panel):

[Image: the newly created cluster listed in the Google Cloud console, with its Connect button]
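
If you prefer the command line, you can also verify that the cluster is up with the standard gcloud listing command:

$ gcloud container clusters list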

Now you should configure command-line access to your newly created cluster so that kubectl can use it.

In the Google Cloud console, select your cluster and then click the Connect button shown on the above image. You will see the connect statement that configures command-line access. After you have edited the statement (substituting your project name for the placeholder), run the command in your local shell:

$ gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project <project name>
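
A quick way to confirm that kubectl is now talking to the new cluster is to list its nodes (you should see the three nodes created earlier):

$ kubectl get nodes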

Installing the Operator

  1. First of all, use your Cloud Identity and Access Management (Cloud IAM) to control access to the cluster. The following command will give you the ability to create Roles and RoleBindings:

    $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)
    
    Expected output
    clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
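
    If you want to verify that the binding took effect, you can ask Kubernetes whether your account is now allowed to create Roles (this uses the standard kubectl auth can-i subcommand; the expected answer is yes):

    $ kubectl auth can-i create roles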
    
  2. Use the following git clone command to download the correct branch of the percona-postgresql-operator repository:

    $ git clone -b v1.5.1 https://github.com/percona/percona-postgresql-operator
    $ cd percona-postgresql-operator
    
  3. The next thing to do is to add the pgo namespace to Kubernetes, not forgetting to set the corresponding context for further steps:

    $ kubectl create namespace pgo
    $ kubectl config set-context $(kubectl config current-context) --namespace=pgo
    

    Note

    To use a different namespace, you should edit all occurrences of the namespace: pgo line in both the deploy/cr.yaml and deploy/operator.yaml configuration files.
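
    For example, switching everything to a hypothetical my-namespace namespace could look as follows (a sketch using GNU sed; review the files afterwards):

    $ sed -i 's/namespace: pgo/namespace: my-namespace/g' deploy/cr.yaml deploy/operator.yaml
    $ kubectl create namespace my-namespace
    $ kubectl config set-context $(kubectl config current-context) --namespace=my-namespace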

  4. Deploy the operator with the following command:

    $ kubectl apply -f deploy/operator.yaml
    
    Expected output
    serviceaccount/pgo-deployer-sa created
    clusterrole.rbac.authorization.k8s.io/pgo-deployer-cr created
    configmap/pgo-deployer-cm created
    clusterrolebinding.rbac.authorization.k8s.io/pgo-deployer-crb created
    job.batch/pgo-deploy created
    

    The last line of the command output mentions the pgo-deploy Kubernetes Job, which carries out the Operator deployment process. It can take several minutes to complete. You can track it with the following command:

    $ kubectl get job/pgo-deploy
    
    Expected output
    NAME         COMPLETIONS   DURATION   AGE
    pgo-deploy   1/1           81s        5m53s
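
    Alternatively, instead of re-running the command, you can block until the Job finishes with kubectl wait (the timeout value is arbitrary):

    $ kubectl wait --for=condition=complete job/pgo-deploy --timeout=600s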
    

    When it reaches the COMPLETIONS count of 1/1, you can safely delete the job as follows:

    $ kubectl delete job/pgo-deploy
    

    Note

    You will also need to delete the pgo-deploy job before upgrading the Operator.

  5. After the Operator is started, Percona Distribution for PostgreSQL can be created at any time with the following command:

    $ kubectl apply -f deploy/cr.yaml
    

    The creation process will take some time. It is over when the Operator and PostgreSQL Pods have reached their Running status:

    $ kubectl get pods
    
    Expected output
    NAME                                              READY   STATUS    RESTARTS   AGE
    backrest-backup-cluster1-4nq2x                    0/1     Completed 0          10m
    cluster1-6c9d4f9678-qdfx2                         1/1     Running   0          10m
    cluster1-backrest-shared-repo-7cb4dd8f8f-sh5gg    1/1     Running   0          10m
    cluster1-pgbouncer-6cd69d8966-vlxdt               1/1     Running   0          10m
    pgo-deploy-bp2ts                                  0/1     Completed 0          5m
    postgres-operator-67f58bcb8c-9p4tl                4/4     Running   1          5m
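
    If you prefer not to re-run the command manually, you can watch the Pods change state until they reach Running (press Ctrl+C to stop watching):

    $ kubectl get pods --watch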
    

    You can also see the same information when browsing the Pods of your cluster in the Google Cloud console via the Object Browser:

    [Image: the cluster Pods shown in the Google Cloud console Object Browser]

  6. During the previous steps, the Operator has generated several Secrets, including the password for the pguser user, which you will need to access the cluster.

    Use the kubectl get secrets command to see the list of Secrets objects (by default, the Secrets object you are interested in is named cluster1-users). Then you can use kubectl get secret cluster1-users -o yaml to look through the YAML file with the generated secrets (the actual passwords are base64-encoded), or just get the needed password with the following command:

    $ kubectl get secrets cluster1-users -o jsonpath='{.data.pguser}' | base64 --decode | tr '\n' ' ' && echo " "
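
    If you want to keep the password handy for the next step, you can store it in a shell variable on your workstation (PGUSER_PASSWORD is an arbitrary name used here only for illustration):

    $ PGUSER_PASSWORD=$(kubectl get secrets cluster1-users -o jsonpath='{.data.pguser}' | base64 --decode)
    $ echo $PGUSER_PASSWORD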
    
  7. Check connectivity to the newly created cluster. Run a new Pod to use it as a client and connect its console output to your terminal (deploying it may take some time). When you see the command-line prompt of the newly created Pod, run the psql tool using the password obtained from the Secret. The following commands will do this, naming the new Pod pg-client:

    $ kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:14.9 --restart=Never -- bash -il
    [postgres@pg-client /]$ PGPASSWORD='pguser_password' psql -h cluster1-pgbouncer -p 5432 -U pguser pgdb
    

    This command will connect you to the PostgreSQL interactive terminal.

    Expected output
    psql (14.9)
    Type "help" for help.
    pgdb=>
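
    Once you are at the pgdb=> prompt, you can run a few simple statements to confirm that the cluster accepts queries (this assumes the default setup where pguser can create objects in pgdb; the table name is arbitrary):

    pgdb=> SELECT version();
    pgdb=> CREATE TABLE connectivity_check (id int);
    pgdb=> INSERT INTO connectivity_check VALUES (1);
    pgdb=> SELECT * FROM connectivity_check;
    pgdb=> DROP TABLE connectivity_check;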
    

Last update: 2024-05-02