
Install Percona Distribution for PostgreSQL on Google Kubernetes Engine (GKE)

The following steps help you install the Operator and use it to manage Percona Distribution for PostgreSQL on Google Kubernetes Engine (GKE). The document assumes some experience with GKE. For more information, see the Kubernetes Engine Quickstart.

Prerequisites

All commands from this installation guide can be run either in the Google Cloud shell or in your local shell.

To use Google Cloud shell, you need nothing but a modern web browser.

If you would like to use your local shell, install the following:

  1. gcloud. This tool is part of the Google Cloud SDK. To install it, select your operating system on the official Google Cloud SDK documentation page and then follow the instructions.

  2. kubectl. This is the Kubernetes command-line tool you will use to manage and deploy applications. To install the tool, run the following commands:

    $ gcloud auth login
    $ gcloud components install kubectl
    
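If you want to make sure both tools are available before you continue, you can check their versions; the exact output depends on the SDK version you installed:

$ gcloud version
$ kubectl version --client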

Create and configure the GKE cluster

You can configure the settings using the gcloud tool. You can run it either in the Cloud Shell or in your local shell (if you installed the Google Cloud SDK locally in the previous step). The following command creates a cluster named cluster-1:

$ gcloud container clusters create cluster-1 --project <project ID> --zone us-central1-a --cluster-version 1.30 --machine-type n1-standard-4 --num-nodes=3

Note

You must edit the above command and other command-line statements to replace the <project ID> placeholder with your project ID (you can see the available projects with the gcloud projects list command). You may also need to edit the zone location, which is set to us-central1-a in the above example. The other parameters specify that we are creating a cluster with 3 nodes and a machine type of 4 vCPUs.
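
For example, you can look up your project ID and pick a suitable zone with the following commands:

$ gcloud projects list
$ gcloud compute zones list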

You may need to wait a few minutes for the cluster to be created.

When the process is over, you can see the cluster listed in the Google Cloud console.

Select Kubernetes Engine → Clusters in the left menu panel:

[image: the new cluster in the Kubernetes Engine → Clusters list]

Now you need to configure command-line access to your newly created cluster so that kubectl can use it.

In the Google Cloud Console, select your cluster and then click the Connect button shown in the above image. You will see the connect statement, which configures the command-line access. After you have edited the statement, run the command in your local shell:

$ gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project <project ID>
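
To verify that kubectl is now pointed at the new cluster, you can optionally run:

$ kubectl cluster-info
$ kubectl get nodes
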
Finally, use Cloud Identity and Access Management (Cloud IAM) to control access to the cluster. The following command gives you the ability to create Roles and RoleBindings:

$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account)
Expected output
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
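
As an optional sanity check, you can confirm that the binding was created and that your account can now manage cluster-wide RBAC objects:

$ kubectl get clusterrolebinding cluster-admin-binding
$ kubectl auth can-i create clusterroles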

Install the Operator and deploy your PostgreSQL cluster

  1. First of all, use the following git clone command to download the correct branch of the percona-postgresql-operator repository:

    $ git clone -b v2.5.0 https://github.com/percona/percona-postgresql-operator
    $ cd percona-postgresql-operator
    
  2. Create the Kubernetes namespace for your cluster if needed (for example, let’s name it postgres-operator):

    $ kubectl create namespace postgres-operator
    
    Expected output
    namespace/postgres-operator created
    

    Note

    To use a different namespace, specify another name instead of postgres-operator in the above command, and use it in the -n postgres-operator parameter in the following steps. You can also omit this parameter completely to deploy everything in the default namespace.

  3. Deploy the Operator using the following command:

    $ kubectl apply --server-side -f deploy/bundle.yaml -n postgres-operator
    
    Expected output
    customresourcedefinition.apiextensions.k8s.io/crunchybridgeclusters.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgbackups.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgclusters.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgrestores.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgupgrades.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgadmins.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgupgrades.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/postgresclusters.postgres-operator.crunchydata.com serverside-applied
    serviceaccount/percona-postgresql-operator serverside-applied
    role.rbac.authorization.k8s.io/percona-postgresql-operator serverside-applied
    rolebinding.rbac.authorization.k8s.io/service-account-percona-postgresql-operator serverside-applied
    deployment.apps/percona-postgresql-operator serverside-applied
    

    As a result, you will have the Operator Pod up and running (verification commands are shown after these steps).

  4. Deploy Percona Distribution for PostgreSQL:

    $ kubectl apply -f deploy/cr.yaml -n postgres-operator
    
    Expected output
    perconapgcluster.pgv2.percona.com/cluster1 created
    

    The creation process may take some time. When the process is over, your cluster will obtain the ready status. You can check it with the following command:

    $ kubectl get pg -n postgres-operator
    
    Expected output
    NAME       ENDPOINT                                   STATUS   POSTGRES   PGBOUNCER   AGE
    cluster1   cluster1-pgbouncer.postgres-operator.svc   ready    3          3           30m
    
    You can also track the creation process in the Google Cloud console via the Object Browser.

    When the creation process is finished, it will look as follows:

    [image: cluster objects shown in the Object Browser]
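
As an additional check, you can make sure that the Operator Deployment and the database Pods are up and running. This assumes the postgres-operator namespace used in this document:

$ kubectl get deployment percona-postgresql-operator -n postgres-operator
$ kubectl get pods -n postgres-operator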

Verifying the cluster operation

When the creation process is over, the kubectl get pg -n <namespace> command will show the cluster status as ready, and you can try to connect to the cluster.

During the installation, the Operator has generated several Secrets, including the one with the password for the default PostgreSQL user. This default user has the same login name as the cluster name.

  1. Use the kubectl get secrets command to see the list of Secrets objects. The Secrets object you are interested in is named <cluster_name>-pguser-<cluster_name> (substitute <cluster_name> with the name of your Percona Distribution for PostgreSQL cluster). The default variant will be cluster1-pguser-cluster1.

  2. Use the following command to get the password of this user. Replace the <cluster_name> and <namespace> placeholders with your values:

    $ kubectl get secret <cluster_name>-pguser-<cluster_name> -n <namespace> --template='{{.data.password | base64decode}}{{"\n"}}'
    
  3. Create a pod and start Percona Distribution for PostgreSQL inside it. The following command will do this, naming the new Pod pg-client:

    $ kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16.4 --restart=Never -- bash -il
    
    Executing it may take some time while the corresponding Pod is deployed.

  4. Run the psql tool inside the client Pod and connect its console output to your terminal. The following command will connect you as the cluster1 user to the cluster1 database via the PostgreSQL interactive terminal (a combined single-command alternative is shown after these steps).

    [postgres@pg-client /]$ PGPASSWORD='pguser_password' psql -h cluster1-pgbouncer.postgres-operator.svc -p 5432 -U cluster1 cluster1
    
    Sample output
    psql (16.4)
    SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
    Type "help" for help.
    cluster1=>
    
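If you prefer a single command instead of a separate client shell, the following sketch combines the steps above: it lists the Secrets, reads the password from the default user's Secret, and starts a temporary psql Pod. It assumes the default cluster1 cluster name and the postgres-operator namespace used in this document:

$ kubectl get secrets -n postgres-operator
$ PGPASSWORD=$(kubectl get secret cluster1-pguser-cluster1 -n postgres-operator \
    --template='{{.data.password | base64decode}}')
$ kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16.4 \
    --restart=Never -n postgres-operator --env="PGPASSWORD=$PGPASSWORD" -- \
    psql -h cluster1-pgbouncer.postgres-operator.svc -p 5432 -U cluster1 cluster1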

Removing the cluster

If you need to delete the Operator and PostgreSQL cluster (for example, to clean up the testing deployment before adopting it for production use), check this HowTo.

Also, there are several ways that you can delete your Kubernetes cluster in GKE.

You can clean up the cluster with the gcloud command as follows:

$ gcloud container clusters delete <cluster name> --zone us-central1-a --project <project ID>

The command requests your confirmation of the deletion. Type y to confirm.
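
To double-check which clusters exist before or after the deletion, you can list them:

$ gcloud container clusters list --project <project ID>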

Also, you can delete your cluster via the Google Cloud console.

Just click the Delete popup menu item in the clusters list:

[image: the Delete menu item in the clusters list]

The cluster deletion may take time.

Warning

After deleting the cluster, all data stored in it will be lost!

Get expert help

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. Join K8S Squad to benefit from early access to features and “ask me anything” sessions with the Experts.


Last update: 2024-11-25