
Install Percona Distribution for PostgreSQL on Amazon Elastic Kubernetes Service (EKS)

This guide shows you how to deploy Percona Operator for PostgreSQL on Amazon Elastic Kubernetes Service (EKS). The document assumes some experience with the platform. For more information on EKS, see the Amazon EKS official documentation.

Prerequisites

Software installation

The following tools are used in this guide and therefore should be preinstalled:

  1. AWS Command Line Interface (AWS CLI) for interacting with the different parts of AWS. You can install it following the official installation instructions for your system.

  2. eksctl to simplify cluster creation on EKS. You can install it following its installation notes on GitHub.

  3. kubectl to manage and deploy applications on Kubernetes. Install it following the official installation instructions.

Also, you need to configure the AWS CLI with your credentials according to the official guide.
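
For example, you can run the interactive aws configure command, which prompts you for the access key, secret key, default region, and output format (the region and output format below are just example values):

$ aws configure
AWS Access Key ID [None]: <your access key ID>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-west-2
Default output format [None]: json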

Creating the EKS cluster

  1. To create your cluster, you will need the following data:

    • name of your EKS cluster,
    • AWS region in which you wish to deploy your cluster,
    • the number of nodes you would like to have,
    • the desired ratio between on-demand and spot instances in the total number of nodes.

    Note

    Spot instances are not recommended for production environments, but they may be useful, e.g., for testing purposes.

    Once you have decided on all the needed details, create your EKS cluster following the official cluster creation instructions.
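
    For reference only, a minimal sketch of creating such a cluster with eksctl is shown below; the cluster name, region, instance type, and node counts are illustrative placeholders that you should adjust to your needs:

    # create the cluster with an on-demand managed node group
    $ eksctl create cluster \
        --name my-eks-cluster \
        --region us-west-2 \
        --nodegroup-name ondemand-nodes \
        --node-type m5.large \
        --nodes 3 --nodes-min 3 --nodes-max 5

    # optionally add a managed node group of spot instances
    $ eksctl create nodegroup \
        --cluster my-eks-cluster \
        --region us-west-2 \
        --name spot-nodes \
        --node-type m5.large \
        --nodes 2 \
        --spot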

  2. After you have created the EKS cluster, you also need to install the Amazon EBS CSI driver on your cluster. See the official documentation on adding it as an Amazon EKS add-on.

    Note

    The CSI driver is needed for the Operator to work properly, and it is no longer included by default starting from Amazon EKS version 1.23. Therefore, if your existing EKS cluster is based on version 1.22 or earlier, install the CSI driver before updating the EKS cluster to version 1.23 or above.
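
    For example, the add-on can be created with eksctl as sketched below; the cluster name, AWS account ID, and IAM role are placeholders, and the IAM role must already grant the permissions required by the EBS CSI driver (see the AWS documentation linked above):

    $ eksctl create addon \
        --name aws-ebs-csi-driver \
        --cluster <cluster name> \
        --service-account-role-arn arn:aws:iam::<account id>:role/<EBS CSI driver role> \
        --force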

Install the Operator and Percona Distribution for PostgreSQL

The following steps are needed to deploy the Operator and Percona Distribution for PostgreSQL in your Kubernetes environment:

  1. Create the Kubernetes namespace for your cluster if needed (for example, let’s name it postgres-operator):

    $ kubectl create namespace postgres-operator
    
    Expected output
    namespace/postgres-operator created
    

    Note

    To use a different namespace, specify its name instead of postgres-operator in the above command, and use it in the -n parameter in the following two steps. You can also omit this parameter completely to deploy everything in the default namespace.

  2. Deploy the Operator using the following command:

    $ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.5.0/deploy/bundle.yaml -n postgres-operator
    
    Expected output
    customresourcedefinition.apiextensions.k8s.io/crunchybridgeclusters.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgbackups.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgclusters.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgrestores.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/perconapgupgrades.pgv2.percona.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgadmins.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/pgupgrades.postgres-operator.crunchydata.com serverside-applied
    customresourcedefinition.apiextensions.k8s.io/postgresclusters.postgres-operator.crunchydata.com serverside-applied
    serviceaccount/percona-postgresql-operator serverside-applied
    role.rbac.authorization.k8s.io/percona-postgresql-operator serverside-applied
    rolebinding.rbac.authorization.k8s.io/service-account-percona-postgresql-operator serverside-applied
    deployment.apps/percona-postgresql-operator serverside-applied
    

    As a result, you will have the Operator Pod up and running.
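
    You can check it, for example, by listing the Pods in the namespace and making sure that the percona-postgresql-operator Pod is in the Running state:

    $ kubectl get pods -n postgres-operator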

  3. The Operator has started, and you can now deploy your Percona Distribution for PostgreSQL cluster:

    $ kubectl apply -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.5.0/deploy/cr.yaml -n postgres-operator
    
    Expected output
    perconapgcluster.pgv2.percona.com/cluster1 created
    

    Note

    This deploys the default Percona Distribution for PostgreSQL configuration. Please see deploy/cr.yaml and Custom Resource Options for the configuration options. You can clone the repository with all manifests and source code by executing the following command:

    $ git clone -b v2.5.0 https://github.com/percona/percona-postgresql-operator
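
    For illustration, the part of deploy/cr.yaml that controls the number of PostgreSQL and pgBouncer instances could look like the following excerpt (only the relevant fields are shown, and the instance set name is the default one; see Custom Resource Options for the authoritative list of options):

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGCluster
    metadata:
      name: cluster1
    spec:
      instances:
        - name: instance1
          replicas: 3        # number of PostgreSQL instances in the instance set
      proxy:
        pgBouncer:
          replicas: 3        # number of pgBouncer Pods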
    

    After editing the needed options, apply your modified deploy/cr.yaml file as follows:

    $ kubectl apply -f deploy/cr.yaml -n postgres-operator
    

    The creation process may take some time. When the process is over, your cluster will obtain the ready status. You can check it with the following command:

    $ kubectl get pg -n postgres-operator
    
    Expected output
    NAME       ENDPOINT                                    STATUS   POSTGRES   PGBOUNCER   AGE
    cluster1   cluster1-pgbouncer.postgres-operator.svc    ready    3          3           30m
    

Verifying the cluster operation

When the creation process is over, the kubectl get pg command will show the cluster status as ready, and you can try to connect to the cluster.

During the installation, the Operator has generated several Secrets, including the one with the password for the default PostgreSQL user. This default user has the same login name as the cluster name.

  1. Use the kubectl get secrets command to see the list of Secrets objects. The Secrets object you are interested in is named <cluster_name>-pguser-<cluster_name> (substitute <cluster_name> with the name of your Percona Distribution for PostgreSQL cluster). The default variant will be cluster1-pguser-cluster1.

  2. Use the following command to get the password of this user. Replace the <cluster_name> and <namespace> placeholders with your values:

    $ kubectl get secret <cluster_name>-pguser-<cluster_name> -n <namespace> --template='{{.data.password | base64decode}}{{"\n"}}'
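
    With the default cluster name and the postgres-operator namespace used in this guide, the command looks as follows:

    $ kubectl get secret cluster1-pguser-cluster1 -n postgres-operator --template='{{.data.password | base64decode}}{{"\n"}}'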
    
  3. Create a Pod and start Percona Distribution for PostgreSQL inside it. The following command will do this, naming the new Pod pg-client:

    $ kubectl run -i --rm --tty pg-client --image=perconalab/percona-distribution-postgresql:16.4 --restart=Never -- bash -il
    
    Executing this command may take some time, since it deploys the corresponding Pod first.

  4. Run the psql tool inside the Pod and connect its console output to your terminal. The following command will connect you as the cluster1 user to the cluster1 database via the PostgreSQL interactive terminal (substitute the actual password obtained in step 2 for the pguser_password placeholder):

    [postgres@pg-client /]$ PGPASSWORD='pguser_password' psql -h cluster1-pgbouncer.postgres-operator.svc -p 5432 -U cluster1 cluster1
    
    Sample output
    psql (16.4)
    SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
    Type "help" for help.
    cluster1=>
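
    At this point you can run a simple query to make sure that the connection works; the reply should contain the PostgreSQL server version and build details:

    cluster1=> SELECT version();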
    

Removing the cluster

If you need to delete the Operator and PostgreSQL cluster (for example, to clean up the testing deployment before adopting it for production use), check this HowTo.

To delete your Kubernetes cluster in EKS, you will need the following data:

  • name of your EKS cluster,
  • AWS region in which you have deployed your cluster.

You can clean up the cluster with the eksctl command as follows (with real names instead of <region> and <cluster name> placeholders):

$ eksctl delete cluster --region=<region> --name="<cluster name>"

The cluster deletion may take time.

Warning

After deleting the cluster, all data stored in it will be lost!

Get expert help

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. Join K8S Squad to benefit from early access to features and “ask me anything” sessions with the Experts.


Last update: 2024-11-13