    Install Percona XtraDB Cluster on Amazon Elastic Kubernetes Service (EKS)

    This quickstart shows you how to deploy the Operator and Percona XtraDB Cluster on Amazon Elastic Kubernetes Service (EKS). The document assumes some experience with Amazon EKS. For more information on EKS, see the official Amazon EKS documentation.

    Prerequisites

    The following tools are used in this guide and therefore should be preinstalled:

    1. AWS Command Line Interface (AWS CLI) for interacting with the different parts of AWS. You can install it following the official installation instructions for your system.

    2. eksctl to simplify cluster creation on EKS. It can be installed following its installation notes on GitHub.

    3. kubectl to manage and deploy applications on Kubernetes. Install it following the official installation instructions.

    Also, you need to configure the AWS CLI with your credentials according to the official guide.
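    For example, you can quickly verify that the three tools are installed and then set up your credentials; the aws configure command will interactively prompt you for your access key, secret key, default region, and output format:

    $ aws --version
    $ eksctl version
    $ kubectl version --client
    $ aws configure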

    Create the EKS cluster

    To create your cluster, you will need the following data:

    • the name of your EKS cluster,

    • the AWS region in which you wish to deploy your cluster,

    • the number of nodes you would like to have,

    • the number of on-demand and spot instances to use.

    Note

    Spot instances are not recommended for production environments, but may be useful, e.g., for testing purposes.

    The easiest and clearest way is to describe the desired cluster in YAML and to pass this configuration to the eksctl command.

    The following example configures an EKS cluster with one node group:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
        name: test-cluster
        region: eu-west-2
    
    nodeGroups:
        - name: ng-1
          minSize: 3
          maxSize: 5
          instancesDistribution:
            maxPrice: 0.15
            instanceTypes: ["m5.xlarge", "m5.2xlarge"] # At least two instance types should be specified
            onDemandBaseCapacity: 0
            onDemandPercentageAboveBaseCapacity: 50
            spotInstancePools: 2
          tags:
            'iit-billing-tag': 'cloud'
          preBootstrapCommands:
              - "echo 'OPTIONS=\"--default-ulimit nofile=1048576:1048576\"' >> /etc/sysconfig/docker"
              - "systemctl restart docker"
    

    Note

    The preBootstrapCommands section in the above example increases the limit on the number of open files. This is important and shouldn’t be omitted, taking into account the default EKS soft limit of 65536 files.

    When the cluster configuration file is ready, you can create your cluster with the following command:

    $ eksctl create cluster -f ~/cluster.yaml
    
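    When eksctl finishes, you can verify that the cluster is up and that kubectl can reach its nodes (the region below is the one from the example configuration):

    $ eksctl get cluster --region eu-west-2
    $ kubectl get nodes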

    Install the Operator

    1. Create a namespace and set the context for the namespace. The resource names must be unique within the namespace and provide a way to divide cluster resources between users spread across multiple projects.

      So, create the namespace and save it in the namespace context for subsequent commands as follows (replace the <namespace name> placeholder with some descriptive name):

      $ kubectl create namespace <namespace name>
      $ kubectl config set-context $(kubectl config current-context) --namespace=<namespace name>
      

      On success, you will see messages confirming that the namespace was created and the context was modified.
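
      If you want to double-check which namespace the current context now points to, this optional check prints it (the JSONPath expression is the one used in the standard kubectl documentation):

      $ kubectl config view --minify --output 'jsonpath={..namespace}'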

    2. Use the following git clone command to download the correct branch of the percona-xtradb-cluster-operator repository:

      $ git clone -b v1.12.0 https://github.com/percona/percona-xtradb-cluster-operator
      

      After the repository is downloaded, change to its directory to run the rest of the commands in this document:

      $ cd percona-xtradb-cluster-operator
      
    3. Deploy the Operator with the following command:

      $ kubectl apply -f deploy/bundle.yaml
      

      The following confirmation is returned:

      customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
      customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com created
      customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
      customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
      role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
      serviceaccount/percona-xtradb-cluster-operator created
      rolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
      deployment.apps/percona-xtradb-cluster-operator created
      
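      Before moving on, you can make sure the Operator Pod created by this Deployment is up and running; its name starts with percona-xtradb-cluster-operator followed by a generated suffix:

      $ kubectl get pods
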
    4. The Operator has been started, and you can now create the Percona XtraDB Cluster:

      $ kubectl apply -f deploy/cr.yaml
      

      The process could take some time. The following output confirms the creation:

      perconaxtradbcluster.pxc.percona.com/cluster1 created
      
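      Creation of the database Pods may take several minutes. You can watch their progress until all of them reach the Running state (the Pod names start with the cluster name, cluster1 by default):

      $ kubectl get pods --watch
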
    5. During the previous steps, the Operator has generated several Secrets objects, including the password for the root user, which you will need to access the cluster.

      Use the kubectl get secrets command to see the list of Secrets objects (by default, the Secrets object you are interested in is named cluster1-secrets). Then kubectl get secret cluster1-secrets -o yaml will return the YAML file with the generated Secrets, including the root password, which should look as follows:

      ...
      data:
        ...
        root: cm9vdF9wYXNzd29yZA==
      

      Here the actual password is base64-encoded, and echo 'cm9vdF9wYXNzd29yZA==' | base64 --decode will bring it back to a human-readable form (in this example it will be the root_password string).
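
      If you prefer a one-liner, the password can be extracted and decoded in a single step with a JSONPath expression (assuming the default cluster1-secrets Secret name):

      $ kubectl get secret cluster1-secrets -o jsonpath='{.data.root}' | base64 --decode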

    6. Now you can check whether you are able to connect to MySQL from the outside with the help of the kubectl port-forward command as follows (the Service name consists of the cluster name, cluster1 by default, and the proxy in use, ProxySQL in this example):

      $ kubectl port-forward svc/cluster1-proxysql 3306:3306 &
      $ mysql -h 127.0.0.1 -P 3306 -uroot -proot_password
      
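      Once connected, a quick way to confirm that all members have joined the cluster is to check the Galera cluster size; wsrep_cluster_size is a standard Percona XtraDB Cluster status variable, and with the default three-node configuration it should report 3:

      mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';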

    Contact Us

    For free technical help, visit the Percona Community Forum.

    To report bugs or submit feature requests, open a JIRA ticket.

    For paid support and managed or consulting services, contact Percona Sales.

