Install Percona XtraDB Cluster on Azure Kubernetes Service (AKS)¶
This guide shows you how to deploy Percona Operator for MySQL based on Percona XtraDB Cluster on Microsoft Azure Kubernetes Service (AKS). The document assumes some experience with the platform. For more information on AKS, see the Microsoft AKS official documentation.
Prerequisites¶
The following tools are used in this guide and therefore should be preinstalled:
- Azure Command Line Interface (Azure CLI) for interacting with the different parts of AKS. You can install it following the official installation instructions for your system.
- kubectl to manage and deploy applications on Kubernetes. Install it following the official installation instructions.
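You can quickly confirm that both tools are available before proceeding:

$ az version
$ kubectl version --client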
Also, you need to sign in with Azure CLI using your credentials according to the official guide.
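For example, the interactive sign-in opens a browser window for authentication:

$ az login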
Create and configure the AKS cluster¶
To create your cluster, you will need the following data:
- the name of your AKS cluster,
- an Azure resource group, in which resources of your cluster will be deployed and managed,
- the number of nodes you would like to have.
You can create your cluster via command line using the az aks create command. The following command will create a 3-node cluster named cluster1 within an already existing resource group named my-resource-group:
$ az aks create --resource-group my-resource-group --name cluster1 --enable-managed-identity --node-count 3 --node-vm-size Standard_B4ms --node-osdisk-size 30 --network-plugin kubenet --generate-ssh-keys --outbound-type loadbalancer
Other parameters in the above example specify that we are creating a cluster with the Standard_B4ms machine type and the OS disk size reduced to 30 GiB. You can see detailed information about cluster creation options in the AKS official documentation.
You may need to wait a few minutes for the cluster to be created.
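If you want to check on the progress, the provisioningState field reported by az aks show changes to Succeeded when the cluster is ready:

$ az aks show --resource-group my-resource-group --name cluster1 --query provisioningState --output tsv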
Now you should configure command-line access to your newly created cluster to make kubectl able to use it:

$ az aks get-credentials --resource-group my-resource-group --name cluster1
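To verify that kubectl now has access to the cluster, list the worker nodes; all three should eventually report the Ready status:

$ kubectl get nodes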
Install the Operator and deploy your Percona XtraDB Cluster¶
1. Deploy the Operator. By default, the deployment is done into the default namespace. If that's not the desired one, you can create a new namespace and/or set the context for the namespace as follows (replace the <namespace name> placeholder with some descriptive name):

$ kubectl create namespace <namespace name>
$ kubectl config set-context $(kubectl config current-context) --namespace=<namespace name>

On success, you will see the message that namespace/<namespace name> was created, and the context (<cluster name>) was modified.

Deploy the Operator using the following command:
$ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.17.0/deploy/bundle.yaml
Expected output
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
deployment.apps/percona-xtradb-cluster-operator created
2. The Operator has been started, and now you can deploy Percona XtraDB Cluster:
$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.17.0/deploy/cr.yaml
Expected output
perconaxtradbcluster.pxc.percona.com/cluster1 created
Note
This deploys the default Percona XtraDB Cluster configuration with three HAProxy and three XtraDB Cluster instances. Please see deploy/cr.yaml and Custom Resource Options for the configuration options. You can clone the repository with all manifests and source code by executing the following command:
$ git clone -b v1.17.0 https://github.com/percona/percona-xtradb-cluster-operator
After editing the needed options, apply your modified deploy/cr.yaml file as follows:

$ kubectl apply -f deploy/cr.yaml
The creation process may take some time. When the process is over, your cluster will obtain the ready status. You can check it with the following command:

$ kubectl get pxc
Expected output
NAME       ENDPOINT                   STATUS   PXC   PROXYSQL   HAPROXY   AGE
cluster1   cluster1-haproxy.default   ready    3                3         5m51s
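Alternatively, you can let kubectl watch the resource and print status changes as they happen (press Ctrl+C to stop):

$ kubectl get pxc --watch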
Verifying the cluster operation¶
It may take ten minutes to get the cluster started. When the kubectl get pxc command finally shows the cluster status as ready, you can try to connect to the cluster.
To connect to Percona XtraDB Cluster you will need the password for the root user. Passwords are stored in the Secrets object.
Here’s how to get it:
1. List the Secrets objects:

$ kubectl get secrets

The Secrets object you are interested in has the cluster1-secrets name by default.
2. Use the following command to get the password of the root user. Substitute the <namespace> placeholder with your value (and use a different Secrets object name instead of cluster1-secrets, if needed):

$ kubectl get secret cluster1-secrets -n <namespace> --template='{{.data.root | base64decode}}{{"\n"}}'
3. Run a container with the mysql tool and connect its console output to your terminal. The following command does this, naming the new Pod percona-client:

$ kubectl run -n <namespace> -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

Executing this command may take some time while the corresponding Pod is deployed.
4. Now run the mysql tool in the percona-client command shell, using the password obtained from the Secret instead of the <root_password> placeholder. The command will look different depending on whether your cluster provides load balancing with HAProxy (the default choice) or ProxySQL.

With HAProxy:

$ mysql -h cluster1-haproxy -uroot -p'<root_password>'

With ProxySQL:

$ mysql -h cluster1-proxysql -uroot -p'<root_password>'

This command will connect you to the MySQL server.
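Once connected, one quick sanity check is the wsrep_cluster_size Galera status variable, which should report 3 for the default three-node setup:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';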
Troubleshooting¶
If the kubectl get pxc command doesn't show the ready status for too long, you can check the creation process with the kubectl get pods command:
$ kubectl get pods
Expected output
NAME READY STATUS RESTARTS AGE
cluster1-haproxy-0 2/2 Running 0 6m17s
cluster1-haproxy-1 2/2 Running 0 4m59s
cluster1-haproxy-2 2/2 Running 0 4m36s
cluster1-pxc-0 3/3 Running 0 6m17s
cluster1-pxc-1 3/3 Running 0 5m3s
cluster1-pxc-2 3/3 Running 0 3m56s
percona-xtradb-cluster-operator-79966668bd-rswbk 1/1 Running 0 9m54s
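Cluster-wide events are another place to look; the following command lists recent events in chronological order, which helps to spot scheduling or storage problems:

$ kubectl get events --sort-by=.metadata.creationTimestamp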
If the command output shows some errors, you can examine the problematic Pod with the kubectl describe pod <pod name> command as follows:
$ kubectl describe pod cluster1-pxc-2
Review the detailed information for Warning statements and then correct the configuration. An example of a warning is as follows:

Warning  FailedScheduling  68s (x4 over 2m22s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
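Container logs can provide further details. For example, the following command prints the log of the main (pxc) container in the problematic Pod, assuming the default names used throughout this guide:

$ kubectl logs cluster1-pxc-2 -c pxc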
Removing the AKS cluster¶
To delete your cluster, you will need the following data:
- the name of your AKS cluster,
- the name of the Azure resource group in which you have deployed your cluster.
You can clean up the cluster with the az aks delete command as follows (with real names instead of the <resource group> and <cluster name> placeholders):
$ az aks delete --name <cluster name> --resource-group <resource group> --yes --no-wait
It may take ten minutes to get the cluster actually deleted after executing this command.
Warning
After deleting the cluster, all data stored in it will be lost!
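If the resource group was created solely for this cluster and contains nothing else you need, you can delete it as well. Note that this removes every resource in the group, so double-check before running it:

$ az group delete --name <resource group> --yes --no-wait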