Binding Distribution for MySQL components to Specific Kubernetes Nodes

The Operator does a good job of automatically assigning new Pods to nodes with sufficient resources to achieve a balanced distribution across the cluster. Still, there are situations when it is worth ensuring that Pods land on specific nodes: for example, to get the speed advantages of an SSD-equipped machine, or to reduce costs by choosing nodes in the same availability zone.

That’s why the mysql section of the deploy/cr.yaml file contains keys which can be used to configure node affinity.

Affinity and anti-affinity

Affinity makes a Pod eligible (or not eligible - so-called “anti-affinity”) to be scheduled on a node which already has Pods with specific labels. In particular, this approach is good for reducing costs by making sure that several Pods with intensive data exchange occupy the same availability zone or even the same node - or, on the contrary, for making them land on different nodes or even different availability zones for high availability and balancing purposes.

Percona Operator for MySQL provides two approaches for doing this:

  • simple way to set anti-affinity for Pods, built-in into the Operator,

  • more advanced approach based on using standard Kubernetes constraints.

Simple approach - use topologyKey of the Percona Operator for MySQL

Percona Operator for MySQL provides the antiAffinityTopologyKey option, which may have one of the following values:

  • kubernetes.io/hostname - Pods will avoid residing within the same host,

  • topology.kubernetes.io/zone - Pods will avoid residing within the same zone,

  • topology.kubernetes.io/region - Pods will avoid residing within the same region,

  • none - no constraints are applied.

The following example forces Percona Server for MySQL Pods to avoid occupying the same node:

affinity:
  antiAffinityTopologyKey: "kubernetes.io/hostname"
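
In deploy/cr.yaml this option sits under the mysql section of the Custom Resource. A minimal sketch of the surrounding context is shown below; the exact field nesting may vary between Operator versions, so check the deploy/cr.yaml shipped with your release:

```yaml
# Fragment of deploy/cr.yaml (sketch): the affinity key belongs to the mysql section
spec:
  mysql:
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
```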

Advanced approach - use standard Kubernetes constraints

The previous way can be used with no special knowledge of how Kubernetes assigns Pods to specific nodes. Still, in some cases more complex tuning may be needed. For such cases, the advanced option in the deploy/cr.yaml file turns off the effect of topologyKey and allows the use of standard Kubernetes affinity constraints of any complexity:

affinity:
   advanced:
     podAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
       - labelSelector:
           matchExpressions:
           - key: security
             operator: In
             values:
             - S1
         topologyKey: topology.kubernetes.io/zone
     podAntiAffinity:
       preferredDuringSchedulingIgnoredDuringExecution:
       - weight: 100
         podAffinityTerm:
           labelSelector:
             matchExpressions:
             - key: security
               operator: In
               values:
               - S2
           topologyKey: kubernetes.io/hostname
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
         - matchExpressions:
           - key: kubernetes.io/e2e-az-name
             operator: In
             values:
             - e2e-az1
             - e2e-az2
       preferredDuringSchedulingIgnoredDuringExecution:
       - weight: 1
         preference:
           matchExpressions:
           - key: another-node-label-key
             operator: In
             values:
             - another-node-label-value
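
As a smaller example tied to the SSD use case mentioned above, the advanced key can also carry just a nodeAffinity constraint to pin Pods to nodes with fast storage. The disktype label used here is hypothetical - it must match a label actually present on your nodes (for example, one you applied yourself with kubectl label nodes):

```yaml
affinity:
  advanced:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label, e.g. set via 'kubectl label nodes <node> disktype=ssd'
            operator: In
            values:
            - ssd
```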

See the explanation of the advanced affinity options in the Kubernetes documentation.

Get expert help

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. Join K8S Squad to benefit from early access to features and “ask me anything” sessions with the Experts.


Last update: 2024-06-17