
    Binding Percona Distribution for PostgreSQL components to Specific Kubernetes/OpenShift Nodes

    The operator does a good job of automatically assigning new Pods to nodes with sufficient resources, achieving a balanced distribution across the cluster. Still, there are situations when it is worth ensuring that Pods land on specific nodes: for example, to benefit from the speed of SSD-equipped machines, or to reduce network costs by choosing nodes in the same availability zone.

    Appropriate sections of the deploy/cr.yaml file (such as proxy.pgBouncer) contain keys which can be used for this purpose, depending on what is best for a particular situation.

    Affinity and anti-affinity

    Affinity makes a Pod eligible (or not eligible, so-called “anti-affinity”) to be scheduled on a node which already has Pods with specific labels, or which has specific labels itself (so-called “Node affinity”). In particular, Pod affinity can reduce costs by making sure that several Pods with intensive data exchange occupy the same availability zone or even the same node, while Pod anti-affinity, on the contrary, makes Pods land on different nodes or even different availability zones for high availability and balancing purposes. Node affinity is useful for assigning PostgreSQL instances to specific Kubernetes Nodes (ones with specific hardware, zone, etc.).

    Pod anti-affinity is controlled by the affinity.podAntiAffinity subsection, which can be put into the proxy.pgBouncer and backups.pgbackrest.repoHost sections of the deploy/cr.yaml configuration file.

    podAntiAffinity allows you to use standard Kubernetes affinity constraints of any complexity:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchLabels:
                postgres-operator.crunchydata.com/cluster: keycloakdb
                postgres-operator.crunchydata.com/role: pgbouncer
            topologyKey: kubernetes.io/hostname
    

    You can see the explanation of these affinity options in the Kubernetes documentation.
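
    Node affinity mentioned above uses the same standard Kubernetes syntax. The following is a minimal sketch, assuming it is placed under the same affinity subsection; the disktype label and its ssd value are hypothetical and stand in for whatever label your nodes actually carry:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: disktype        # hypothetical node label; only nodes labeled disktype=ssd qualify
              operator: In
              values:
              - ssd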

    Topology Spread Constraints

    Topology Spread Constraints allow you to control how Pods are distributed across the cluster based on regions, zones, nodes, and other topology specifics. This can be useful for both high availability and resource efficiency.

    Pod topology spread constraints are controlled by the topologySpreadConstraints subsection, which can be put into the proxy.pgBouncer and backups.pgbackrest.repoHost sections of the deploy/cr.yaml configuration file as follows:

    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: my-node-label
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            postgres-operator.crunchydata.com/instance-set: instance1
    

    You can see the explanation of Pod topology spread constraints in the Kubernetes documentation.
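
    As a variation of the above, the following sketch spreads pgBouncer Pods as evenly as possible across availability zones, preferring even placement rather than blocking scheduling when it cannot be satisfied. The topology.kubernetes.io/zone key is a standard well-known node label, and the pgbouncer role label follows the earlier anti-affinity example:

    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone   # spread across availability zones
        whenUnsatisfiable: ScheduleAnyway          # prefer even spread, but do not block scheduling
        labelSelector:
          matchLabels:
            postgres-operator.crunchydata.com/role: pgbouncer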

    Tolerations

    Tolerations allow Pods having them to land on nodes with matching taints. A toleration is expressed as a key with an operator, which is either Exists or Equal (the latter variant also requires a value the key is equal to). Moreover, a toleration should have a specified effect, which may be the self-explanatory NoSchedule, the less strict PreferNoSchedule, or NoExecute. The last variant means that if a taint with NoExecute is assigned to a node, then any Pod not tolerating this taint will be removed from the node, either immediately or after the tolerationSeconds interval.

    You can use the instances.tolerations and backups.pgbackrest.jobs.tolerations subsections in the deploy/cr.yaml configuration file as follows:

    tolerations:
    - effect: NoSchedule
      key: role
      operator: Equal
      value: connection-poolers
    

    The Kubernetes Taints and Tolerations documentation contains more examples on this topic.
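
    If you need the eviction behavior described above, a toleration for a NoExecute taint may also carry tolerationSeconds. The following is a minimal sketch; the degraded taint key and the 3600-second grace period are hypothetical values chosen for illustration:

    tolerations:
    - effect: NoExecute
      key: degraded                # hypothetical taint key
      operator: Exists
      tolerationSeconds: 3600      # the Pod stays on the tainted node for up to an hour before eviction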

    Contact Us

    For free technical help, visit the Percona Community Forum.

    To get early access to new product features, invite-only “ask me anything” sessions with Percona Kubernetes experts, and monthly swag raffles, join K8S Squad.

    To report bugs or submit feature requests, open a JIRA ticket.

    For paid support and managed or consulting services, contact Percona Sales.

