
Manual management of database clusters deployed with Percona Operator for PostgreSQL

The purpose of the Operator is to automate database management tasks for you. However, you may sometimes need to manage the database cluster manually, for example, to troubleshoot issues or to perform maintenance.

The following sections explain how you can manage your cluster manually.

Disable health check probes for maintenance

Probes are tasks Kubernetes runs to gather information about the health and status of containers running within Pods. They serve as a mechanism to ensure the system is running smoothly by periodically checking the state of applications and services.

Kubernetes has various types of probes:

  • Startup probe verifies whether the application within a container is started
  • Liveness probe determines when to restart a Pod
  • Readiness probe checks that the container is ready to start accepting traffic
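
You can see which probes the Operator has configured for the database container by inspecting the Pod spec. A minimal sketch, assuming the example Pod name used below:

$ kubectl get pod cluster1-instance1-24b8-0 -o jsonpath='{.spec.containers[?(@.name=="database")].livenessProbe}'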

Sometimes it's necessary to take manual control of the PostgreSQL process for maintenance. This means you need to disable the Kubernetes liveness probe so that it doesn't restart the database container during the maintenance period.

Here’s what you need to do:

  1. Create a sleep-forever file in the data directory with the following command:

    $ kubectl exec cluster1-instance1-24b8-0 -- touch /pgdata/sleep-forever
    
  2. Delete the Pod:

    $ kubectl delete pod cluster1-instance1-24b8-0
    
  3. After the Pod restarts, it won't start PostgreSQL. You can verify this with the following command:

    $ kubectl logs cluster1-instance1-24b8-0 database
    
    Expected output
    The pgdata/sleep-forever file is detected, node entered an infinite sleep
    If you want to exit from the infinite sleep, remove the pgdata/sleep-forever file
    
  4. Now you can start PostgreSQL manually:

    $ kubectl exec cluster1-instance1-24b8-0 -- pg_ctl -D /pgdata/pg17 start
    
    Expected output
    2025-04-01 16:27:41.850 UTC [1434] LOG:  pgaudit extension initialized
    2025-04-01 16:27:42.075 UTC [1434] LOG:  redirecting log output to logging collector process
    2025-04-01 16:27:42.075 UTC [1434] HINT:  Future log output will appear in directory "log".
     done
    server started
    
  5. When you are done with the maintenance, remove the sleep-forever file to re-enable the liveness probe.

    $ kubectl exec cluster1-instance1-24b8-0 -- rm /pgdata/sleep-forever
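
At any point during the maintenance you can check whether the manually started PostgreSQL server (step 4) is running. A minimal check, assuming the same data directory:

$ kubectl exec cluster1-instance1-24b8-0 -- pg_ctl -D /pgdata/pg17 status

pg_ctl status prints the postmaster process ID and its command line when the server is up, and reports that no server is running otherwise.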
    

Stop reconciliation by putting a cluster into an unmanaged mode

The Operator reconciles the database cluster to ensure its current state doesn’t differ from the state defined in the configuration. It can automatically install, update, or repair the cluster when needed.

While doing this, the Operator might interfere with your manual operations during maintenance. To prevent this, you can put a cluster into an unmanaged mode, which stops the Operator from reconciling the cluster at all.

Edit the deploy/cr.yaml Custom Resource manifest and set the spec.unmanaged option to true:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
spec:
  unmanaged: true
  ...

Apply the changes:

$ kubectl apply -f deploy/cr.yaml -n <namespace>
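
Alternatively, you can toggle the option directly on the Custom Resource with kubectl patch. A minimal sketch, assuming the resource is addressed by the perconapgcluster type name:

$ kubectl patch perconapgcluster cluster1 -n <namespace> --type merge -p '{"spec":{"unmanaged":true}}'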

Warning

Putting a cluster in an unmanaged mode doesn't disable any of the health check probes already configured for containers. The Operator is only responsible for configuring the probes, not for running them. Refer to the Disable health check probes for maintenance section for the steps.

Override Patroni configuration

For a whole cluster

The Operator creates a ConfigMap called <cluster-name>-config to store the Patroni cluster configuration. If you simply edit the ConfigMap contents, the Operator immediately overwrites your changes. To override anything in this ConfigMap and keep the changes, annotate it with the special annotation pgv2.percona.com/override-config.

Here is the example command for the cluster named cluster1:

$ kubectl annotate cm cluster1-config pgv2.percona.com/override-config=true
Expected output
configmap/cluster1-config annotated

As long as the ConfigMap has this pgv2.percona.com/override-config annotation, the Operator doesn’t rewrite your changes. You can edit the ConfigMap’s contents however you want.
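
For example, you can open the ConfigMap for interactive editing:

$ kubectl edit cm cluster1-config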

Warning

The Operator does not validate your configuration changes.

Before applying any changes, consult the Patroni documentation to ensure your configuration is correct. This will help you avoid issues caused by invalid settings.

It takes some time for your ConfigMap changes to propagate to running containers. You can verify whether the changes have propagated by checking the mounted file in the containers. For example:

$ kubectl exec -it cluster1-instance1-24b8-0 -- cat /etc/patroni/~postgres-operator_cluster.yaml

The Operator doesn't apply a new Patroni configuration automatically. After your changes have propagated to the container, run patronictl reload <cluster_name> <pod_name> to apply them.
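
For example, to apply the change across the whole cluster at once (a sketch, assuming the Patroni cluster name cluster1-ha, which patronictl list reports, and the example Pod name used on this page; omitting member names makes patronictl target all members):

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl reload cluster1-ha

patronictl asks for confirmation before reloading; add --force to skip the prompt.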

Warning

Don’t forget to remove this annotation once you’ve finished. It’s not recommended to use this feature to permanently override Patroni configuration. As long as this annotation exists, the Operator won’t touch the ConfigMap and you might have problems with your cluster.

To remove the annotation, use the following command:

$ kubectl annotate cm cluster1-config pgv2.percona.com/override-config-

For an individual Pod

The Operator creates a ConfigMap called <pod-name>-config for each Pod to store its Patroni instance configuration. If you simply edit the ConfigMap contents, the Operator immediately overwrites your changes. To override anything in these ConfigMaps and keep the changes, annotate them with the same special annotation:

$ kubectl annotate cm cluster1-instance1-24b8-config pgv2.percona.com/override-config=true
Expected output
configmap/cluster1-instance1-24b8-config annotated

As long as the ConfigMap has the pgv2.percona.com/override-config annotation, the Operator doesn’t rewrite your changes. You can edit the ConfigMap’s contents however you want.
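
You can then edit the Pod's ConfigMap, for example:

$ kubectl edit cm cluster1-instance1-24b8-config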

Warning

The Operator does not validate your configuration changes.

Before applying any changes, consult the Patroni documentation to ensure your configuration is correct. This will help you avoid problems caused by invalid settings.

It takes some time for your ConfigMap changes to propagate to running containers. You can verify whether the changes have propagated by checking the mounted file in the Pod's containers. For example:

$ kubectl exec -it cluster1-instance1-24b8-0 -- cat /etc/patroni/~postgres-operator_cluster.yaml

The Operator doesn't apply a new configuration automatically. After your changes have propagated to the container, run patronictl reload <cluster_name> <pod_name> to apply them.

To find the cluster name, run:

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl list
Expected output
Cluster: cluster1-ha (7523193408153182293) -------------------------+---------+-----------+----+-----------+
| Member                    | Host                                    | Role    | State     | TL | Lag in MB |
+---------------------------+-----------------------------------------+---------+-----------+----+-----------+
| cluster1-instance1-24b8-0 | cluster1-instance1-24b8-0.cluster1-pods | Replica | streaming |  3 |         0 |
| cluster1-instance1-tmqj-0 | cluster1-instance1-tmqj-0.cluster1-pods | Leader  | running   |  3 |           |
| cluster1-instance1-xf85-0 | cluster1-instance1-xf85-0.cluster1-pods | Replica | streaming |  3 |         0 |
+---------------------------+-----------------------------------------+---------+-----------+----+-----------+
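
Using the cluster name from this output, you can reload the configuration of a single member. A sketch for the example Pod:

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl reload cluster1-ha cluster1-instance1-24b8-0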

Warning

Don’t forget to remove this annotation once you’ve finished. It’s not recommended to use this feature to permanently override Patroni configuration. As long as this annotation exists, the Operator won’t touch the ConfigMap and you might have problems with your cluster.

To remove the annotation, use the following command:

$ kubectl annotate cm cluster1-instance1-24b8-config pgv2.percona.com/override-config-

Override PostgreSQL parameters

Use the patronictl show-config command to print PostgreSQL parameters used in the cluster. For example:

$ kubectl exec cluster1-instance1-24b8-0 -- patronictl show-config
Expected output
loop_wait: 10
postgresql:
  parameters:
    archive_command: 'pgbackrest --stanza=db archive-push "%p" && timestamp=$(pg_waldump "%p" | grep -oP "COMMIT \K[^;]+" | sed -E "s/([0-9]{4}-[0-9]{2}-[0-9]{2}) ([0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}) (UTC|[\\+\\-][0-9]{2})/\1T\2\3/" | sed "s/UTC/Z/" | tail -n 1 | grep -E "^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}(Z|[\+\-][0-9]{2})$"); if [ ! -z ${timestamp} ]; then echo ${timestamp} > /pgdata/latest_commit_timestamp.txt; fi'
    archive_mode: 'on'
    archive_timeout: 60s
    huge_pages: 'off'
    jit: 'off'
    password_encryption: scram-sha-256
    restore_command: pgbackrest --stanza=db archive-get %f "%p"
    ssl: 'on'
    ssl_ca_file: /pgconf/tls/ca.crt
    ssl_cert_file: /pgconf/tls/tls.crt
    ssl_key_file: /pgconf/tls/tls.key
    track_commit_timestamp: 'true'
    unix_socket_directories: /tmp/postgres
    wal_level: logical
  pg_hba:
  - local all "postgres" peer
  - hostssl replication "_crunchyrepl" all cert
  - hostssl "postgres" "_crunchyrepl" all cert
  - host all "_crunchyrepl" all reject
  - host all "monitor" "127.0.0.0/8" scram-sha-256
  - host all "monitor" "::1/128" scram-sha-256
  - host all "monitor" all reject
  - hostssl all "_crunchypgbouncer" all scram-sha-256
  - host all "_crunchypgbouncer" all reject
  - hostssl all all all md5
  use_pg_rewind: true
  use_slots: false
ttl: 30

Use the patronictl edit-config command to change any PostgreSQL parameter.

For example, run the following command to change the restore_command parameter:

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl edit-config --pg restore_command=/bin/true
Expected output
---
+++
@@ -9,7 +9,7 @@
     huge_pages: 'off'
     jit: 'off'
     password_encryption: scram-sha-256
-    restore_command: pgbackrest --stanza=db archive-get %f "%p"
+    restore_command: /bin/true
     ssl: 'on'
     ssl_ca_file: /pgconf/tls/ca.crt
     ssl_cert_file: /pgconf/tls/tls.crt    

Apply these changes? [y/N]:

This command changes the shared_preload_libraries parameter:

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl edit-config --pg shared_preload_libraries=""
Expected output
---
+++
@@ -11,7 +11,6 @@
     password_encryption: scram-sha-256
     pg_stat_monitor.pgsm_query_max_len: '2048'
     restore_command: pgbackrest --stanza=db archive-get %f "%p"
-    shared_preload_libraries: pg_stat_monitor
     ssl: 'on'
     ssl_ca_file: /pgconf/tls/ca.crt
     ssl_cert_file: /pgconf/tls/tls.crt    

Apply these changes? [y/N]:

Warning

If you update any object controlled by the Operator, it’ll reconcile the cluster and your configuration changes will be reverted. You can put the cluster in an unmanaged mode to prevent this.

Override pg_hba entries

You may want to append entries to pg_hba. You can use the spec.patroni.dynamicConfiguration.postgresql.pg_hba field in the Custom Resource to add your rules:

  patroni:
    dynamicConfiguration:
      postgresql:
        pg_hba:
        - local all all trust
        - reject all all all
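
This snippet goes under spec in the deploy/cr.yaml Custom Resource manifest. Apply it the usual way:

$ kubectl apply -f deploy/cr.yaml -n <namespace>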

The order of entries matters in pg_hba.conf, so consider overriding the list completely. For this, you can use the patronictl edit-config command:

$ kubectl exec -it cluster1-instance1-24b8-0 -- patronictl edit-config --set postgresql.pg_hba='[
  "local all all trust",
  "reject all all all"
]'
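
You can confirm the resulting pg_hba list afterwards with patronictl show-config:

$ kubectl exec cluster1-instance1-24b8-0 -- patronictl show-config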

Warning

If you update any object controlled by the Operator, it’ll reconcile the cluster and your configuration changes will be reverted. You can put the cluster in an unmanaged mode to prevent this.


Last update: 2025-07-18