Major version upgrade

Major version upgrade allows you to jump from one database major version to another (for example, upgrade from PostgreSQL 17.x to PostgreSQL 18.x).

This feature is generally available starting with the Operator version 2.9.0.

Considerations

  1. A major upgrade introduces downtime because the whole cluster is shut down for the duration of the upgrade. This flow is planned to be improved in future releases.
  2. During the upgrade, the Operator duplicates the data on each PVC and doesn’t remove the old version data automatically. Make sure each PVC has enough free space to hold a second copy of the data.
  3. Starting with the Operator 2.6.0, PostgreSQL images are based on Red Hat Universal Base Image (UBI) 9 instead of UBI 8. UBI 9 ships a different version of the glibc collation library, which introduces a collation mismatch in PostgreSQL. Collation defines how text is sorted and compared based on language-specific rules such as case sensitivity and character order. PostgreSQL stores the collation version used at database creation. When the collation version changes, database objects that depend on it, such as text-based indexes, may become corrupted. Therefore, you need to identify and reindex the objects affected by the collation mismatch.

Upgrade steps

To start the upgrade, you need to create a special PerconaPGUpgrade resource. This resource refers to the special Operator upgrade image and contains the information about the existing and target major versions. Find the example PerconaPGUpgrade configuration file in deploy/upgrade.yaml:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGUpgrade
metadata:
  name: cluster1-17-to-18
spec:
  postgresClusterName: cluster1
  image: docker.io/percona/percona-postgresql-operator:2.9.0-upgrade
  fromPostgresVersion: 17
  toPostgresVersion: 18
  toPostgresImage: docker.io/percona/percona-distribution-postgresql:18.3-1
  toPgBouncerImage: docker.io/percona/percona-pgbouncer:1.25.1-1
  toPgBackRestImage: docker.io/percona/percona-pgbackrest:2.58.0-1

As you can see, the manifest includes image names for the database cluster components (PostgreSQL, pgBouncer, and pgBackRest). You can find them in the list of certified images for the current Operator release. For older versions, refer to the old releases documentation archive.

Apply this manifest to start the upgrade:

kubectl apply -f deploy/upgrade.yaml -n <namespace>

During the upgrade flow, the Operator:

  1. Pauses the cluster, making it unavailable for the duration of the upgrade,
  2. Annotates the cluster with a special pgv2.percona.com/allow-upgrade: <PerconaPGUpgrade.Name> annotation,
  3. Creates jobs to migrate the data,
  4. Starts up the cluster after the upgrade finishes.

Post-upgrade steps

  1. Scan for indexes that rely on collations other than C or POSIX and whose collations are provided by libc (c) or are the database default (d). Connect to PostgreSQL as a superuser or the database owner and run the following query:

    SELECT DISTINCT
        indrelid::regclass::text,
        indexrelid::regclass::text,
        collname,
        pg_get_indexdef(indexrelid)
    FROM (
        SELECT
            indexrelid,
            indrelid,
            indcollation[i] coll
        FROM
            pg_index,
            generate_subscripts(indcollation, 1) g(i)
    ) s
    JOIN pg_collation c ON coll = c.oid
    WHERE
        collprovider IN ('d', 'c')
        AND collname NOT IN ('C', 'POSIX');
    
  2. If the query returns affected indexes, find the databases whose indexes use a different collation version:

    SELECT * FROM pg_database;
    
    Sample output

      oid  |  datname  | datdba | encoding | datlocprovider | datistemplate | datallowconn | datconnlimit | datfrozenxid | datminmxid | dattablespace | datcollate  |  datctype   | daticulocale | daticurules | datcollversion |                          datacl
    -------+-----------+--------+----------+----------------+---------------+--------------+--------------+--------------+------------+---------------+-------------+-------------+--------------+-------------+----------------+------------------------------------------------------------
         5 | postgres  |     10 |        6 | c              | f             | t            |           -1 |          722 |          1 |          1663 | en_US.utf-8 | en_US.utf-8 |              |             | 2.28           |
         1 | template1 |     10 |        6 | c              | t             | t            |           -1 |          722 |          1 |          1663 | en_US.utf-8 | en_US.utf-8 |              |             | 2.28           | {=c/postgres,postgres=CTc/postgres}
         4 | template0 |     10 |        6 | c              | t             | f            |           -1 |          722 |          1 |          1663 | en_US.utf-8 | en_US.utf-8 |              |             |                | {=c/postgres,postgres=CTc/postgres}
     16466 | cluster1  |     10 |        6 | c              | f             | t            |           -1 |          722 |          1 |          1663 | en_US.utf-8 | en_US.utf-8 |              |             | 2.28           | {=Tc/postgres,postgres=CTc/postgres,cluster1=CTc/postgres}
    (4 rows)
    
  3. Refresh the collation metadata and rebuild the affected indexes. These operations require the privileges of a superuser or the database owner:

    ALTER DATABASE cluster1 REFRESH COLLATION VERSION;
    

Cleanup

  1. You can remove the old version data at your discretion by running the following commands inside the database containers (example for PostgreSQL 17):

    rm -rf /pgdata/pg17
    rm -rf /pgdata/pg17_wal
    
  2. You can also delete the PerconaPGUpgrade resource (this will clean up the jobs and Pods created during the upgrade):

    kubectl delete perconapgupgrade cluster1-17-to-18
    
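As a sketch, the cleanup from step 1 can also be done non-interactively with kubectl exec. The Pod and container names below are examples; repeat the command for every instance Pod in the cluster.

```shell
# Remove the PostgreSQL 17 data and WAL directories inside the database container
kubectl exec -n <namespace> cluster1-instance1-abcd-0 -c database -- \
  rm -rf /pgdata/pg17 /pgdata/pg17_wal
```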

Troubleshooting upgrade issues

If the upgrade fails for some reason, the cluster stays in paused mode. Resume the cluster manually to check what went wrong with the upgrade (it will start with the old version). You can inspect the PerconaPGUpgrade resource with the kubectl get perconapgupgrade -o yaml command and check the logs of the Pods created during the upgrade to debug the issue.

Failed first restore after the upgrade

After a major upgrade, PostgreSQL starts a new WAL timeline. PostgreSQL treats the upgraded cluster as a new logical generation, so it increments the timeline ID and begins writing WAL from that new point.

In clusters with very low write traffic, the upgraded primary may generate very few WAL segments after the upgrade.

When you make a first restore after the upgrade, the restored replicas need to replay WAL from the primary to catch up. To do that, PostgreSQL uses the pg_rewind tool. pg_rewind searches for a common WAL ancestor — a point in history where both the primary and the replica share the same WAL record. If there are few WAL records, there may be no common WAL ancestor and the replica may fail to rejoin the primary. When this happens, pg_rewind reports the could not find common ancestor of the source and target cluster's timelines error.

To address this issue, you must manually reinitialize the failed replica. Before doing so, check if this replica has any transactions that are not replicated anywhere else. Then remove its data directory and let the instance perform a full copy from the primary.

Alternatively, you can automate replica reinitialization with Patroni. Update the cluster configuration by setting the spec.patroni.removeDataDirectoryOnDivergedTimelines option in the Custom Resource before the upgrade. When timeline divergence is detected, the Operator instructs Patroni to automatically remove the replica’s data and resync it from the primary.
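A minimal sketch of this setting in the Custom Resource, assuming the field accepts a boolean value:

```yaml
spec:
  patroni:
    removeDataDirectoryOnDivergedTimelines: true
```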

Warning

The removeDataDirectoryOnDivergedTimelines option can lead to data loss. When the Operator resyncs the replica automatically, some transactions may be lost. The risk is usually small but not zero. Use this option only if you understand and accept this trade-off.


Last update: March 30, 2026
Created: March 30, 2026