
Use PVC snapshots

Once PVC snapshots are configured, you can use them to back up your cluster and to restore data.

Make an on-demand backup from a PVC snapshot

  1. Configure the PerconaPGBackup object. Edit the deploy/backup.yaml manifest and specify the following keys:

    • pgCluster - the name of your cluster. Check it with the kubectl get pg -n $NAMESPACE command.

    • method - the backup method. Specify volumeSnapshot.

    Here’s the example configuration:

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGBackup
    metadata:
      name: my-snapshot-backup
    spec:
      pgCluster: cluster1
      method: volumeSnapshot
    
  2. Apply the configuration to start a backup:

    kubectl apply -f deploy/backup.yaml -n $NAMESPACE
    
  3. Check the backup status:

    kubectl get pg-backup my-snapshot-backup -n $NAMESPACE
    
    Sample output
    NAME                 CLUSTER    REPO    DESTINATION   STATUS      TYPE       COMPLETED   AGE
    my-snapshot-backup   cluster1   repo1                 Succeeded   snapshot   3m38s       3m53s
    

Make a scheduled snapshot-based backup

  1. Configure the backup schedule in your cluster Custom Resource. Edit the deploy/cr.yaml manifest and, in the backups.volumeSnapshots.schedule key, specify the schedule in Cron format for the snapshots to be made automatically. Your updated configuration should look like this:

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGCluster
    metadata:
      name: my-cluster
    spec:
      backups:
        volumeSnapshots:
          className: my-snapshot-class
          mode: offline
          schedule: "0 3 * * *"   # Every day at 3:00 AM
    
  2. Apply the configuration to update the cluster:

    kubectl apply -f deploy/cr.yaml -n $NAMESPACE
    
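The schedule value uses the standard five-field Cron syntax (minute, hour, day of month, month, day of week). For illustration, a weekly variant of the configuration above might look like the following sketch; the class name and schedule here are examples, not recommendations:

```yaml
backups:
  volumeSnapshots:
    className: my-snapshot-class
    mode: offline
    # Cron format: minute hour day-of-month month day-of-week
    schedule: "30 1 * * 0"   # every Sunday at 1:30 AM
```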

In-place restore from a PVC snapshot

An in-place restore is a restore to the same cluster using the PerconaPGRestore custom resource. You can make a full in-place restore or a point-in-time restore.

When you create the PerconaPGRestore object, the Operator performs the following steps:

  1. Suspends all instances in the cluster.
  2. Deletes all existing PVCs in the cluster. This removes all existing data, WAL, and tablespaces.
  3. Creates new PVCs with the snapshot serving as the data source. This restores the data, WAL, and tablespaces from that snapshot.
  4. Spins up a job to configure the restored PVCs to be used by the cluster.
  5. Resumes all instances in the cluster. The cluster starts with the data from the snapshot.

Important

An in-place restore overwrites the current data and is destructive. Any data that was written after the backup was made is lost. Therefore, consider restoring to a new cluster instead. This way you can evaluate the data before switching to the new cluster and don’t risk losing data in the existing cluster.

Follow the steps below to make a full in-place restore from a PVC snapshot.

  1. Configure the PerconaPGRestore object. Edit the deploy/restore.yaml manifest and specify the following keys:

    • pgCluster - the name of your cluster. Check it with the kubectl get pg -n $NAMESPACE command.

    • volumeSnapshotBackupName - the name of the PVC snapshot backup. Check it with the kubectl get pg-backup -n $NAMESPACE command.

    Here’s the example configuration:

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGRestore
    metadata:
      name: restore1
    spec:
      pgCluster: cluster1
      volumeSnapshotBackupName: my-snapshot-backup
    
  2. Apply the configuration to start a restore:

    kubectl apply -f deploy/restore.yaml -n $NAMESPACE
    
  3. Check the restore status:

    kubectl get pg-restore restore1 -n $NAMESPACE
    
    Sample output
    NAME         CLUSTER      STATUS      COMPLETED              AGE
    restore1     cluster1     Succeeded   2026-02-16T11:00:00Z   2m20s
    

In-place restore with point-in-time recovery

You can make a point-in-time restore from a PVC snapshot and replay WAL files from a WAL archive made with pgBackRest. For this scenario, your cluster must meet the following requirements:

  1. A pgBackRest configuration, including the backup storage and at least one repository. See the Configure backup storage section for configuration steps.
  2. At least one WAL archive in that repository.

The workflow for point-in-time restore is similar to a full in-place restore. After the Operator restores the data from the snapshot, it replays the WAL files from the WAL archive to bring the cluster to the target time.
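For reference, a minimal pgBackRest repository definition in the cluster Custom Resource might look like the sketch below. The repository name and storage size are illustrative; see the Configure backup storage section for the full set of options:

```yaml
spec:
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 5Gi
```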

Important

As with a full in-place restore, this operation overwrites the current data and is destructive; any data written after the backup was made is lost. Consider restoring to a new cluster instead so you can evaluate the data before switching over.

Follow the steps below to make a point-in-time restore from a PVC snapshot.

  1. Check the repo name and the target time for the restore.

    • List the backups:
    kubectl get pg-backup -n $NAMESPACE
    
    • For a pgBackRest backup, run the following command to get the latest restorable time:
    kubectl get pg-backup <backup_name> -n $NAMESPACE -o jsonpath='{.status.latestRestorableTime}'
    
  2. Configure the PerconaPGRestore object. Edit the deploy/restore.yaml manifest and specify the following keys:

    • pgCluster - the name of your cluster. Check it with the kubectl get pg -n $NAMESPACE command.

    • volumeSnapshotBackupName - the name of the PVC snapshot backup.

    • repoName - the name of the pgBackRest repository that contains the WAL archives.

    • options - the options for the restore. Specify the following options:

      • --type=time - set to time to make a point-in-time restore.
      • --target - set the target time for the restore.

    Here’s the example configuration:

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGRestore
    metadata:
      name: pitr-restore
    spec:
      pgCluster: cluster1
      volumeSnapshotBackupName: my-snapshot-backup
      repoName: repo1
      options:
        - --type=time
        - --target="2026-02-16T11:00:00Z"
    
  3. Apply the configuration to start a restore:

    kubectl apply -f deploy/restore.yaml -n $NAMESPACE
    
  4. Check the restore status:

    kubectl get pg-restore pitr-restore -n $NAMESPACE
    
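The options list is passed through to the pgBackRest restore command, so other pgBackRest recovery settings can be supplied the same way. For example, pgBackRest's --target-action option controls what the server does once the recovery target is reached; this fragment is a sketch and assumes your pgBackRest version supports that option:

```yaml
options:
  - --type=time
  - --target="2026-02-16T11:00:00Z"
  - --target-action=promote   # promote the server once the target time is reached
```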

Create a new cluster from a PVC snapshot

You can create a new cluster from a PVC snapshot. This is useful when you want to restore data to a new cluster without overwriting data in the existing one.

To create a new cluster from a PVC snapshot, configure the PerconaPGCluster object and specify a VolumeSnapshot object as the dataSource. You also need to configure the instances and backups sections to set up the new cluster.

For more information about the dataSource options, see the Understand the dataSource options section. Also check the Custom Resource reference for all available options.

Follow the steps below to create a new cluster from a PVC snapshot.

  1. Create the namespace where the new cluster will be deployed and export it as an environment variable:

    kubectl create namespace <new-namespace>
    export NEW_NAMESPACE=<new-namespace>
    
  2. Configure the PerconaPGCluster object. Edit the deploy/cr.yaml manifest and specify the following keys:

    • dataSource - the VolumeSnapshot created for the PVC snapshot backup. Get its name with the kubectl get pg-backup my-snapshot-backup -n $NAMESPACE -o jsonpath='{.status.snapshot.dataVolumeSnapshotRef}' command on the source cluster.

    • instances - the instances configuration for the new cluster.

    • backups - the backups configuration for the new cluster.

    Here’s the example configuration:

    apiVersion: pgv2.percona.com/v2
    kind: PerconaPGCluster
    metadata:
      name: new-cluster
    spec:
      instances:
        - name: instance1
          replicas: 3
          dataVolumeClaimSpec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
      dataSource:
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: <name-of-the-volume-snapshot>
    
  3. Apply the configuration to create the new cluster:

    kubectl apply -f deploy/cr.yaml -n $NEW_NAMESPACE
    

The new cluster is provisioned shortly, with its data volumes created from the snapshot of the source cluster.


Last update: March 26, 2026
Created: March 26, 2026