    How to restore a backup to a new Kubernetes-based environment

    The Operator allows restoring a backup not only on the Kubernetes cluster where it was made, but also on any other Kubernetes-based environment with the Operator installed.

    When restoring to a new Kubernetes-based environment, make sure it has a Secrets object with the same user passwords as the original cluster. More details about Secrets can be found in System Users. The name of the required Secrets object is set by the spec.secrets key in the deploy/cr.yaml file (my-cluster-name-secrets by default).
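
    If you have access to the original cluster, one possible way to recreate this Secrets object is to export it and apply it in the new environment. A minimal sketch, assuming the default my-cluster-name-secrets name and that kubectl is pointed first at the original and then at the new cluster:

    $ kubectl get secret my-cluster-name-secrets -o yaml > my-cluster-secrets.yaml
    # edit the file to remove cluster-specific metadata
    # (resourceVersion, uid, creationTimestamp), then apply it
    # against the new environment:
    $ kubectl apply -f my-cluster-secrets.yaml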

    You will need the correct names for the backup and the cluster. If you have access to the original cluster, the available backups can be listed with the following command:

    $ kubectl get psmdb-backup
    
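
    The output looks similar to the following (the values in this example are illustrative ones, not taken from a real cluster):

    NAME      CLUSTER           STORAGE      DESTINATION                       STATUS   COMPLETED   AGE
    backup1   my-cluster-name   s3-us-west   s3://S3-BUCKET-NAME/BACKUP-NAME   ready    5m          10m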

    And the following command will list available clusters:

    $ kubectl get psmdb
    
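
    Again, an illustrative example of the output:

    NAME              ENDPOINT                                        STATUS   AGE
    my-cluster-name   my-cluster-name-rs0.default.svc.cluster.local   ready    5d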

    Note

    If you have configured storing operations logs for point-in-time recovery, you will be able to roll back the cluster to a specific date and time. Otherwise, restoring a backup without point-in-time recovery is the only option.

    When the correct names for the backup and the cluster are known, backup restoration can be done in one of the following two ways.

    Without point-in-time recovery:

    1. Set the appropriate keys in the deploy/backup/restore.yaml file.

      • set the spec.clusterName key to the name of the target cluster to restore the backup on,

      • set the spec.backupSource subsection instead of the spec.backupName field to point to the appropriate S3-compatible storage. This backupSource subsection should contain the backup type (either logical or physical) and a destination key, followed by the necessary storage configuration keys, same as in the deploy/cr.yaml file:

        ...
        backupSource:
          type: logical
          destination: s3://S3-BUCKET-NAME/BACKUP-NAME
          s3:
            credentialsSecret: my-cluster-name-backup-s3
            region: us-west-2
            endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE
        

      As you can see, in the case of S3-compatible storage the destination value is composed of three parts: the s3:// prefix, the S3 bucket name, and the actual backup name, which you have already found out using the kubectl get psmdb-backup command. For Azure Blob storage, you don’t put the prefix, and use your container name as an equivalent of a bucket (see the sketch after this list).

      • you can also use a storageName key to specify the exact name of the storage (the actual storage should already be defined in the backup.storages subsection of the deploy/cr.yaml file):

        ...
        storageName: s3-us-west
        backupSource:
          destination: s3://S3-BUCKET-NAME/BACKUP-NAME

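      For Azure Blob storage, the equivalent backupSource could look as follows. This is a sketch only: it assumes an azure subsection with the same credentialsSecret and container keys used by the backup.storages subsection of the deploy/cr.yaml file, and all names in it are placeholders:

        ...
        backupSource:
          type: logical
          destination: CONTAINER-NAME/BACKUP-NAME
          azure:
            credentialsSecret: my-cluster-azure-secret
            container: CONTAINER-NAME
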
    2. After that, the actual restoration process can be started as follows:

      $ kubectl apply -f deploy/backup/restore.yaml
      
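      For reference, a complete deploy/backup/restore.yaml for this scenario might look as follows (a sketch: the restore1 object name and all storage values are placeholders to replace with your own):

        apiVersion: psmdb.percona.com/v1
        kind: PerconaServerMongoDBRestore
        metadata:
          name: restore1
        spec:
          clusterName: my-cluster-name
          backupSource:
            type: logical
            destination: s3://S3-BUCKET-NAME/BACKUP-NAME
            s3:
              credentialsSecret: my-cluster-name-backup-s3
              region: us-west-2
              endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE

      You can then follow the restoration progress by listing the restore objects:

      $ kubectl get psmdb-restore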
    With point-in-time recovery:

    1. Set the appropriate keys in the deploy/backup/restore.yaml file.

      • set the spec.clusterName key to the name of the target cluster to restore the backup on,

      • put the additional restoration parameters into the pitr section:

        ...
        spec:
          clusterName: my-cluster-name
          pitr:
            type: date
            date: YYYY-MM-DD hh:mm:ss
        
      • set the spec.backupSource subsection instead of the spec.backupName field to point to the appropriate S3-compatible storage. This backupSource subsection should contain a destination key (the S3 bucket with the special s3:// prefix, followed by the backup name) and the necessary S3 configuration keys, same as in the deploy/cr.yaml file:

        ...
        backupSource:
          destination: s3://S3-BUCKET-NAME/BACKUP-NAME
          s3:
            credentialsSecret: my-cluster-name-backup-s3
            region: us-west-2
            endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE
        
      • you can also use a storageName key to specify the exact name of the storage (the actual storage should already be defined in the backup.storages subsection of the deploy/cr.yaml file):

        ...
        storageName: s3-us-west
        backupSource:
          destination: s3://S3-BUCKET-NAME/BACKUP-NAME
        
    2. Run the actual restoration process:

      $ kubectl apply -f deploy/backup/restore.yaml
      
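      As above, a complete deploy/backup/restore.yaml for the point-in-time recovery case might look like this (a sketch with placeholder values):

        apiVersion: psmdb.percona.com/v1
        kind: PerconaServerMongoDBRestore
        metadata:
          name: restore1
        spec:
          clusterName: my-cluster-name
          pitr:
            type: date
            date: YYYY-MM-DD hh:mm:ss
          backupSource:
            destination: s3://S3-BUCKET-NAME/BACKUP-NAME
            s3:
              credentialsSecret: my-cluster-name-backup-s3
              region: us-west-2
              endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE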


