    Configure storage for backups

    You can configure storage for backups in the backup.storages subsection of the Custom Resource, using the deploy/cr.yaml configuration file.

    You should also create the Kubernetes Secret object with credentials needed to access the storage.
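    As an optional shortcut, you can also let kubectl base64-encode the credentials for you by creating the Secret directly from literal values instead of hand-editing a Secret file. The sketch below assumes the Amazon S3 case described in the steps that follow; the Secret name and credential values are placeholders, and the key names must match the ones listed for your storage type:

      $ kubectl create secret generic my-cluster-name-backup-s3 \
          --from-literal=AWS_ACCESS_KEY_ID=REPLACE-WITH-AWS-ACCESS-KEY \
          --from-literal=AWS_SECRET_ACCESS_KEY=REPLACE-WITH-AWS-SECRET-KEY
      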

    1. To store backups on Amazon S3, you need to create a Secret with the following values:

      • the metadata.name key is the name which you will further use to refer to your Kubernetes Secret,
      • the data.AWS_ACCESS_KEY_ID and data.AWS_SECRET_ACCESS_KEY keys are base64-encoded credentials used to access the storage (these keys must contain valid credentials for the access to work).

      Create the Secrets file with these base64-encoded keys following the deploy/backup-s3.yaml example:

      apiVersion: v1
      kind: Secret
      metadata:
        name: my-cluster-name-backup-s3
      type: Opaque
      data:
        AWS_ACCESS_KEY_ID: UkVQTEFDRS1XSVRILUFXUy1BQ0NFU1MtS0VZ
        AWS_SECRET_ACCESS_KEY: UkVQTEFDRS1XSVRILUFXUy1TRUNSRVQtS0VZ
      

      Note

      You can use the following command to get a base64-encoded string from a plain text one:

      On Linux:

      $ echo -n 'plain-text-string' | base64 --wrap=0
      
      On macOS:

      $ echo -n 'plain-text-string' | base64
      

      Once the editing is over, create the Kubernetes Secret object as follows:

      $ kubectl apply -f deploy/backup-s3.yaml
      
    2. Put the data needed to access the S3-compatible cloud into the backup.storages subsection of the Custom Resource.

      • storages.<NAME>.type should be set to s3 (substitute the <NAME> part with an arbitrary name you will later use to refer to this storage when making backups and restores).

      • The storages.<NAME>.s3.credentialsSecret key should be set to the name used to refer to your Kubernetes Secret (my-cluster-name-backup-s3 in the example above).

      • storages.<NAME>.s3.bucket and storages.<NAME>.s3.region should contain the S3 bucket and region. Also, you can use the storages.<NAME>.s3.prefix option to specify the path (sub-folder) to the backups inside the S3 bucket. If prefix is not set, backups are stored in the root directory of the bucket.

      • if you use some S3-compatible storage instead of the original Amazon S3, add the endpointUrl key in the s3 subsection, which should point to the actual cloud used for backups. This value is specific to the cloud provider. For example, using Google Cloud involves the following endpointUrl (see also the storage sketch after this list):

        endpointUrl: https://storage.googleapis.com
        

      The options within the storages.<NAME>.s3 subsection are further explained in the Operator Custom Resource options.

      Here is an example of the deploy/cr.yaml configuration file which configures Amazon S3 storage for backups:

      ...
      backup:
        ...
        storages:
          s3-us-west:
            type: s3
            s3:
              bucket: S3-BACKUP-BUCKET-NAME-HERE
              region: us-west-2
              credentialsSecret: my-cluster-name-backup-s3
        ...
      
      Using AWS EC2 instances for backups makes it possible to automate access to AWS S3 buckets based on IAM roles for Service Accounts with no need to specify the S3 credentials explicitly.

      The following steps are needed to turn this feature on:

      • Create the IAM instance profile and the permission policy within it that specifies the access level granting access to S3 buckets.
      • Attach the IAM profile to an EC2 instance.
      • Configure an S3 storage bucket and verify the connection from the EC2 instance to it.
      • Do not provide s3.credentialsSecret for the storage in deploy/cr.yaml.
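      For illustration, here is a minimal sketch of a storages entry that points to an S3-compatible endpoint (Google Cloud Storage in this case, matching the endpointUrl shown above). The storage name gcs-us-east, the bucket name, and the prefix are placeholder values, not names used elsewhere in this guide:

      ...
      backup:
        ...
        storages:
          gcs-us-east:
            type: s3
            s3:
              bucket: GCS-BACKUP-BUCKET-NAME-HERE
              region: us-east-1
              prefix: psmdb-backups
              credentialsSecret: my-cluster-name-backup-s3
              endpointUrl: https://storage.googleapis.com
        ...
      
      If you rely on the IAM-based access described above instead, the entry would look the same, but with the credentialsSecret line omitted.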
    1. To store backups on Azure Blob storage, you need to create a Secret with the following values:

      • the metadata.name key is the name which you will further use to refer to your Kubernetes Secret,
      • the data.AZURE_STORAGE_ACCOUNT_NAME and data.AZURE_STORAGE_ACCOUNT_KEY keys are base64-encoded credentials used to access the storage (these keys must contain valid credentials for the access to work).

      Create the Secrets file with these base64-encoded keys following the deploy/backup-azure.yaml example:

      apiVersion: v1
      kind: Secret
      metadata:
        name: my-cluster-azure-secret
      type: Opaque
      data:
        AZURE_STORAGE_ACCOUNT_NAME: UkVQTEFDRS1XSVRILUFXUy1BQ0NFU1MtS0VZ
        AZURE_STORAGE_ACCOUNT_KEY: UkVQTEFDRS1XSVRILUFXUy1TRUNSRVQtS0VZ
      

      Note

      You can use the following command to get a base64-encoded string from a plain text one:

      On Linux:

      $ echo -n 'plain-text-string' | base64 --wrap=0
      
      On macOS:

      $ echo -n 'plain-text-string' | base64
      

      Once the editing is over, create the Kubernetes Secret object as follows:

      $ kubectl apply -f deploy/backup-azure.yaml
      
    2. Put the data needed to access the Azure Blob storage into the backup.storages subsection of the Custom Resource.

      • storages.<NAME>.type should be set to azure (substitute the <NAME> part with an arbitrary name you will later use to refer to this storage when making backups and restores).

      • The storages.<NAME>.azure.credentialsSecret key should be set to the name used to refer to your Kubernetes Secret (my-cluster-azure-secret in the example above).

      • The storages.<NAME>.azure.container option should contain the name of the Azure Blob container. Also, you can use the storages.<NAME>.azure.prefix option to specify the path (sub-folder) to the backups inside the container. If prefix is not set, backups are stored in the root directory of the container.

      The options within the storages.<NAME>.azure subsection are further explained in the Operator Custom Resource options.

      Here is an example of the deploy/cr.yaml configuration file which configures Azure Blob storage for backups:

      ...
      backup:
        ...
        storages:
          azure-blob:
            type: azure
            azure:
              container: <your-container-name>
              prefix: psmdb
              credentialsSecret: my-cluster-azure-secret
        ...
      

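    Once the Secret exists and the backup.storages subsection is filled in, apply the updated Custom Resource for the changes to take effect. The following is a rough sketch of applying and checking the configuration; the resource names assume the examples above, and psmdb is the short name of the PerconaServerMongoDB Custom Resource:

      $ kubectl apply -f deploy/cr.yaml
      $ kubectl get secret my-cluster-name-backup-s3
      $ kubectl get psmdb
      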