    Providing Backups

    The Operator stores MySQL backups outside the Kubernetes cluster: on Amazon S3 or S3-compatible storage, or on Azure Blob Storage.

    The Operator currently allows making cluster backups on demand (i.e. manually at any moment). Backups are made with the Percona XtraBackup tool.

    Backups are controlled by the backup section of the deploy/cr.yaml file. This section contains the backup.enabled key (it should be set to true to enable backups) and a number of options in the storages subsection, needed to access the cloud storage where backups are kept.
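
    Schematically, the section looks as follows. This is only a generic sketch: the <storage-name> placeholder and the trailing dots stand for the storage-specific options described in the subsections below, and the name you choose for the storage is later referenced when making a backup.

    backup:
      enabled: true
      storages:
        <storage-name>:
          type: s3        # s3 or azure
          ...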

    Backups on Amazon S3 or S3-compatible storage

    Since backups are stored separately on Amazon S3, a secret with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY keys should be present on the Kubernetes cluster. Create a secrets file with these base64-encoded keys: for example, a deploy/backup-s3.yaml file with the following contents.

    apiVersion: v1
    kind: Secret
    metadata:
      name: cluster1-s3-credentials
    type: Opaque
    data:
      AWS_ACCESS_KEY_ID: UkVQTEFDRS1XSVRILUFXUy1BQ0NFU1MtS0VZ
      AWS_SECRET_ACCESS_KEY: UkVQTEFDRS1XSVRILUFXUy1TRUNSRVQtS0VZ
    

    Note

    The following command can be used to get a base64-encoded string from a plain text one (use the first variant in Linux and the second one in macOS):

    $ echo -n 'plain-text-string' | base64 --wrap=0
    
    $ echo -n 'plain-text-string' | base64
    

    The name value is the Kubernetes secret name which will be used further, and AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the keys to access the S3 storage (obviously, they should contain proper values for this access to work). To take effect, the secrets file should be applied with the appropriate command to create the secret object, e.g. kubectl apply -f deploy/backup-s3.yaml (for Kubernetes).
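
    If you prefer not to encode the values manually, the same secret object can alternatively be created directly from plain-text values with kubectl create secret generic; this is just an equivalent sketch with placeholder values, not an extra step required by the Operator:

    $ kubectl create secret generic cluster1-s3-credentials \
        --from-literal=AWS_ACCESS_KEY_ID=REPLACE-WITH-AWS-ACCESS-KEY \
        --from-literal=AWS_SECRET_ACCESS_KEY=REPLACE-WITH-AWS-SECRET-KEY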

    All the data needed to access the S3-compatible cloud to store backups should be put into the backup.storages subsection. Here is an example of deploy/cr.yaml which uses Amazon S3 storage for backups:

    ...
    backup:
      enabled: true
      ...
      storages:
        s3-us-west:
          type: s3
          s3:
            bucket: S3-BACKUP-BUCKET-NAME-HERE
            region: us-west-2
            credentialsSecret: cluster1-s3-credentials
    

    If you use some S3-compatible storage instead of the original Amazon S3, the endpointUrl key is needed in the s3 subsection. It points to the actual cloud used for backups and is specific to the cloud provider. For example, using Google Cloud involves the following endpointUrl:

    endpointUrl: https://storage.googleapis.com
    

    Also, you can use the prefix option to specify the path (sub-folder) to the backups inside the S3 bucket. If prefix is not set, backups are stored in the root directory of the bucket.
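
    For illustration, a storages entry for a hypothetical S3-compatible service combining the endpointUrl and prefix keys could look as follows (all values are placeholders):

    ...
    backup:
      ...
      storages:
        s3-compatible:
          type: s3
          s3:
            bucket: S3-BACKUP-BUCKET-NAME-HERE
            prefix: cluster1/backups
            endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE
            credentialsSecret: cluster1-s3-credentials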

    The options within this subsection are further explained in the Operator Custom Resource options.

    One option which should be mentioned separately is credentialsSecret, which is a Kubernetes secret for backups. The value of this key should be the same as the name used to create the secret object (cluster1-s3-credentials in the last example).
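
    You can check that the referenced secret is actually present in the cluster with a standard kubectl command (an optional sanity check, not a step required by the Operator):

    $ kubectl get secret cluster1-s3-credentials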

    Backups on Microsoft Azure Blob storage

    Since backups are stored separately on Azure Blob Storage, a secret with the AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY keys should be present on the Kubernetes cluster. Create a secrets file with these base64-encoded keys: for example, a deploy/backup-azure.yaml file with the following contents.

    apiVersion: v1
    kind: Secret
    metadata:
      name: cluster1-azure-credentials
    type: Opaque
    data:
      AZURE_STORAGE_ACCOUNT_NAME: UkVQTEFDRS1XSVRILUFXUy1BQ0NFU1MtS0VZ
      AZURE_STORAGE_ACCOUNT_KEY: UkVQTEFDRS1XSVRILUFXUy1TRUNSRVQtS0VZ
    

    Note

    The following command can be used to get a base64-encoded string from a plain text one (use the first variant in Linux and the second one in macOS):

    $ echo -n 'plain-text-string' | base64 --wrap=0
    
    $ echo -n 'plain-text-string' | base64
    

    The name value is the Kubernetes secret name which will be used further, and the AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY credentials will be used to access the storage (obviously, they should contain proper values for this access to work). To take effect, the secrets file should be applied with the appropriate command to create the secret object, e.g. kubectl apply -f deploy/backup-azure.yaml (for Kubernetes).
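
    As with S3, the same secret object can alternatively be created from plain-text values with kubectl create secret generic; a sketch with placeholder values:

    $ kubectl create secret generic cluster1-azure-credentials \
        --from-literal=AZURE_STORAGE_ACCOUNT_NAME=REPLACE-WITH-STORAGE-ACCOUNT-NAME \
        --from-literal=AZURE_STORAGE_ACCOUNT_KEY=REPLACE-WITH-STORAGE-ACCOUNT-KEY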

    All the data needed to access the Azure Blob storage to store backups should be put into the backup.storages subsection. Here is an example of deploy/cr.yaml which uses Azure Blob storage for backups:

    ...
    backup:
      enabled: true
      ...
      storages:
        azure-blob:
          type: azure
          azure:
            container: <your-container-name>
            credentialsSecret: cluster1-azure-credentials
    

    The options within this subsection are further explained in the Operator Custom Resource options.

    One option which should be mentioned separately is credentialsSecret, which is a Kubernetes secret for backups. The value of this key should be the same as the name used to create the secret object (cluster1-azure-credentials in the last example).

    Making on-demand backup

    To make an on-demand backup, the user should first make changes in the deploy/cr.yaml configuration file: set the backup.enabled key to true and configure backup storage in the backup.storages subsection.

    When the deploy/cr.yaml file contains correctly configured keys and is applied with the kubectl command, use a special backup configuration YAML file with the following contents:

    • backup name in the metadata.name key,

    • Percona Distribution for MySQL Cluster name in the spec.clusterName key,

    • storage name from deploy/cr.yaml in the spec.storageName key,

    • the delete-backup finalizer in the metadata.finalizers list (it triggers the actual deletion of backup files from the S3 bucket when the corresponding backup object is removed manually or on schedule).

    An example of such a file is deploy/backup.yaml.

    When the backup destination is configured and applied with the kubectl apply -f deploy/cr.yaml command, make the backup as follows:

    $ kubectl apply -f deploy/backup.yaml
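
    You can then follow the backup progress by querying the backup objects; assuming the backup object was named backup1 (as in the inline example below), the check could look like this:

    $ kubectl get ps-backup backup1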
    

    Note

    Storing backup settings in a separate file can be replaced by passing its content to the kubectl apply command as follows:

    $ cat <<EOF | kubectl apply -f-
    apiVersion: ps.percona.com/v1alpha1
    kind: PerconaServerMySQLBackup
    metadata:
      name: backup1
      finalizers:
        - delete-backup
    spec:
      clusterName: cluster1
      storageName: s3-us-west
    EOF
    

    Restore the cluster from a previously saved backup

    Backups can be restored not only on the Kubernetes cluster where they were made, but also on any Kubernetes-based environment with the Operator installed.

    Note

    When restoring to a new Kubernetes-based environment, make sure it has a Secrets object with the same user passwords as in the original cluster. More details about secrets can be found in System Users.
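
    One way to do this is to export the Secrets object from the original cluster and apply it in the new environment before restoring. A minimal sketch, assuming the default cluster1-secrets secrets name from deploy/cr.yaml (run the first command against the original cluster and the second one against the new environment):

    $ kubectl get secret cluster1-secrets -o yaml > cluster1-secrets.yaml
    $ kubectl apply -f cluster1-secrets.yaml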

    An example of the restore configuration file is deploy/restore.yaml. The options that can be used in it are described in the restore options reference.

    The following things are needed to restore a previously saved backup:

    • Make sure that the cluster is running.

    • Find out the correct names for the backup and the cluster. Available backups can be listed with the following command:

      $ kubectl get ps-backup
      

      Note

      Obviously, you can make this check only on the same cluster on which you have previously made the backup.

      And the following command will list existing Percona Distribution for MySQL Cluster names in the current Kubernetes-based environment:

      $ kubectl get ps
      

    When the correct names for the backup and the cluster are known, backup restoration can be done in the following way.

    1. Set appropriate keys in the deploy/restore.yaml file.

      • set the spec.clusterName key to the name of the target cluster to restore the backup on,

      • if you are restoring the backup on the same Kubernetes-based cluster you used to save it, set the spec.backupName key to the name of your backup,

      • if you are restoring the backup on a Kubernetes-based cluster different from the one you used to save it, set the spec.backupSource subsection instead of the spec.backupName field to point to the appropriate cloud storage:

        If the backup was stored on S3-compatible storage, the backupSource subsection should contain a destination key equal to the S3 bucket (with the special s3:// prefix) and the backup name, followed by the necessary S3 configuration keys, the same as in the deploy/cr.yaml file:

        ...
        backupSource:
          destination: s3://S3-BUCKET-NAME/BACKUP-NAME
          s3:
            bucket: S3-BUCKET-NAME
            credentialsSecret: my-cluster-name-backup-s3
            region: us-west-2
            endpointUrl: https://URL-OF-THE-S3-COMPATIBLE-STORAGE
            ...
        

        If the backup was stored on Azure Blob storage, the backupSource subsection should contain a destination key equal to the Azure Blob container and backup name, followed by the necessary Azure configuration keys, the same as in the deploy/cr.yaml file:

        ...
        backupSource:
          destination: AZURE-CONTAINER-NAME/BACKUP-NAME
          azure:
            container: AZURE-CONTAINER-NAME
            credentialsSecret: my-cluster-azure-secret
            ...
        
    2. After that, the actual restoration process can be started as follows:

      $ kubectl apply -f deploy/restore.yaml
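
    You can then follow the restore progress by querying the restore objects. The ps-restore short name is assumed to be available here; if it is not, use the full PerconaServerMySQLRestore resource name instead:

      $ kubectl get ps-restore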
      

    Note

    Storing the restore settings in a separate file can be replaced by passing its content to the kubectl apply command as follows:

    $ cat <<EOF | kubectl apply -f-
    apiVersion: "ps.percona.com/v1alpha1"
    kind: "PerconaServerMySQLRestore"
    metadata:
      name: "restore1"
    spec:
      clusterName: "cluster1"
      backupName: "backup1"
    EOF
    

    Delete the unneeded backup

    Manually deleting a previously saved backup requires nothing more than its name. This name can be taken from the list of available backups returned by the following command:

    $ kubectl get ps-backup
    

    When the name is known, the backup can be deleted as follows:

    $ kubectl delete ps-backup/<backup-name>
    


