Percona Operator for MySQL 0.7.0

Percona Operator for MySQL allows users to deploy MySQL clusters with both asynchronous and group replication topologies. This release includes various stability improvements and bug fixes, bringing the Operator closer to the General Availability stage. Version 0.7.0 of the Percona Operator for MySQL is still a tech preview release and is not recommended for production environments. As of today, we recommend using Percona Operator for MySQL based on Percona XtraDB Cluster, which is production-ready and contains everything you need to quickly and consistently deploy and scale MySQL clusters in a Kubernetes-based environment, on-premises or in the cloud.


Documentation improvements

Within this release, a Quickstart guide was added to the Operator documentation to get you up and running in no time! It guides you step by step through quick installation (with multiple options), connecting to the database, inserting data, making a backup, and even integrating with Percona Monitoring and Management (PMM) to monitor your cluster.

Fine-tuning backups

This release brings a number of improvements for backups, making them more stable and robust. The new backup.backoffLimit Custom Resource option allows customizing the number of attempts the Operator should take to create a backup (the default is 6 retries after the first backup attempt fails for some reason, such as a faulty network connection or a cloud outage). Also, the Operator now performs a number of checks before starting the restore process to make sure that the needed cloud credentials and the actual backup are present. This helps avoid a faulty restore that would leave the database cluster in a non-functional state.
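As an illustration, the option could be set in the backup section of the Custom Resource similar to the following sketch (the exact placement follows the deploy/cr.yaml layout of your Operator version, and the apiVersion and cluster1 name shown here are assumptions):

```yaml
apiVersion: ps.percona.com/v1alpha1   # apiVersion may differ between Operator versions
kind: PerconaServerMySQL
metadata:
  name: cluster1                      # placeholder cluster name
spec:
  backup:
    # Retry a failed backup up to 3 times instead of the default 6
    backoffLimit: 3
```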

Other improvements

With our latest release, we put an all-hands-on-deck approach towards fine-tuning the Operator with code refactoring and a number of minor improvements, along with addressing key bugs reported by the community. We are extremely grateful to each and every person who submitted feedback and contributed to help us get to the bottom of these pesky issues.

New features

  • K8SPS-275: The Operator now checks if the needed Secrets exist and connects to the storage to check the existence of a backup before starting the restore process
  • K8SPS-277: The new topologySpreadConstraints Custom Resource option allows you to use Pod Topology Spread Constraints to achieve even distribution of Pods across the Kubernetes cluster
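The new option accepts the standard Kubernetes Pod Topology Spread Constraints fields. A minimal sketch, assuming the option lives under the component section of the Custom Resource (the label selector values are illustrative and must match your actual Pod labels):

```yaml
spec:
  mysql:
    topologySpreadConstraints:
      - maxSkew: 1                          # max difference in Pod count between topology domains
        topologyKey: kubernetes.io/hostname # spread across individual nodes
        whenUnsatisfiable: ScheduleAnyway   # prefer, but do not require, even spreading
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: percona-server  # illustrative label
```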

Improvements

  • K8SPS-129: The documentation on how to build and test the Operator is now available
  • K8SPS-295: Certificate issuer errors are now reflected in the Custom Resource status message and can be easily checked with the kubectl get ps -o yaml command
  • K8SPS-326: The Orchestrator mysql-monit sidecar container now inherits the Orchestrator resources, in the same way that the HAProxy mysql-monit container does (thanks to SlavaUtesinov for the contribution)

Bugs Fixed

  • K8SPS-124: The number of attempts the Operator should make to create a backup can now be parametrized through a Custom Resource option
  • K8SPS-146: Log messages were incorrectly mentioning semi-synchronous replication regardless of the actual replication type
  • K8SPS-173: Fix a bug due to which the Operator was silently resetting a component size to the minimum size allowed when allowUnsafeConfig was turned off, without any messages in the log
  • K8SPS-185: Fix a bug due to which the Orchestrator-MySQL (topology instances) connections were not encrypted
  • K8SPS-256: Fix a bug which caused SQL statements to be logged, potentially printing sensitive information in the logs
  • K8SPS-258: If two backups were created at the same time, both of them were set to the “Running” state, while only one of them was actually running and the other one was waiting
  • K8SPS-291: Fix a bug due to which instances were not actually removed when scaling down the group replication cluster
  • K8SPS-302: Fix a bug where HAProxy, Orchestrator, and MySQL (if exposed) Services were deleted when just pausing the cluster
  • K8SPS-303: Fix a bug where port 6033 (the default one for ProxySQL) for MySQL and port 33060 for Router were missing from the appropriate Services
  • K8SPS-311: If another restore was already running, the Operator was setting the restore state to Error instead of leaving it empty so that the restore could continue later; this behavior was not desirable when two restores were accidentally started at once
  • K8SPS-312: Fix a bug where Pods did not restart if the cluster1-ssl Secret was deleted and recreated by cert-manager, so the change of certificates did not take effect
  • K8SPS-315: A ConfigMap with custom configuration that used a name other than my.cnf for the configuration file was silently not applied, without any error message
  • K8SPS-316: Fix a bug where MySQL was started in read_only=true mode in the case of a single-instance database cluster configuration (thanks to Kilian Ries for the report)
  • K8SPS-330: Fix a bug due to which the admin port did not work in the case of an asynchronous replication cluster with only one Pod

Supported Platforms

The Operator was developed and tested with Percona Server for MySQL 8.0.36-28. Other options may also work but have not been tested. Other software components include:

  • Orchestrator 3.2.6-12
  • MySQL Router 8.0.36
  • XtraBackup 8.0.35-30
  • Percona Toolkit 3.5.7
  • HAProxy 2.8.5
  • PMM Client 2.41.1

The following platforms were tested and are officially supported by the Operator 0.7.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on backward compatibility offered by Kubernetes itself.

Get expert help

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services. Join K8S Squad to benefit from early access to features and “ask me anything” sessions with the Experts.

Last update: 2024-05-02