Monitor database with Percona Monitoring and Management (PMM)

We recommend monitoring the database with Percona Monitoring and Management (PMM) integrated within the Operator. You can also use custom monitoring solutions, but their deployment is not automated by the Operator and requires manual setup.

In this section you will learn how to monitor Percona XtraDB Cluster with PMM.

PMM is a client/server application. It consists of the PMM Server and a number of PMM Clients. PMM Clients run on each node with the database you wish to monitor. In Kubernetes, this means that PMM Clients run as sidecar containers for the database Pods.

A PMM Client collects needed metrics and sends the gathered data to the PMM Server. As a user, you connect to the PMM Server to see database metrics on a number of dashboards.

PMM Server and PMM Client are installed separately.

Install PMM Server

You must have PMM server up and running. You can run PMM Server as a Docker image, a virtual appliance, or on an AWS instance. Please refer to the official PMM documentation for the installation instructions.

Install PMM Client

Install PMM Client as a side-car container in your Kubernetes-based environment:

  1. Authorize PMM Client within PMM Server. You can authorize it either with an API key or with a user name and password.

     To use an API key:

     1. Generate the PMM Server API key. Specify the Admin role when getting the API key.

        Warning: The API key is not rotated automatically.

     2. Edit the deploy/secrets.yaml secrets file and specify the PMM API key for the pmmserverkey option.
     3. Apply the configuration for the changes to take effect:

        $ kubectl apply -f deploy/secrets.yaml -n <namespace>

     To use a user name and password:

     1. Check that the serverUser key in the deploy/cr.yaml file contains your PMM Server user name (admin by default), and make sure the pmmserver key in the deploy/secrets.yaml secrets file contains the password specified for the PMM Server during its installation.
     2. Apply the configuration for the changes to take effect:

        $ kubectl apply -f deploy/secrets.yaml -n <namespace>
      
  2. Update the pmm section in the deploy/cr.yaml file:

    • Set pmm.enabled=true.
    • Specify your PMM Server hostname or IP address in the pmm.serverHost option. The PMM Server address must be resolvable and reachable from within your cluster.
      pmm:
        enabled: true
        image: percona/pmm-client:2.44.0
        serverHost: monitoring-service
    
  3. Apply the changes:

    $ kubectl apply -f deploy/cr.yaml -n <namespace>
    
  4. Check that the corresponding Pods are not in a cycle of stopping and restarting. This cycle occurs if there were errors in the previous steps:

    $ kubectl get pods -n <namespace>
    $ kubectl logs <cluster-name>-pxc-0 -c pmm-client -n <namespace>
    

Check the metrics

Let’s see how the collected data is visualized in PMM.

Now you can access PMM via HTTPS in a web browser, logging in with your user name and password, to see Percona XtraDB Cluster metrics on a number of dashboards.

Specify additional PMM parameters

You can specify additional parameters for pmm-admin add mysql and pmm-admin add proxysql commands, if needed. Use the pmm.pxcParams and pmm.proxysqlParams Custom Resource options for that.
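For example, a sketch of the pmm section with such additional parameters might look as follows. The particular flags and values here (--disable-tablestats-limit, --environment) are illustrative assumptions, not required settings:

    pmm:
      enabled: true
      image: percona/pmm-client:2.44.0
      serverHost: monitoring-service
      pxcParams: "--disable-tablestats-limit=2000"
      proxysqlParams: "--environment=prod"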

The Operator automatically manages common Percona XtraDB Cluster Service Monitoring parameters mentioned in the official PMM documentation, such as username, password, service-name, host, etc. Assigning values to these parameters is not recommended and can negatively affect the functionality of the PMM setup carried out by the Operator.

Update the secrets file

The deploy/secrets.yaml file contains all values for each key/value pair in a convenient plain text format. But the resulting Secrets Objects contains passwords stored as base64-encoded strings. If you want to update the password field, you need to encode the new password into the base64 format and pass it to the Secrets Object.

To encode a password or any other parameter, run one of the following commands:

on Linux:

$ echo -n "password" | base64 --wrap=0

on macOS:

$ echo -n "password" | base64

For example, to set the new PMM API key to new_key in the cluster1-secrets object, run one of the following commands:

on Linux:

$ kubectl patch secret/cluster1-secrets -p '{"data":{"pmmserverkey": "'$(echo -n new_key | base64 --wrap=0)'"}}'

on macOS:

$ kubectl patch secret/cluster1-secrets -p '{"data":{"pmmserverkey": "'$(echo -n new_key | base64)'"}}'
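As a sanity check, you can verify the encoding round trip locally before patching the Secret. A minimal sketch, assuming GNU coreutils base64:

```shell
# Encode the new key the same way the patch command does:
key_b64="$(echo -n new_key | base64 --wrap=0)"
echo "$key_b64"                  # prints bmV3X2tleQ== - the value stored in the Secret

# Decode it back to confirm the round trip:
echo "$key_b64" | base64 --decode   # prints new_key
```

If the decoded value matches what you intended, the same encoding expression is safe to use inside the kubectl patch command.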

Check PMM Client health and status

A probe is a diagnostic mechanism in Kubernetes which helps determine whether a container is functioning correctly and whether it should continue to run, accept traffic, or be restarted.

PMM Client has the following probes:

  • Readiness probe determines when a PMM Client is available and ready to accept traffic
  • Liveness probe determines when to restart a PMM Client

To configure probes, use the spec.pmm.readinessProbes and spec.pmm.livenessProbes Custom Resource options.
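For example, a sketch of probe tuning in the Custom Resource might look as follows. The timing values are illustrative assumptions; the fields follow the standard Kubernetes probe format:

    pmm:
      enabled: true
      readinessProbes:
        initialDelaySeconds: 10
        timeoutSeconds: 5
        periodSeconds: 30
      livenessProbes:
        initialDelaySeconds: 60
        timeoutSeconds: 5
        periodSeconds: 10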

Add custom PMM prefix to the cluster name

When a user has several clusters with identical namespace, cluster, and Pod names, and a single PMM Server, only one of them can be added to the PMM Server instance because of this name collision.

For such cases you can specify a custom prefix for the cluster name. The prefix is visible within PMM and makes the names unique.

You can do it by setting the PMM_PREFIX environment variable via the Secret, specified in the pxc.envVarsSecret Custom Resource option.

Here is an example of the YAML file used to create the Secret with the my-unique-prefix- prefix encoded in base64 format:

apiVersion: v1
kind: Secret
metadata:
  name: my-env-var-secrets
type: Opaque
data:
  PMM_PREFIX: bXktdW5pcXVlLXByZWZpeC0=

Follow the instruction for details on creating a Secret with environment variables and adding it to the Custom Resource.
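The data value in the Secret above is just the prefix, base64-encoded. A quick sketch to reproduce it locally and double-check the manifest before applying it:

```shell
# Encode the prefix; tr strips the newline that some base64 implementations add:
prefix="my-unique-prefix-"
encoded="$(echo -n "$prefix" | base64 | tr -d '\n')"
echo "$encoded"   # prints bXktdW5pcXVlLXByZWZpeC0=
```

Alternatively, kubectl can build the same Secret without manual encoding: kubectl create secret generic my-env-var-secrets --from-literal=PMM_PREFIX=my-unique-prefix-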

Implement custom monitoring solution without PMM

You can deploy your own monitoring solution instead of PMM. Since the Operator knows nothing about it, you will not get the same level of deployment automation from the Operator side, and there will be no configuration via the Custom Resource. The approach here is to deploy your monitoring agent as a sidecar container in the Percona XtraDB Cluster Pods. See the sidecar containers documentation for details.

Note

You can use the monitor system user for monitoring purposes, as PMM Client does. The Operator tracks updates of the monitor user password in the Secrets object and restarts Percona XtraDB Cluster Pods when PMM is enabled or when the sidecar container references the internal Secrets object internal-<clustername> (technical users secrets used by the Operator, internal-cluster1 by default) as follows:

pxc:
  sidecars:
  - name: metrics
    image: my_repo/my_custom_monitoring_solution:1.0
    env:
      - name: MYSQLD_EXPORTER_PASSWORD
        valueFrom:
          secretKeyRef:
            name: internal-cluster1
            key: monitor
    ...

Last update: 2025-04-15