    Setting up DBaaS

    To use the Database as a Service (DBaaS) solution in PMM, a few things need to be set up first, including a suitable Kubernetes cluster. If you already have a Kubernetes cluster, you can jump ahead and enable DBaaS in PMM.

    If you don't have a Kubernetes cluster available, you can use the free Kubernetes cluster provided by Percona for evaluation; it lets you try DBaaS for three hours before the cluster expires. For a Kubernetes cluster that doesn't expire, you can use our "easy script"; you can find the instructions here.

    The sections that follow outline the steps to create your own Kubernetes cluster in a few popular ways.

    Red Hat, CentOS

    # yum-config-manager is provided by the yum-utils package.
    yum -y install yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum -y install docker-ce
    # Let the centos user run Docker, then enable and start the service.
    usermod -a -G docker centos
    systemctl enable docker
    systemctl start docker
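
    To verify that Docker is installed and running before moving on, you can optionally start a throwaway test container:

    docker --version
    docker run --rm hello-world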
    

    Debian, Ubuntu

    # Install Docker from the distribution repositories
    # (alternatively, follow Docker's official instructions to use the download.docker.com apt repository).
    apt-get update
    apt-get -y install docker.io
    systemctl enable docker
    systemctl start docker
    

    minikube

    Please follow minikube’s documentation to install it.

    Red Hat, CentOS

    # Download the latest minikube binary, put it on the PATH, and use minikube's bundled kubectl.
    yum -y install curl
    curl -Lo /usr/local/sbin/minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
    chmod +x /usr/local/sbin/minikube
    ln -s /usr/local/sbin/minikube /usr/sbin/minikube
    alias kubectl='minikube kubectl --'
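
    To check that the binary and the kubectl alias work, you can optionally run:

    minikube version
    minikube kubectl -- version --client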
    

    Start PMM Server and activate the DBaaS feature

    • To start a fully working three-node XtraDB cluster, consisting of 3 HAProxy, 3 PXC, and 6 PMM Client containers, you need at least 9 vCPUs available for minikube (1 vCPU for each HAProxy and PXC container, and 0.5 vCPU for each PMM Client container).
    • DBaaS does not depend on PMM Client.
    • You can pass --env ENABLE_DBAAS=1 to enable the DBaaS feature when starting the pmm-server container. You can also omit the variable and enable the feature later from the PMM UI (see the note below).
    • Add the option --network minikube if you run PMM Server and minikube in the same Docker instance, so that they share a single network and the kubeconfig works.
    • Add the options --env PMM_DEBUG=1 and/or --env PMM_TRACE=1 if you need extended debug details.
    1. Start PMM server:

      docker run --detach --publish 80:80 --publish 443:443 --name pmm-server percona/pmm-server:2
      
    2. Change the default administrator credentials from the CLI:

      (This step is optional, because the same can be done from the web interface of PMM on first login.)

      docker exec -t pmm-server bash -c 'ln -s /srv/grafana /usr/share/grafana/data; chown -R grafana:grafana /usr/share/grafana/data; grafana-cli --homepath /usr/share/grafana admin reset-admin-password <RANDOM_PASS_GOES_IN_HERE>'
      

    Important

    You must activate DBaaS using the PMM UI if you omitted --env ENABLE_DBAAS=1 when starting up the container.
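
    For reference, starting the container with DBaaS already enabled looks like this (the same image and port mappings as in step 1):

    docker run --detach --publish 80:80 --publish 443:443 --env ENABLE_DBAAS=1 --name pmm-server percona/pmm-server:2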

    Create a Kubernetes cluster

    The DBaaS feature uses Kubernetes clusters to deploy database clusters. You must first create a Kubernetes cluster and then add it to PMM using its kubeconfig.

    Here are links to the current Kubernetes versions supported by DBaaS:

    • Percona Server for MySQL
    • Percona Server for MongoDB

    Minikube

    1. Configure and start minikube:

      minikube start --cpus=16 --memory=32G
      
    2. Get your kubeconfig details from minikube (you need these to register your Kubernetes cluster with PMM Server):

      minikube kubectl -- config view --flatten --minify
      

      You will need to copy this output to your clipboard and continue with adding a Kubernetes cluster to PMM.
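
      If you prefer working with a file instead of the clipboard, you can redirect the same output (the file name is just an example):

      minikube kubectl -- config view --flatten --minify > minikube-kubeconfig.yaml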

    Amazon AWS EKS

    1. Create your cluster via eksctl or the Amazon AWS interface. For example:

      eksctl create cluster --write-kubeconfig --name=your-cluster-name --zones=us-west-2a,us-west-2b --kubeconfig <PATH_TO_KUBECONFIG>
      
    2. Copy the resulting kubeconfig and follow these instructions to register the Kubernetes cluster with PMM.
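
      Before registering the cluster, you can optionally confirm that the kubeconfig written by eksctl works:

      kubectl --kubeconfig <PATH_TO_KUBECONFIG> get nodes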

    Google GKE

    1. Create your cluster either with the Google Cloud Console or the gcloud command-line tool:

      The command below assumes that your gcloud command-line tool is properly configured and that your user is authenticated and authorized to manage GKE clusters. This example creates a minimal zonal cluster with preemptible node machines, which is ideal for testing the DBaaS functionality.

      gcloud container clusters create --zone europe-west3-c pmm-dbaas-cluster --cluster-version 1.19 --machine-type e2-standard-4 --preemptible --num-nodes=3
      gcloud container clusters get-credentials pmm-dbaas-cluster --zone=europe-west3-c
      kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<<your_user@your_company.com>>
      
    2. Create a ServiceAccount, ClusterRole, and RoleBindings (the required Roles are deployed automatically when PMM deploys the Operators) using the following command:

      cat <<EOF | kubectl apply -f -
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: percona-dbaas-cluster-operator
      ---
      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: service-account-percona-server-dbaas-xtradb-operator
      subjects:
      - kind: ServiceAccount
        name: percona-dbaas-cluster-operator
      roleRef:
        kind: Role
        name: percona-xtradb-cluster-operator
        apiGroup: rbac.authorization.k8s.io
      ---
      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: service-account-percona-server-dbaas-psmdb-operator
      subjects:
      - kind: ServiceAccount
        name: percona-dbaas-cluster-operator
      roleRef:
        kind: Role
        name: percona-server-mongodb-operator
        apiGroup: rbac.authorization.k8s.io
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: service-account-percona-server-dbaas-admin
      rules:
      - apiGroups: ["*"]
        resources: ["*"]
        verbs: ["*"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: service-account-percona-server-dbaas-operator-admin
      subjects:
      - kind: ServiceAccount
        name: percona-dbaas-cluster-operator
        namespace: default
      roleRef:
        kind: ClusterRole
        name: service-account-percona-server-dbaas-admin
        apiGroup: rbac.authorization.k8s.io
      EOF
      
    3. Extract variables required to generate a kubeconfig:

      name=`kubectl get serviceAccounts percona-dbaas-cluster-operator -o json | jq  -r '.secrets[].name'`
      certificate=`kubectl get secret $name -o json | jq -r  '.data."ca.crt"'`
      token=`kubectl get secret $name -o json | jq -r  '.data.token' | base64 -d`
      server=`kubectl cluster-info | grep 'Kubernetes control plane' | cut -d ' ' -f 7`
      
    4. Generate your kubeconfig file and copy the output (or save it to a file, as shown after these steps):

      echo "
      apiVersion: v1
      kind: Config
      users:
      - name: percona-dbaas-cluster-operator
        user:
          token: $token
      clusters:
      - cluster:
          certificate-authority-data: $certificate
          server: $server
        name: self-hosted-cluster
      contexts:
      - context:
          cluster: self-hosted-cluster
          user: percona-dbaas-cluster-operator
        name: svcs-acct-context
      current-context: svcs-acct-context
      "
      
    5. Follow the instructions on How to add a Kubernetes cluster, using the kubeconfig from the previous step.
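
    To check the generated configuration before adding it to PMM, you can save the output of step 4 to a file (the name below is just an example) and query the cluster with it:

    kubectl --kubeconfig dbaas-kubeconfig.yaml get nodes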

    Deleting clusters

    If a public address is set in PMM Settings, an API key is created for each DB cluster; you can find these keys on the /graph/org/apikeys page. Do not delete them manually (for now, until issue PMM-8045 is fixed): when a DB cluster is removed from DBaaS, its related API key is also removed.

    Clean up a Kubernetes cluster before deleting it, or you will be left with orphaned resources. For example, if you only run eksctl delete cluster to delete an Amazon EKS cluster without cleaning it up first, many orphaned resources remain, such as CloudFormation stacks, load balancers, EC2 instances, and network interfaces. The same applies to Google GKE clusters.

    Cleaning up the Kubernetes cluster

    1. Delete all database clusters, backups, and restores:

      kubectl delete perconaxtradbclusterbackups.pxc.percona.com --all
      kubectl delete perconaxtradbclusters.pxc.percona.com --all
      kubectl delete perconaxtradbclusterrestores.pxc.percona.com --all
      
      kubectl delete perconaservermongodbbackups.psmdb.percona.com --all
      kubectl delete perconaservermongodbs.psmdb.percona.com --all
      kubectl delete perconaservermongodbrestores.psmdb.percona.com --all
      
    2. The deploy directory of the dbaas-controller repository contains the manifests used to deploy the operators. Use them to delete the operators and related resources from the cluster.

      Important

      • Do NOT execute this step before all database clusters, backups, and restores have been deleted in the previous step. Otherwise you may not be able to delete the namespace DBaaS lives in.
      • Also be careful with this step if you are running DBaaS in more than one namespace, as it deletes cluster-level CustomResourceDefinitions needed to run DBaaS and would therefore break DBaaS in the other namespaces. In that case, delete only the operator Deployments.
      # Delete the PXC operator and related resources.
      curl https://raw.githubusercontent.com/percona-platform/dbaas-controller/7a5fff023994cecf6bde15705365114004b50b41/deploy/pxc-operator.yaml | kubectl delete -f -
      
      # Delete the PSMDB operator and related resources.
      curl https://raw.githubusercontent.com/percona-platform/dbaas-controller/7a5fff023994cecf6bde15705365114004b50b41/deploy/psmdb-operator.yaml | kubectl delete -f -
      
    3. Delete the namespace where DBaaS is running. This removes all remaining namespace-level resources, if any are left.

      kubectl delete namespace <your-namespace>
      
    4. Delete the Kubernetes cluster. The method depends on your cloud provider:

      • Delete GKE cluster.
      • Delete Amazon EKS cluster.
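
    If you want to confirm that the cleanup removed everything DBaaS created before you delete the cluster itself, you can list any remaining Percona custom resource definitions and operator deployments:

    kubectl get crd | grep -i percona
    kubectl get deployments --all-namespaces | grep -i operator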

    Run PMM Server as a Docker container for DBaaS

    1. Start PMM Server with DBaaS enabled:

      docker run --detach --name pmm-server --publish 80:80 --publish 443:443 --env ENABLE_DBAAS=1 percona/pmm-server:2
      

      Important

      • Use --network minikube if running PMM Server and minikube in the same Docker instance. This way they share a single network and the kubeconfig will work.
      • Use Docker variables --env PMM_DEBUG=1 --env PMM_TRACE=1 to see extended debug details.
    2. Change the default administrator credentials:

      This step is optional, because the same can be done from the web interface of PMM on the first login.

      docker exec -t pmm-server bash -c 'ln -s /srv/grafana /usr/share/grafana/data; chown -R grafana:grafana /usr/share/grafana/data; grafana-cli --homepath /usr/share/grafana admin reset-admin-password <RANDOM_PASS_GOES_IN_HERE>'
      
    3. Set the public address for PMM Server in the PMM Settings UI.

    4. Follow the steps for Add a Kubernetes cluster.

    5. Follow the steps for Add a DB Cluster.

    6. Get the IP address to connect your app/service:

      minikube kubectl -- get services
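
      If a service is exposed as NodePort or LoadBalancer, minikube can also print a ready-to-use URL for it (the service name below is a placeholder):

      minikube service <service-name> --url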
      

    Exposing PSMDB and XtraDB clusters for access by external clients

    To make database cluster services reachable by external clients, create a LoadBalancer service for them or manually expose ports, for example:

    kubectl expose deployment <deployment-name> --type=NodePort
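
    As a sketch, you could also expose an existing database cluster service through a LoadBalancer. The service name below is a placeholder, so check kubectl get services for the actual name the operator created, and use port 27017 instead of 3306 for PSMDB:

    kubectl expose service <db-cluster-service> --name=<db-cluster-service>-external --type=LoadBalancer --port=3306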
    

    See also

    • DBaaS Dashboard
    • Install minikube
    • Setting up a Standalone MySQL Instance on Kubernetes & exposing it using Nginx Ingress Controller
    • Use a Service to Access an Application in a Cluster
    • Exposing applications using services


