    Install Percona Server for MongoDB on Minikube

    Installing the Percona Operator for MongoDB on Minikube is the easiest way to try it locally without a cloud provider. Minikube runs Kubernetes on GNU/Linux, Windows, or macOS using a system-wide hypervisor, such as VirtualBox, KVM/QEMU, VMware Fusion, or Hyper-V. It is a popular way to test Kubernetes applications locally before deploying them to the cloud.

    The following steps are needed to run Percona Operator for MongoDB on Minikube:

    1. Install Minikube, using the method recommended for your system. This includes the installation of the following three components:

      1. the kubectl tool,

      2. a hypervisor, if it is not already installed,

      3. the minikube package itself.

      After the installation, run minikube start --memory=5120 --cpus=4 --disk-size=30g (these parameters increase the virtual machine limits for CPU cores, memory, and disk to ensure stable operation of the Operator). When executed, this command downloads the needed virtual machine images, then initializes and runs the cluster. After Minikube has started successfully, you can optionally run the Kubernetes dashboard, which visually represents the state of your cluster: executing minikube dashboard will start the dashboard and open it in your default web browser.
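
      For reference, here are the two commands from this step as they are typed in the terminal:

      $ minikube start --memory=5120 --cpus=4 --disk-size=30g
      $ minikube dashboard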

    2. Deploy the Operator using the following command:

      $ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.14.0/deploy/bundle.yaml
      
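      Before moving on, you can check that the Operator is up and running. A quick way to do this (assuming the default percona-server-mongodb-operator Deployment name created by bundle.yaml) is:

      $ kubectl get deployment percona-server-mongodb-operator

      The Deployment should report 1/1 Pods ready.
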
    3. Deploy the MongoDB cluster with:

      $ kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.14.0/deploy/cr-minimal.yaml
      

      Note

      This deploys a one-shard MongoDB cluster with a one-node replica set, one mongos node, and one config server node. The deploy/cr-minimal.yaml manifest is intended for a minimal, non-production deployment. For more configuration options, please see deploy/cr.yaml and the Custom Resource options reference. You can clone the repository with all manifests and source code by executing the following command:

      $ git clone -b v1.14.0 https://github.com/percona/percona-server-mongodb-operator
      

      After editing the needed options, apply your modified deploy/cr.yaml file as follows:

      $ kubectl apply -f deploy/cr.yaml
      

      The creation process may take some time.

      The process is over when both the Operator Pod and the database cluster Pods have reached their Running status. The kubectl get pods output should look like this:

      NAME                                              READY   STATUS    RESTARTS   AGE
      percona-server-mongodb-operator-d859b69b6-t44vk   1/1     Running   0          50s
      minimal-cluster-cfg-0                             1/1     Running   0          41s
      minimal-cluster-mongos-0                          1/1     Running   0          36s
      minimal-cluster-rs0-0                             1/1     Running   0          39s
      
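      If you prefer not to re-run the command manually, you can follow the Pods as they start by adding the --watch flag:

      $ kubectl get pods --watch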

      You can also track the progress via the Kubernetes dashboard.


    4. During the previous steps, the Operator has generated several Secrets, including the password for the admin user, which you will need to access the cluster. Use kubectl get secrets to see the list of Secrets objects (the Secrets object you are interested in is named after the cluster; for the deploy/cr-minimal.yaml deployment it is minimal-cluster). Then kubectl get secret minimal-cluster -o yaml will return the YAML file with the generated Secrets, including the MONGODB_USER_ADMIN_USER and MONGODB_USER_ADMIN_PASSWORD strings, which should look as follows:

      ...
      data:
        ...
        MONGODB_USER_ADMIN_PASSWORD: aDAzQ0pCY3NSWEZ2ZUIzS1I=
        MONGODB_USER_ADMIN_USER: dXNlckFkbWlu
      

      Here the actual login name and password are base64-encoded; running echo 'aDAzQ0pCY3NSWEZ2ZUIzS1I=' | base64 --decode will bring the password back to a human-readable form.
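
      If you prefer a one-liner, you can extract and decode the password in a single command. The following sketch uses kubectl's jsonpath output and assumes the Secret name shown above:

      $ kubectl get secret minimal-cluster -o jsonpath='{.data.MONGODB_USER_ADMIN_PASSWORD}' | base64 --decode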

    5. Check connectivity to the newly created cluster.

      First of all, run a container with a MongoDB client and connect its console output to your terminal. The following command will do this, naming the new Pod percona-client:

      $ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.4.18-18 --restart=Never -- bash -il
      

      Executing it may take some time while the corresponding Pod is deployed. Now run the mongo tool in the percona-client command shell, using the login (userAdmin) and the password you obtained and decoded from the Secret (in this example, h03CJBcsRXFveB3KR):

      $ mongo "mongodb://userAdmin:userAdmin123456@minimal-cluster-name-mongos.default.svc.cluster.local/admin?ssl=false"
      
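      Once connected, you can run a quick sanity check from the mongo shell prompt. The ping command below is a standard MongoDB diagnostic command; a healthy mongos replies with "ok" : 1:

      mongos> db.runCommand({ ping: 1 })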
