
Scale Percona Server for MongoDB on Kubernetes

One of the great advantages brought by Kubernetes is the ease of application scaling. Scaling a Deployment up or down ensures new Pods are created and scheduled to available Kubernetes nodes.

Scaling can be vertical or horizontal. Vertical scaling adds more compute or storage resources to MongoDB nodes; horizontal scaling adds more nodes to the cluster. High availability looks technically similar, because it also involves additional nodes, but its purpose is to keep the system alive in case of server or network failures.

Vertical scaling

Scale compute resources

The Operator deploys and manages multiple components, such as MongoDB replica set instances, mongos and config server replica set instances, and others. You can manage CPU or memory for every component separately by editing the corresponding sections in the Custom Resource. We follow the structure for requests and limits that Kubernetes provides.

To add more resources to your MongoDB replica set instances, edit the following section in the Custom Resource:

spec:
  replsets:
  - name: rs0
    resources:
      requests:
        memory: 4G
        cpu: 2
      limits:
        memory: 4G
        cpu: 2
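
Apply the changes as usual for them to take effect; the Operator will then restart the affected Pods to pick up the new resource values:

$ kubectl apply -f deploy/cr.yaml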

See the reference documentation for the Custom Resource options for more details about other components.

Scale storage

Kubernetes manages storage with a PersistentVolume (PV), a segment of storage supplied by the administrator, and a PersistentVolumeClaim (PVC), a request for storage from a user. Starting with Kubernetes v1.11, a user can increase the size of an existing PVC object (considered stable since Kubernetes v1.24). The user cannot shrink the size of an existing PVC object.

Starting from the Operator version 1.16.0, you can scale Percona Server for MongoDB storage automatically by configuring the Custom Resource manifest. Alternatively, you can scale the storage manually. In either case, the volume type must support volume expansion.

Find exact details about PVCs and the supported volume types in the Kubernetes documentation.
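
If you are not sure which storage class your cluster volumes use, you can look it up from the existing PVCs; the STORAGECLASS column of the output shows the name:

$ kubectl get pvc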

Storage resizing with Volume Expansion capability

Certain volume types support volume expansion. Run the following command to check whether your storage class supports it:

$ kubectl describe sc <storage class name> | grep AllowVolumeExpansion
Expected output
AllowVolumeExpansion: true
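
If the field is absent or set to false, expansion is not enabled on the storage class. Provided the underlying provisioner actually supports resizing (an assumption you should verify with your storage provider), you can enable it with a patch like the following:

$ kubectl patch sc <storage class name> -p '{"allowVolumeExpansion": true}'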

To enable storage resizing via volume expansion, do the following:

  1. Set the enableVolumeExpansion Custom Resource option to true (it is turned off by default).
  2. Specify new storage size for the replsets.<NAME>.volumeSpec.persistentVolumeClaim.resources.requests.storage and/or sharding.configsvrReplSet.volumeSpec.persistentVolumeClaim.resources.requests.storage options in the Custom Resource.

    Here is an example of defining a new storage size in the deploy/cr.yaml file:

    spec:
      ...
      enableVolumeExpansion: true
      ...
      replsets:
        ...
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: <NEW STORAGE SIZE>
      ...
      sharding:
        configsvrReplSet:
          volumeSpec:
            persistentVolumeClaim:
              resources:
                requests:
                  storage: <NEW STORAGE SIZE>
    
  3. Apply changes as usual:

    $ kubectl apply -f deploy/cr.yaml
    

The storage size change takes some time. When it starts, the Operator automatically adds the pvc-resize-in-progress annotation to the PerconaServerMongoDB Custom Resource. The annotation contains the timestamp of the resize start and indicates that the resize operation is running. After the resize finishes, the Operator deletes this annotation.
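
You can watch the resize as it happens; a quick sketch, where my-cluster-name is a placeholder for your cluster name and psmdb is the short name of the PerconaServerMongoDB resource:

$ kubectl get pvc -w
$ kubectl get psmdb my-cluster-name -o yaml | grep pvc-resize-in-progress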

Manual scaling without Volume Expansion capability

Manual scaling is the way to go if:

  • your version of the Operator is older than 1.16.0,
  • your volumes have a type that does not support Volume Expansion, or
  • you do not rely on automated scaling.

You will need to delete Pods and their persistent volumes one by one to resync the data to the new volumes. This way you can also shrink the storage.

Here’s how to resize the storage:

  1. Update the Custom Resource with the new storage size by editing and applying the deploy/cr.yaml file:

    spec:
      ...
      replsets:
        ...
        volumeSpec:
          persistentVolumeClaim:
            resources:
              requests:
                storage: <NEW STORAGE SIZE>
    
  2. Apply the Custom Resource for the changes to come into effect:

    $ kubectl apply -f deploy/cr.yaml
    
  3. Delete the StatefulSet with the --cascade=orphan option:

    $ kubectl delete sts <statefulset-name> --cascade=orphan
    

    The Pods will not go down, and the Operator will recreate the StatefulSet:

    $ kubectl get sts <statefulset-name>
    
    Expected output
    NAME                  READY   AGE
    my-cluster-name-rs0   3/3     39s
    
  4. Scale up the cluster (optional)

    Changing the storage size requires terminating the Pods, which reduces the compute capacity of the cluster and might cause performance issues. To maintain performance during the operation, we will increase the size of the cluster from 3 to 5 nodes:

    spec:
      ...
      replsets:
        ...
        size: 5
    

    Apply the change:

    $ kubectl apply -f deploy/cr.yaml
    

    New Pods will already have the new storage size:

    $ kubectl get pvc
    
    Expected output
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    mongod-data-my-cluster-name-cfg-0   Bound    pvc-a2b37f4d-6f11-443c-8670-de82ce9fc335   10Gi       RWO            standard       110m
    mongod-data-my-cluster-name-cfg-1   Bound    pvc-ded949e5-0f93-4f57-ab2c-7c5fd9528fa0   10Gi       RWO            standard       109m
    mongod-data-my-cluster-name-cfg-2   Bound    pvc-f3a441dd-94b6-4dc0-b96c-58b7851dfaa0   10Gi       RWO            standard       108m
    mongod-data-my-cluster-name-rs0-0   Bound    pvc-b183c40b-c165-445a-aacd-9a34b8fff227   19Gi       RWO            standard       49m
    mongod-data-my-cluster-name-rs0-1   Bound    pvc-f186426b-cbbe-4c31-860e-97a4dfca3de0   19Gi       RWO            standard       47m
    mongod-data-my-cluster-name-rs0-2   Bound    pvc-6beb6ccd-8b3a-4580-b3ef-a2345a2c21d6   19Gi       RWO            standard       45m 
    
  5. Delete PVCs and Pods with the old storage size one by one. Wait for data to sync before you proceed to the next node; you can verify the sync state with the check shown after this procedure.

    $ kubectl delete pvc <PVC NAME>
    $ kubectl delete pod <POD NAME>
    

    A new PVC with the new storage size will be created along with the Pod.
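
To check that a recreated member has finished syncing before moving on, you can query the replica set state from one of the Pods. This is a minimal sketch, assuming a recent image that ships mongosh; clusterAdmin is the built-in administrative user, and <password> is a placeholder for the password stored in your users Secret:

$ kubectl exec -it my-cluster-name-rs0-0 -- mongosh "mongodb://clusterAdmin:<password>@localhost/admin" --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'

All members should report PRIMARY or SECONDARY before you delete the next Pod.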


Horizontal scaling

Replica Sets

You can change the size of each replica set separately by setting the replsets.<NAME>.size option in the appropriate subsection.

For example, the following update in deploy/cr.yaml sets the size of the MongoDB Replica Set rs0 to 5 nodes:

spec:
  ...
  replsets:
  - name: rs0
    size: 5
    ...

Don’t forget to apply changes as usual, running the kubectl apply -f deploy/cr.yaml command.
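
You can watch the new members join by listing the Pods of the replica set; the selector below assumes the app.kubernetes.io/replset label the Operator sets on its Pods (adjust if your labels differ):

$ kubectl get pods -l app.kubernetes.io/replset=rs0 -w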

Note

The Operator will not allow you to scale Percona Server for MongoDB with the kubectl scale statefulset <StatefulSet name> command, as doing so puts the size configuration options out of sync.

Sharding

You can change the size of the different components of your MongoDB sharded cluster by setting the size option in the appropriate subsections: replsets.<NAME>.size for shards, sharding.configsvrReplSet.size for config servers, and sharding.mongos.size for mongos instances.

Changing the number of shards

You can change the number of shards of an existing cluster by adding or removing members in the spec.replsets subsection.

For example, given the following cluster that has 2 shards:

spec:
  ...
  replsets:
  - name: rs0
    size: 3
    ...
  - name: rs1
    size: 3
    ...

You can add an extra shard by applying the following configuration:

spec:
  ...
  replsets:
  - name: rs0
    size: 3
    ...
  - name: rs1
    size: 3
    ...
  - name: rs2
    size: 3
    ...
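
Once the new replica set is up and the Operator has registered the shard, you can confirm the shard list from a mongos Pod; a sketch, where my-cluster-name-mongos-0 and the clusterAdmin / <password> credentials are placeholders:

$ kubectl exec -it my-cluster-name-mongos-0 -- mongosh "mongodb://clusterAdmin:<password>@localhost/admin" --eval 'db.adminCommand({ listShards: 1 })'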

Similarly, you can reduce the number of shards by removing the rs1 and rs2 elements:

spec:
  ...
  replsets:
  - name: rs0
    size: 3
    ...

Note

The Operator will not allow you to remove existing shards that contain user-created collections. It is your responsibility to ensure the shard’s data is migrated to the remaining shards in the cluster before applying this change.
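
To see whether a shard still acts as the primary shard for any database (and therefore holds unsharded collections), you can query the config database through mongos before removing it; rs1 and the credentials below are placeholders:

$ kubectl exec -it my-cluster-name-mongos-0 -- mongosh "mongodb://clusterAdmin:<password>@localhost/admin" --eval 'db.getSiblingDB("config").databases.find({ primary: "rs1" })'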

