    Configure Percona XtraDB Cluster on CentOS

    This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on CentOS 7 servers, using the packages from Percona repositories.

    • Node 1

      • Host name: percona1

      • IP address: 192.168.70.71

    • Node 2

      • Host name: percona2

      • IP address: 192.168.70.72

    • Node 3

      • Host name: percona3

      • IP address: 192.168.70.73

    Prerequisites

    The procedure described in this tutorial requires the following (a command sketch for the firewall and SELinux items follows the list):

    • All three nodes have CentOS 7 installed.

    • The firewall on all nodes is configured to allow connecting to ports 3306, 4444, 4567 and 4568.

    • SELinux on all nodes is disabled.
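
    A minimal sketch of both prerequisite steps, assuming CentOS 7's default firewalld and SELinux tooling (adapt to your environment; fully disabling SELinux requires SELINUX=disabled in /etc/selinux/config and a reboot, while permissive mode takes effect immediately):

      # Open the ports PXC uses: 3306 (MySQL clients), 4444 (SST),
      # 4567 (Galera group communication), 4568 (IST)
      sudo firewall-cmd --permanent --add-port=3306/tcp
      sudo firewall-cmd --permanent --add-port=4444/tcp
      sudo firewall-cmd --permanent --add-port=4567/tcp
      sudo firewall-cmd --permanent --add-port=4568/tcp
      sudo firewall-cmd --reload

      # Stop enforcing SELinux for the current session ...
      sudo setenforce 0
      # ... and persist the change across reboots
      sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config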

    Different from previous version

    The variable wsrep_sst_auth has been removed. Percona XtraDB Cluster 8.0 automatically creates the system user mysql.pxc.internal.session. During SST, the user mysql.pxc.sst.user and the role mysql.pxc.sst.role are created on the donor node.
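
    To verify these automatically created accounts from a connected node, you can list them in the mysql.user table; an illustrative query:

      mysql> SELECT user, host FROM mysql.user WHERE user LIKE 'mysql.pxc%';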

    Step 1. Installing PXC

    Install Percona XtraDB Cluster on all three nodes as described in Installing Percona XtraDB Cluster on Red Hat Enterprise Linux and CentOS.
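
    In outline, the installation on each node looks like the following (a sketch using the percona-release repository tool; treat the linked installation page as authoritative):

      # Install the Percona repository package and enable the PXC 8.0 repository
      sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
      sudo percona-release setup pxc-80
      # Install the cluster packages
      sudo yum install percona-xtradb-cluster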

    Step 2. Configuring the first node

    Individual nodes should be configured to be able to bootstrap the cluster. For more information about bootstrapping the cluster, see Bootstrapping the First Node.

    1. Make sure that the configuration file /etc/my.cnf on the first node (percona1) contains the following (an optional syntax check is shown after the listing):

      [mysqld]
      
      datadir=/var/lib/mysql
      user=mysql
      
      # Path to Galera library
      wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
      
      # Cluster connection URL contains the IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
      
      # In order for Galera to work correctly, the binlog format should be ROW
      binlog_format=ROW

      # Using the MyISAM storage engine is not recommended
      default_storage_engine=InnoDB

      # This InnoDB auto_increment locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2
      
      # Node 1 address
      wsrep_node_address=192.168.70.71
      
      # SST method
      wsrep_sst_method=xtrabackup-v2
      
      # Cluster name
      wsrep_cluster_name=my_centos_cluster
      
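      Before starting the node, you can optionally ask the server to parse the file without starting up; a sketch, assuming PXC 8.0 (which inherits the --validate-config option from MySQL 8.0.16):

      [root@percona1 ~]# mysqld --validate-config --user=mysql
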
    2. Start the first node with the following command:

      [root@percona1 ~]# systemctl start mysql@bootstrap.service
      

      This command starts the node with the wsrep_cluster_address variable effectively set to gcomm://, which bootstraps the cluster. If the node or MySQL has to be restarted later, there is no need to change the configuration file.
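
      A node started through mysql@bootstrap.service must also be stopped through the same unit; for example, assuming the systemd units shipped with the PXC packages:

      [root@percona1 ~]# systemctl stop mysql@bootstrap.service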

    3. After the first node has been started, cluster status can be checked with the following command:

      mysql> show status like 'wsrep%';
      

      This output shows that the cluster has been successfully bootstrapped.

      Expected output
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 1                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      75 rows in set (0.00 sec)
      

      Copy the automatically generated temporary password for the superuser account:

      $ sudo grep 'temporary password' /var/log/mysqld.log
      

      Use this password to log in as root:

      $ mysql -u root -p
      

      Change the password for the superuser account and log out. For example:

      mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'r00tP@$$';
      
      Expected output
      Query OK, 0 rows affected (0.00 sec)
      
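      If you only need a quick readiness check rather than the full wsrep listing, you can query a single status variable; for example:

      mysql> SHOW STATUS LIKE 'wsrep_cluster_size';

      Expected output
      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+
      1 row in set (0.00 sec)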

    Step 3. Configuring the second node

    1. Make sure that the configuration file /etc/my.cnf on the second node (percona2) contains the following:

      [mysqld]
      
      datadir=/var/lib/mysql
      user=mysql
      
      # Path to Galera library
      wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
      
      # Cluster connection URL contains IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
      
      # In order for Galera to work correctly, the binlog format should be ROW
      binlog_format=ROW
      
      # Using the MyISAM storage engine is not recommended
      default_storage_engine=InnoDB
      
      # This InnoDB auto_increment locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2
      
      # Node 2 address
      wsrep_node_address=192.168.70.72
      
      # Cluster name
      wsrep_cluster_name=my_centos_cluster
      
      # SST method
      wsrep_sst_method=xtrabackup-v2
      
    2. Start the second node with the following command:

      [root@percona2 ~]# systemctl start mysql
      
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on both nodes. The following is an example of status from the second node (percona2):

      mysql> show status like 'wsrep%';
      

      The output shows that the new node has been successfully added to the cluster.

      Expected output
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 2                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      40 rows in set (0.01 sec)
      
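      If the joiner takes a while to reach the Synced state, you can watch for SST-related messages in the MySQL error log (the same /var/log/mysqld.log file used earlier for the temporary password); a quick sketch, with exact messages varying by version:

      [root@percona2 ~]# grep -i 'sst' /var/log/mysqld.log
      [root@percona2 ~]# tail -f /var/log/mysqld.log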

    Step 4. Configuring the third node

    1. Make sure that the MySQL configuration file /etc/my.cnf on the third node (percona3) contains the following:

      [mysqld]
      
      datadir=/var/lib/mysql
      user=mysql
      
      # Path to Galera library
      wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
      
      # Cluster connection URL contains IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.70.71,192.168.70.72,192.168.70.73
      
      # In order for Galera to work correctly, the binlog format should be ROW
      binlog_format=ROW
      
      # Using the MyISAM storage engine is not recommended
      default_storage_engine=InnoDB
      
      # This InnoDB auto_increment locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2
      
      # Node 3 address
      wsrep_node_address=192.168.70.73
      
      # Cluster name
      wsrep_cluster_name=my_centos_cluster
      
      # SST method
      wsrep_sst_method=xtrabackup-v2
      
    2. Start the third node with the following command:

      [root@percona3 ~]# systemctl start mysql
      
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all three nodes. The following is an example of status from the third node (percona3):

      mysql> show status like 'wsrep%';
      

      The output confirms that the third node has joined the cluster.

      Expected output
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 3                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      40 rows in set (0.01 sec)
      
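      With all three nodes running, you can confirm the final cluster size from the shell of any node; an illustrative one-liner (adjust the credentials to your setup):

      $ mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

      The Value column should report 3 on every node.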

    Testing replication

    To test replication, let's create a new database on the second node, create a table for that database on the third node, and add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@percona2> CREATE DATABASE percona;
      

      The following output confirms that a new database has been created:

      Expected output
      Query OK, 1 row affected (0.01 sec)
      
    2. Switch to the newly created database on the third node:

      mysql@percona3> USE percona;
      

      The following output confirms that the database has been changed:

      Expected output
      Database changed
      
    3. Create a table on the third node:

      mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
      

      The following output confirms that a table has been created:

      Expected output
      Query OK, 0 rows affected (0.05 sec)
      
    4. Insert a record on the first node:

      mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');
      

      The following output confirms that the record has been inserted:

      Expected output
      Query OK, 1 row affected (0.02 sec)
      
    5. Retrieve all the rows from that table on the second node:

      mysql@percona2> SELECT * FROM percona.example;
      

      The following output confirms that all the rows have been retrieved:

      Expected output
      +---------+-----------+
      | node_id | node_name |
      +---------+-----------+
      |       1 | percona1  |
      +---------+-----------+
      1 row in set (0.00 sec)
      

      This simple procedure confirms that all nodes in the cluster are synchronized and working as intended.
