    Frequently asked questions

    How do I report bugs?

    All bugs can be reported on JIRA. Please submit the error.log files from all of the nodes.

    How do I solve locking issues like auto-increment?

    For auto-increment, Percona XtraDB Cluster changes auto_increment_offset for each new node. In a single-node workload, locking is handled the same way as in InnoDB. Under write load on several nodes, Percona XtraDB Cluster uses optimistic locking, and the application may receive a lock error in response to a COMMIT query.
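
    You can see this on a running node. On a three-node cluster, each node typically reports auto_increment_increment equal to the cluster size and a distinct auto_increment_offset; the output below is illustrative for one node of three:

    mysql> SHOW VARIABLES LIKE 'auto_increment%';

    Expected output (illustrative)
    +--------------------------+-------+
    | Variable_name            | Value |
    +--------------------------+-------+
    | auto_increment_increment | 3     |
    | auto_increment_offset    | 2     |
    +--------------------------+-------+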

    What if a node crashes and InnoDB recovery rolls back some transactions?

    When a crashed node restarts, it copies the whole dataset from another node (if there were changes to the data since the crash).

    How can I check the Galera node health?

    To check the health of a Galera node, use the following query:

    SELECT 1 FROM dual;
    

    The previous query can produce the following results:

    • The query returns 1 (the node is healthy)

    • Unknown error (the node is online, but Galera is not connected/synced with the cluster)

    • Connection error (the node is not online)

    You can also check a node’s health with the clustercheck script. First set up the clustercheck user:

    mysql> CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY PASSWORD
    '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19';
    
    Expected output
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';
    

    You can then check a node’s health by running the clustercheck script:

    /usr/bin/clustercheck clustercheck password 0
    

    If the node is running, you should get the following status:

    HTTP/1.1 200 OK
    Content-Type: text/plain
    Connection: close
    Content-Length: 40
    
    Percona XtraDB Cluster Node is synced.
    

    If the node is not synced or is offline, the status looks like this:

    HTTP/1.1 503 Service Unavailable
    Content-Type: text/plain
    Connection: close
    Content-Length: 44
    
    Percona XtraDB Cluster Node is not synced.
    

    Note

    The clustercheck script has the following syntax:

    <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>

    Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local

    Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
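
    The server_args values above are typically set in an xinetd service definition, so that a load balancer can poll the node over HTTP. A minimal sketch, assuming the conventional port 9200 and a service name of mysqlchk (both are assumptions, not fixed requirements):

    service mysqlchk
    {
        disable         = no
        port            = 9200
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        server_args     = clustercheck password 1 /var/log/clustercheck.log 0 /etc/my.cnf.local
    }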

    How does Percona XtraDB Cluster handle big transactions?

    Percona XtraDB Cluster populates the write set in memory before replication, and this sets a practical limit on the size of transactions that make sense. There are wsrep variables that cap the maximum row count and the maximum write-set size, to make sure that the server does not run out of memory.
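
    These limits are controlled by the wsrep_max_ws_rows and wsrep_max_ws_size variables and can be set in my.cnf; the values below are illustrative, not recommendations:

    [mysqld]
    # Reject transactions that modify more rows than this
    wsrep_max_ws_rows = 1048576
    # Reject transactions whose write set exceeds this size in bytes (here, 1 GiB)
    wsrep_max_ws_size = 1073741824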

    Is it possible to have different table structures on the nodes?

    For example, if there are four nodes, with four tables: sessions_a, sessions_b, sessions_c, and sessions_d, and you want each table in a separate node, this is not possible for InnoDB tables. However, it will work for MEMORY tables.

    What if a node fails or there is a network issue between nodes?

    The quorum mechanism in Percona XtraDB Cluster will decide which nodes can accept traffic and will shut down the nodes that do not belong to the quorum. Later when the failure is fixed, the nodes will need to copy data from the working cluster.

    The algorithm for quorum is Dynamic Linear Voting (DLV). The quorum is preserved if (and only if) the summed weight of the nodes in a new component strictly exceeds half the weight of the preceding Primary Component, minus the nodes that left gracefully. For example, if the previous Primary Component had six nodes of weight 1 and two of them left gracefully, a new component preserves quorum only if its weight strictly exceeds (6 − 2)/2 = 2, that is, only if it contains at least three of the remaining four nodes.

    The mechanism is described in detail in the Galera documentation.

    How would the quorum mechanism handle split brain?

    The quorum mechanism cannot handle split brain. If there is no way to decide on the primary component, Percona XtraDB Cluster has no way to resolve the split brain. The minimal recommendation is to have three nodes. However, it is possible to allow a node to handle traffic with the following option:

    wsrep_provider_options="pc.ignore_sb = yes"
    

    Why does a node stop accepting commands if the other one fails in a 2-node setup?

    This is expected behavior to prevent split brain. For more information, see previous question or Galera documentation.

    Is it possible to set up a cluster without state transfer?

    It is possible in two ways:

    1. By default, Galera reads the starting position from the text file <datadir>/grastate.dat. Make this file identical on all nodes, and there will be no state transfer when a node starts.

    2. Use the wsrep_start_position variable to start the nodes with the same UUID:seqno value.
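
    For reference, a grastate.dat file looks roughly like this (the UUID and seqno below are placeholders); the uuid and seqno values form the UUID:seqno pair that wsrep_start_position expects:

    # GALERA saved state
    version: 2.1
    uuid:    5ee99582-bb8d-11e2-b8e3-23de375c1d30
    seqno:   8204503945773
    safe_to_bootstrap: 0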

    What TCP ports are used by Percona XtraDB Cluster?

    You may need to open up to four ports if you are using a firewall:

    1. Regular MySQL port (default is 3306).

    2. Port for group communication (default is 4567). It can be changed using the following option:

      wsrep_provider_options ="gmcast.listen_addr=tcp://0.0.0.0:4010; "
      
    3. Port for State Snapshot Transfer (default is 4444). It can be changed using the following option:

      wsrep_sst_receive_address=10.11.12.205:5555
      
    4. Port for Incremental State Transfer (default is port for group communication + 1 or 4568). It can be changed using the following option:

      wsrep_provider_options = "ist.recv_addr=10.11.12.206:7777; "
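
    For example, with firewalld you could open all four default ports on each node (a sketch; adjust the ports if you changed any of the defaults above):

    firewall-cmd --permanent --add-port=3306/tcp   # MySQL client traffic
    firewall-cmd --permanent --add-port=4567/tcp   # Galera group communication
    firewall-cmd --permanent --add-port=4568/tcp   # Incremental State Transfer
    firewall-cmd --permanent --add-port=4444/tcp   # State Snapshot Transfer
    firewall-cmd --reload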
      

    Is there an “async” mode, or are only “sync” commits supported?

    Percona XtraDB Cluster does not support an “async” mode; all commits are synchronous on all nodes. To be precise, the commits are “virtually” synchronous, which means that the transaction must pass certification on the nodes, not physically commit there. Certification is a guarantee that the transaction does not conflict with other transactions on the corresponding node.

    Does it work with regular MySQL replication?

    Yes. On the node you are going to use as the source, enable the log-bin and log-slave-updates options.
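
    For example, a minimal my.cnf fragment on the source node (the server_id value is illustrative and must be unique among all replicating servers):

    [mysqld]
    server_id         = 1
    log_bin           = mysql-bin
    log_slave_updates = ON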

    Why does the init script (/etc/init.d/mysql) not start?

    Try to disable SELinux with the following command:

    echo 0 > /selinux/enforce
    

    What does “nc: invalid option – ‘d’” in the sst.err log file mean?

    This error is specific to Debian and Ubuntu. Percona XtraDB Cluster uses the netcat-openbsd package. This dependency has been fixed, and future releases of Percona XtraDB Cluster will be compatible with any netcat (see bug PXC-941).

    Contact us

    For free technical help, visit the Percona Community Forum.

    To report bugs or submit feature requests, open a JIRA ticket.

    For paid support and managed or consulting services, contact Percona Sales.


    Last update: 2023-01-20
    Percona LLC and/or its affiliates, © 2023