DBMS/MySQL/XtraDBCluster/Install



Description

Percona XtraDB Cluster (PXC) is a highly available and highly scalable MySQL clustering solution. It bundles Percona Server, Percona XtraBackup and the Galera high-availability library. PXC provides synchronous, multi-master, parallel replication: every node in the cluster can read and write simultaneously.

Hostname      IP Address
clust-master  192.168.122.136
clust-node1   192.168.122.178
clust-node2   192.168.122.123
clust-node3   192.168.122.124
clust-node4   192.168.122.250

Cluster installation

The installation is only performed on the cluster nodes: clust-node1, clust-node2, clust-node3 and clust-node4.

  • Install the external repositories:
    rpm -Uvh http://mirrors.ircam.fr/pub/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
    rpm -Uvh  http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
  • Install XtraDB Cluster on the nodes:
    yum install Percona-XtraDB-Cluster-56
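
Before configuring anything, it can be worth confirming on each node that the packages landed and that the Galera provider library referenced later by wsrep_provider is present (a minimal sanity check; exact package names can vary between Percona releases):

rpm -qa | grep -i -e percona -e galera
ls -l /usr/lib64/libgalera_smm.so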

Bootstrapping the cluster

We start by providing every node with a MySQL configuration file: /etc/my.cnf.

Replace @HOSTNAME@ with the node's hostname and @IPADDR@ with its LAN IP address (a substitution sketch is shown after the file).

[client]
port=3306
socket=/var/lib/mysql/mysql.sock
#default-character-set = latin1

[mysqld]
port=3306
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache = 64
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
datadir=/var/lib/mysql
user=mysql
old_passwords=0
long_query_time=1
slow_query_log=1
slow_query_log_file=/var/lib/mysql/@HOSTNAME@-slow.log
relay-log=@HOSTNAME@-relay-bin
connect_timeout = 5
#query_cache_limit = 1M
interactive_timeout=60
wait_timeout=60
explicit_defaults_for_timestamp=1
#default-authentication-plugin=sha256_password
query_cache_type = 0
query_cache_size = 0
log_error=error.log

default_storage_engine=InnoDB

## [REPLICATION SECTION]
server_id=3
log_bin=mysql-bin
log_slave_updates
binlog_format = ROW

## [WSREP SECTION]
## XtraDB Cluster
# Select the schema upgrade method. TOI (Total Order Isolation): DDL is replicated; RSU (Rolling Schema Upgrade): DDL is not replicated. Default: TOI
#wsrep_OSU_method=
# Manage auto_increment variables; Default: ON
#wsrep_auto_increment_control=
# Wait for already-replicated events to be applied on the slave before running further queries. Avoids out-of-sync slaves but introduces extra latency. Default: OFF
#wsrep_causal_reads=
# Generate primary keys automatically if needed on rows. Default: ON
#wsrep_certify_nonPK=
# IP addresses of the cluster nodes (gcomm:// list). Leave empty to bootstrap the node (new cluster).
wsrep_cluster_address = gcomm://192.168.122.178,192.168.122.123,192.168.122.124,192.168.122.250
# Cluster Name, identical on all nodes.
wsrep_cluster_name = TARTARECLUST
# Used to convert LOCK/UNLOCK TABLES statements to BEGIN/COMMIT. Default: OFF
#wsrep_convert_LOCK_to_trx=
# Directory where wsrep provider will store its files. Default: mysql datadir
#wsrep_data_home_dir=
# Send the DBUG option to the wsrep provider.
#wsrep_dbug_option=
# Enable debug. Default: OFF
#wsrep_debug=
# Desync the node from the cluster. Default: OFF
#wsrep_desync=
# Workaround for Drupal/MySQL bug #282555. Default: OFF
#wsrep_drupal_282555_workaround=
# Define a binlog format regardless of session binlog format setting. Default: NONE
#wsrep_forced_binlog_format=
# Control whether LOAD DATA transaction splitting is wanted or not. Default: OFF
#wsrep_load_data_splitting=
# Used to control whether sole cluster conflicts should be logged. Default: OFF
#wsrep_log_conflicts=
# Used to control max number of rows each writeset can contain. Default: 131072 (128K)
#wsrep_max_ws_rows=
# Used to control max writeset size. Default: 1073741824 (1G)
#wsrep_max_ws_size
# Control how many replication events will be grouped together. Default: 0 (no grouping)
#wsrep_mysql_replication_bundle=
# Use to specify network address of the node. Useful for nodes with multiple NICs. Default: Usually set up as primary network interface (eth0)
wsrep_node_address = @IPADDR@
# Address at which node accepts client connections. Default: <wsrep_node_address>:3306
#wsrep_node_incoming_address=
# Node name, unique value for each node
wsrep_node_name=@HOSTNAME@
# Used to set the notification command that server will execute every time cluster membership or local node status changes.
#wsrep_notify_cmd=
# Enable/Disable WSREP replication. When OFF, the server behaves like a standalone MySQL server. Default: ON
#wsrep_on=
# Handling of preordered replication events (like replication from traditional master). Default: OFF
#wsrep_preorder=
# Path to the Galera library. Default: None
wsrep_provider = /usr/lib64/libgalera_smm.so
# Specific configuration options for wsrep provider
#wsrep_provider_options=
# Used to recover GTID. Default: OFF
#wsrep_recover=
# Used to reject queries for that node. Can be useful for maintenance. Default: NONE
#wsrep_reject_queries=
# Control if MyISAM will be replicated or not. Default: OFF
#wsrep_replicate_myisam=
# Sets the number of times autocommitted transactions will be tried in the cluster if it encounters certification errors. Default: 1
#wsrep_retry_autocommit=
# Number of threads that can apply replication transactions in parallel. Default: 1
wsrep_slave_threads=4
# Contains the authentication information needed for the State Snapshot Transfer (SST)
#wsrep_sst_auth=
# Contains the name (wsrep_node_name) of the preferred donor for the SST
#wsrep_sst_donor=
# When enabled, the SST donor node will not accept incoming queries; instead it rejects them with an UNKNOWN COMMAND error code. Useful for disabling the node in a load balancer.
# Default: OFF
#wsrep_sst_donor_rejects_queries=
# Sets the method used for the State Snapshot Transfer (SST). Available options are: xtrabackup, xtrabackup-v2, rsync, mysqldump, custom_script_name, skip.
# Default: xtrabackup-v2 (Recommended too).
wsrep_sst_method=xtrabackup-v2
# Used to configure address on which the node expects the SST. Default: AUTO
#wsrep_sst_receive_address=
# Contain the UUID:seqno value
#wsrep_start_position=

## [INNODB SECTION]
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2
innodb_data_file_path =  ibdata1:50M:autoextend
innodb_log_file_size = 250M
innodb_buffer_pool_size = 128M
#innodb_additional_mem_pool_size = 16M
innodb_table_locks = 0
innodb_log_buffer_size = 32M
innodb_lock_wait_timeout = 60
innodb_thread_concurrency = 4
innodb_commit_concurrency = 8
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=2
innodb_file_per_table=1
#innodb_read_io_threads=4
#innodb_write_io_threads=4
#innodb_io_capacity=200
#innodb_doublewrite = 0
#innodb_support_xa = 0

sync_binlog=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
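
The @HOSTNAME@ and @IPADDR@ placeholders can be filled in with a small script on each node. A minimal sketch, assuming the template above is saved as /etc/my.cnf.tpl (a hypothetical path) and that eth0 carries the LAN address:

HOSTNAME=$(hostname -s)
IPADDR=$(ip -4 addr show dev eth0 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')
sed -e "s/@HOSTNAME@/${HOSTNAME}/g" -e "s/@IPADDR@/${IPADDR}/g" /etc/my.cnf.tpl > /etc/my.cnf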

We can now bootstrap the cluster. The cluster is bootstrapped on a single node only:

/etc/init.d/mysql bootstrap-pxc
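
Before starting the other nodes, it is worth checking that the bootstrap node has formed a primary component of size 1. A minimal check, assuming the local root account can still log in without a password on a fresh installation:

mysql -e 'SHOW STATUS LIKE "wsrep_cluster_%";'
# expected: wsrep_cluster_size = 1 and wsrep_cluster_status = Primary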

On the other nodes, we simply start mysql:

/etc/init.d/mysql start
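
Each joining node receives a State Snapshot Transfer (xtrabackup-v2) from a donor before it becomes available, which can take some time on large datasets. Progress can be followed in the logs defined in the configuration above:

tail -f /var/lib/mysql/error.log   # log_error=error.log, relative to the datadir
tail -f /var/log/mysqld.log        # mysqld_safe log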

We can now check that the cluster is correctly formed by running a query on one of the nodes. The wsrep_cluster_size variable must be equal to the number of nodes in the cluster.

show status like "%wsrep_cluster%"
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_conf_id    | 4                                    |
| wsrep_cluster_size       | 4                                    |
| wsrep_cluster_state_uuid | 96dcbc83-2853-11e4-bf21-3342cf0bfa37 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
Note: Verification — the query must return the same results on the other nodes of the cluster.
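
To run the same check on every node in one pass, a small loop from clust-master can help (a sketch assuming SSH access as root to the nodes and a MySQL root account that can log in locally):

for node in clust-node1 clust-node2 clust-node3 clust-node4; do
    echo "== ${node} =="
    ssh root@${node} 'mysql -e "SHOW STATUS LIKE \"wsrep_cluster_size\";"'
done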

HAProxy

Installing the proxy

On the clust-master server, we install haproxy to act as a bridge to our cluster.

yum install haproxy sysbench

File /etc/haproxy/haproxy.cfg:

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    
    #log         127.0.0.1 local2
    log         127.0.0.1 local0
    log         127.0.0.1 local1 notice

    #chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4096
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats mode 0600 level admin


#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  tcplog
    option                  dontlognull
    #option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 2000

#---------------------------------------------------------------------
# mysql frontend which proxies to the backends
#---------------------------------------------------------------------
frontend mysql-front
        bind *:3306
        mode tcp
        default_backend mysql-back

frontend mysql-fail-front
        bind *:3307
        mode tcp
        default_backend mysql-fail-back

frontend stats-front
        bind *:1936
        mode http
        default_backend stats-back


#---------------------------------------------------------------------
# static backend for serving up mysql
#---------------------------------------------------------------------
#  - maxconn : high limit for the total number of simultaneous connections
#  - check   : enable monitoring
#  - inter   : interval between tests (in milliseconds)
#  - rise    : number of valid checks needed for the server to fully get up
#  - fall    : number of failures supported before declaring that the server has fallen down
#  - port    : default server port
#  - addr    : specific address for the test (default = address server)

backend mysql-back
        mode tcp
        balance leastconn
        option httpchk
        server clust-node1 192.168.122.178:3306 maxconn 20 check port 9200 inter 12000 rise 3 fall 3
        server clust-node2 192.168.122.123:3306 maxconn 20 check port 9200 inter 12000 rise 3 fall 3
        server clust-node3 192.168.122.124:3306 maxconn 20 check port 9200 inter 12000 rise 3 fall 3
        server clust-node4 192.168.122.250:3306 maxconn 20 check port 9200 inter 12000 rise 3 fall 3

backend mysql-fail-back
        mode tcp
        balance leastconn
        option httpchk
        server clust-node1 192.168.122.178:3306 check port 9200 inter 12000 rise 3 fall 3
        server clust-node2 192.168.122.123:3306 check port 9200 inter 12000 rise 3 fall 3 backup
        server clust-node3 192.168.122.124:3306 check port 9200 inter 12000 rise 3 fall 3 backup
        server clust-node4 192.168.122.250:3306 check port 9200 inter 12000 rise 3 fall 3 backup

backend stats-back
        balance roundrobin
        stats enable
        stats hide-version
        stats realm Haproxy\ Statistics
        stats uri /
        stats auth admin:admin
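
Once the configuration is in place, haproxy can be enabled and started on clust-master with the standard EL6 service commands. Note that the health checks on port 9200 will only report the servers as up once clustercheck is reachable on the nodes (see the next section):

chkconfig haproxy on
service haproxy start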

Installation on the cluster nodes

  • Register the clustercheck service in the services file /etc/services:
    sed -i -e '/9200\/tcp/ s#^#mysqlchk        9200/tcp\n#' /etc/services
  • Enable the xinetd super-daemon at boot:
    chkconfig xinetd on
  • Start the xinetd super-daemon (the mysqlchk service it exposes is sketched after this list):
    service xinetd start
  • Grant the MySQL privileges needed by clustercheck:
    GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!'
    
  • Grant MySQL privileges to the mysql client connecting from the HAProxy host (clust-master):
    GRANT ALL ON *.* TO 'root'@'192.168.122.136' IDENTIFIED BY ''
    
  • Reload the MySQL privileges:
    FLUSH PRIVILEGES
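
The check on port 9200 is answered by the clustercheck script through xinetd. The Percona packages normally ship a ready-made /etc/xinetd.d/mysqlchk; if it has to be created by hand, it looks roughly like the following sketch (a reference, not the authoritative file):

# /etc/xinetd.d/mysqlchk
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}

Each node can then be probed from clust-master; clustercheck answers over HTTP, so a plain curl is enough:

curl -i http://clust-node1:9200/   # HTTP 200 when the node is synced, 503 otherwise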
    

Monitoring

  • hatop: http://hatop.googlecode.com/files/hatop-0.7.7.tar.gz
  • Web UI: http://clust-master:1936/
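
A quick end-to-end check through the proxy can also be run from clust-master with the mysql client and sysbench (installed earlier alongside haproxy). A sketch assuming the root grant created above and the sysbench 0.4 series shipped in EPEL 6 (sbtest is sysbench's default database):

mysql -h 127.0.0.1 -P 3306 -u root -e 'SHOW STATUS LIKE "wsrep_cluster_size";'
mysql -h 127.0.0.1 -P 3306 -u root -e 'CREATE DATABASE IF NOT EXISTS sbtest;'
sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root \
         --oltp-table-size=10000 prepare
sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root \
         --oltp-table-size=10000 --num-threads=4 --max-requests=1000 run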