I reckon there’s little sense in running two or more Percona XtraDB Cluster (PXC) nodes on a single physical server other than for educational and testing purposes, but in those cases doing so is still useful. The most popular way of achieving this seems to be through server virtualization, such as using Vagrant boxes. But just as you can have multiple instances of MySQL running in parallel at the OS level, in the form of concurrent mysqld processes, so too can you have multiple Percona XtraDB Cluster nodes. And the way to achieve this is precisely the same: using dedicated datadirs and different ports for each node.

Which ports?

Four TCP ports are used by Percona XtraDB Cluster:

  • the regular MySQL port (default 3306)

  • port for group (Galera) communication (default 4567)

  • port for State Snapshot Transfer, or SST (default 4444)

  • port for Incremental State Transfer, or IST (default is the port for group communication (4567) + 1 = 4568)

Of course, when you have multiple instances on the same server the default values won’t work for all of them, so we need to define new ports for the additional instances and make sure the local firewall, if one is active (iptables, SELinux, …), is open to them.
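
For example, if the additional instance ends up using ports 3307, 5020 and 5021 (as in the configuration shown later in this post), on a server with iptables active you could open them with something along these lines (a minimal sketch, not a complete firewall policy):

iptables -I INPUT -p tcp --dport 3307 -j ACCEPT
iptables -I INPUT -p tcp --dport 5020 -j ACCEPT
iptables -I INPUT -p tcp --dport 5021 -j ACCEPT
service iptables save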

Installing Percona XtraDB Cluster, configuring and starting the first node

My test server was a fresh CentOS 6.5 configured with the Percona yum repository, from which I installed the latest Percona XtraDB Cluster (5.6.20-25.7.888.el6); note that you’ll need the EPEL repository as well to install socat, which is a dependency (see this bug). To avoid confusion, I’ve prevented the mysql service from starting automatically:

chkconfig --level 3 mysql off
chkconfig --del mysql

I could have installed PXC from the tarball, but I decided to do it from the repositories to have all dependencies covered by yum. This is what my initial /etc/my.cnf looked like (note the use of default values):

[mysqld]
datadir = /var/lib/mysql
port=3306
socket=/var/lib/mysql/mysql-node1.sock
pid-file=/var/lib/mysql/mysql-node1.pid
log-error=/var/lib/mysql/mysql-node1.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node1
wsrep_cluster_address=gcomm://

I started by manually bootstrapping the cluster with this single node using the command:

$ mysqld_safe --defaults-file=/etc/my.cnf --wsrep-new-cluster

You should then be able to access this node through the local socket:

$ mysql -S /var/lib/mysql/mysql-node1.sock
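
To confirm it bootstrapped correctly you can check a couple of the wsrep status variables; for a single, freshly bootstrapped node the cluster size should be 1 and the node should report itself as Synced:

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';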

Configuring and starting the second node

Then I created a similar configuration file for the second instance, which I named /etc/my2.cnf, with the following modifications:

[mysqld]
datadir = /var/lib/mysql2
port=3307
socket=/var/lib/mysql2/mysql-node2.sock
pid-file=/var/lib/mysql2/mysql-node2.pid
log-error=/var/lib/mysql2/mysql-node2.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_cluster_address=gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"

Note the use of base_port: by having it defined, port 5020 is used for group communication and 5021 (the one above it) is reserved for IST (it’s the same as using gmcast.listen_addr=tcp://127.0.0.1:5020, just simpler).
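
In configuration terms, the line above could thus also have been written with the explicit listen address (my reading of the equivalence; base_port is simply more concise):

wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:5020;"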

You need to create the datadir for this second instance and set the right permissions on it, otherwise MySQL won’t be able to create some files (like .pid and .err); you don’t need to run the mysql_install_db script, though, since the datadir will be populated during the state transfer from the first node:

$ chown -R mysql:mysql /var/lib/mysql2
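
If the directory doesn’t exist yet, create it first (a plain mkdir of the path defined in /etc/my2.cnf), before running the chown above:

$ mkdir /var/lib/mysql2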

You can then start this second instance with the following command:

$ mysqld_safe --defaults-file=/etc/my2.cnf

While it starts, watch the log to observe how this second node starts, communicates with the primary node, and joins the cluster. In a terminal other than the one where you started the instance, execute:

$ tail -f /var/lib/mysql2/mysql-node2.err
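
Once the log shows the node has reached the Synced state, you can double-check that both nodes see each other by querying the cluster size through the second node’s socket, which should now report a value of 2:

$ mysql -S /var/lib/mysql2/mysql-node2.sock -e "SHOW STATUS LIKE 'wsrep_cluster_size'"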

Remember that at any time you can use mysqladmin to stop the nodes; you only need to provide the right socket as an argument, as follows:

$ mysqladmin -S /var/lib/mysql/mysql-node1.sock shutdown

Finally, once you have the whole cluster up you should edit the my.cnf of the first node with a complete wsrep_cluster_address, as shown in /etc/my2.cnf above.
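
For instance, node1’s /etc/my.cnf could end up carrying the same address list as node2:

wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:5020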

Using mysqld_multi

My last blog post was on running multiple instances of MySQL with mysqld_multi. It applies here as well; the only exception is that you need to make sure to use “wsrep_cluster_address=gcomm://” in the first node whenever you bootstrap the cluster, and to start it before the other nodes.
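
As a rough illustration (a sketch derived from the two configuration files above, with the settings common to both nodes, such as binlog_format, wsrep_provider and wsrep_cluster_name, omitted for brevity), a combined my.cnf for mysqld_multi could group the per-node settings like this:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
datadir                = /var/lib/mysql
port                   = 3306
socket                 = /var/lib/mysql/mysql-node1.sock
wsrep_node_name        = node1
wsrep_cluster_address  = gcomm://127.0.0.1:4567,127.0.0.1:5020

[mysqld2]
datadir                = /var/lib/mysql2
port                   = 3307
socket                 = /var/lib/mysql2/mysql-node2.sock
wsrep_node_name        = node2
wsrep_cluster_address  = gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"

The nodes can then be started and stopped individually with mysqld_multi start 1, mysqld_multi start 2, mysqld_multi stop 1, and so on.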

The only advantage I see in using mysqld_multi is that it facilitates the management (start/stop) of the nodes and concentrates all configuration in a single my.cnf file. In any case, you shouldn’t be running a PXC cluster on a single box for any purpose other than educational.

Adding a second Percona XtraDB Cluster node to a production server

What if you have a production cluster composed of multiple physical servers and you want to add a second node to one of them? It works the same way; you’ll just need to use the server’s IP address when configuring it instead of the loopback network interface. Here’s an example of a PXC cluster initially composed of three nodes: 192.168.70.1, 192.168.70.2, and 192.168.70.3. I’ve added a 4th node running on the server that already hosts the 3rd; the wsrep_cluster_address line looks as follows after the changes:

wsrep_cluster_address = gcomm://192.168.70.1,192.168.70.2,192.168.70.3:4567,192.168.70.3:5020
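
The rest of that 4th node’s configuration follows the same pattern shown in /etc/my2.cnf earlier: its own datadir, MySQL port, node name and base_port, only now alongside the 3rd node on 192.168.70.3. A hypothetical excerpt:

datadir = /var/lib/mysql2
port    = 3307
wsrep_node_name = node4
wsrep_provider_options = "base_port=5020;"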

Additional resources

We have a documentation page on “How to setup 3 node cluster on single box” that contains more details on what I’ve covered above, following a slightly different approach.
