Merge branch '2.2' into develop
@ -1,12 +1,32 @@

# MaxScale CDC Connector

The C++ connector for the [MariaDB MaxScale](https://mariadb.com/products/technology/maxscale)
[CDC system](https://mariadb.com/kb/en/mariadb-enterprise/mariadb-maxscale-22-avrorouter-tutorial/).

## Usage

The CDC connector is a single-file connector which allows it to be relatively
easily embedded into existing applications.

## API Overview

A CDC connection object is prepared by instantiating the `CDC::Connection`
class. To create the actual connection, call the `CDC::Connection::connect`
method of the class.

After the connection has been created, call the `CDC::Connection::read` method
to get a row of data. The `CDC::Row::length` method tells how many values a row
has and `CDC::Row::value` is used to access each of those values. The field name of a
value can be extracted with the `CDC::Row::key` method and the current GTID of a
row of data is retrieved with the `CDC::Row::gtid` method.

To close the connection, destroy the instantiated object.

## Examples

The source code
[contains an example](https://github.com/mariadb-corporation/MaxScale/blob/2.2/connectors/cdc-connector/examples/main.cpp)
that demonstrates basic usage of the MaxScale CDC Connector.

## Dependencies
@ -45,5 +65,5 @@ sudo zypper install -y libjansson-devel openssl-devel cmake make gcc-c++ git

## Building and Packaging

To build and package the connector as a library, follow MaxScale build
instructions with the exception of adding `-DTARGET_COMPONENT=devel` to the
CMake call.
@ -4,7 +4,6 @@

## About MariaDB MaxScale

- [About MariaDB MaxScale](About/About-MaxScale.md)
- [Release Notes](Release-Notes/MaxScale-2.1.5-Release-Notes.md)
- [Changelog](Changelog.md)
- [Limitations](About/Limitations.md)
@ -53,7 +52,6 @@ These tutorials are for specific use cases and module combinations.

Here are tutorials on monitoring and managing MariaDB MaxScale in cluster environments.

- [MariaDB MaxScale HA with Corosync-Pacemaker](Tutorials/MaxScale-HA-with-Corosync-Pacemaker.md)
- [MariaDB MaxScale HA with Lsyncd](Tutorials/MaxScale-HA-with-lsyncd.md)
- [Nagios Plugins for MariaDB MaxScale Tutorial](Tutorials/Nagios-Plugins.md)
@ -115,6 +113,10 @@ Documentation for MaxScale protocol modules.

- [Change Data Capture (CDC) Protocol](Protocols/CDC.md)
- [Change Data Capture (CDC) Users](Protocols/CDC_users.md)

The MaxScale CDC Connector provides a C++ API for consuming data from a CDC system.

- [CDC Connector](Connectors/CDC-Connector.md)

## Authenticators

A short description of the authentication module type can be found in the
@ -43,10 +43,17 @@ All command accept the following global options.

  -h, --hosts    List of MaxScale hosts. The hosts must be in HOST:PORT format
                 and each value must be separated by a comma.
                                          [string] [default: "localhost:8989"]
  -t, --timeout  Request timeout in milliseconds      [number] [default: 10000]
  -q, --quiet    Silence all output                  [boolean] [default: false]
  --tsv          Print tab separated output          [boolean] [default: false]

HTTPS/TLS Options:
  -s, --secure              Enable HTTPS requests    [boolean] [default: false]
  --tls-key                 Path to TLS private key                    [string]
  --tls-cert                Path to TLS public certificate             [string]
  --tls-ca-cert             Path to TLS CA certificate                 [string]
  --tls-verify-server-cert  Whether to verify server TLS certificates
                                                      [boolean] [default: true]

Options:
  --version  Show version number                                      [boolean]
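A hypothetical invocation combining the global and TLS options above; the host names and CA certificate path below are illustrative, not defaults:

```shell
# Query two MaxScale instances over HTTPS, trusting a private CA
# (host names and the CA path are example values):
maxctrl --secure --tls-ca-cert=/etc/maxscale-ca.pem \
        --hosts maxscale1:8989,maxscale2:8989 \
        list servers
```

Note that `--secure` alone enables HTTPS; the `--tls-*` options are only needed when the REST API uses certificates that are not in the system trust store.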
@ -10,6 +10,27 @@ report at [Jira](https://jira.mariadb.org).

## Changed Features

### MaxCtrl Moved to `maxscale` Package

The MaxCtrl client is now a part of the main MaxScale package, `maxscale`. This
means that the `maxctrl` executable is now immediately available upon the
installation of MaxScale.

In the 2.2.1 beta version MaxCtrl was in its own package. If you have a previous
installation of MaxCtrl, please remove it before upgrading to MaxScale 2.2.2.
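On an RPM-based system the removal and upgrade could look like the following sketch; the package manager command depends on your distribution (`zypper`/`apt` equivalents apply):

```shell
# Remove the standalone MaxCtrl package, then upgrade MaxScale
# (yum shown as an example; adjust for your distribution):
yum remove maxctrl
yum upgrade maxscale
```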

### MaxScale C++ CDC Connector Integration

The MaxScale C++ CDC Connector is now distributed as a part of MaxScale. The
connector libraries are in a separate package, `maxscale-cdc-connector`. Refer
to the [CDC Connector documentation](../Connectors/CDC-Connector.md) for more details.

### Output of `show threads` has changed

For each thread, the output now shows what state it is in, how many descriptors
are currently in the thread's epoll instance and how many descriptors in total
have been in it.

## Dropped Features

## New Features
@ -26,6 +47,13 @@ It is now possible to specify what local address MaxScale should

use when connecting to servers. Please refer to the documentation
for [details](../Getting-Started/Configuration-Guide.md#local_address).

### External master support for failover/switchover

Failover/switchover now tries to preserve replication from an external master
server. Check the
[MariaDB Monitor documentation](../Monitors/MariaDB-Monitor.md#external-master-support)
for more information.

## Bug fixes

[Here is a list of bugs fixed in MaxScale 2.2.2.](https://jira.mariadb.org/issues/?jql=project%20%3D%20MXS%20AND%20issuetype%20%3D%20Bug%20AND%20status%20%3D%20Closed%20AND%20fixVersion%20%3D%202.2.2)
@ -33,6 +61,9 @@ for [details](../Getting-Started/Configuration-Guide.md#local_address).

* [MXS-1661](https://jira.mariadb.org/browse/MXS-1661) Excessive logging by MySQLAuth at authentication error (was: MySQLAuth SQLite database can be permanently locked)
* [MXS-1660](https://jira.mariadb.org/browse/MXS-1660) Failure to resolve hostname is considered an error
* [MXS-1654](https://jira.mariadb.org/browse/MXS-1654) MaxScale crashes if env-variables are used without substitute_variables=1 having been defined
* [MXS-1653](https://jira.mariadb.org/browse/MXS-1653) sysbench failed to initialize w/ MaxScale read/write splitter
* [MXS-1647](https://jira.mariadb.org/browse/MXS-1647) Module API version is not checked
* [MXS-1643](https://jira.mariadb.org/browse/MXS-1643) Too many monitor events are triggered
* [MXS-1641](https://jira.mariadb.org/browse/MXS-1641) Fix overflow in master id
* [MXS-1633](https://jira.mariadb.org/browse/MXS-1633) Need remove mutex in sqlite
* [MXS-1630](https://jira.mariadb.org/browse/MXS-1630) MaxCtrl binary are not included by default in MaxScale package
@ -46,6 +77,7 @@ for [details](../Getting-Started/Configuration-Guide.md#local_address).

* [MXS-1586](https://jira.mariadb.org/browse/MXS-1586) Mysqlmon switchover does not immediately detect bad new master
* [MXS-1583](https://jira.mariadb.org/browse/MXS-1583) Database firewall filter failing with multiple users statements in rules file
* [MXS-1539](https://jira.mariadb.org/browse/MXS-1539) Authentication data should be thread specific
* [MXS-1508](https://jira.mariadb.org/browse/MXS-1508) Failover is sometimes triggered on non-simple topologies

## Known Issues and Limitations
@ -1,5 +1,7 @@

# HintRouter

HintRouter was introduced in 2.2 and is still beta.

## Overview

The **HintRouter** module is a simple router intended to operate in conjunction
@ -188,3 +188,68 @@ Aug 11 10:51:57 maxscale2 Keepalived_vrrp[20257]: VRRP_Instance(VI_1) Entering F

Aug 11 10:51:57 maxscale2 Keepalived_vrrp[20257]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug 11 10:51:57 maxscale2 Keepalived_vrrp[20257]: VRRP_Instance(VI_1) Now in FAULT state
```

## MaxScale active/passive-setting

When using multiple MaxScales with replication cluster management features
(failover, switchover, rejoin), only one MaxScale instance should be allowed to
modify the cluster at any given time. This instance should be the one with
MASTER Keepalived status. MaxScale itself does not know its state, but MaxCtrl
(a replacement for MaxAdmin) can set a MaxScale instance to passive mode. As of
version 2.2.2, a passive MaxScale behaves similarly to an active one, with the
distinction that it won't perform failover, switchover or rejoin. Even manual
versions of these commands will fail with an error. The passive/active mode
differences may be expanded in the future.
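The mode switch itself is a single MaxCtrl command, which can also be issued by hand on the relevant instance:

```shell
# Manually set the local MaxScale instance's operating mode:
maxctrl alter maxscale passive true    # stop this instance from modifying the cluster
maxctrl alter maxscale passive false   # allow failover/switchover/rejoin again
```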

To have Keepalived modify the MaxScale operating mode, a notify script is
needed. This script is run whenever Keepalived changes its state. The script
file is defined in the Keepalived configuration file as `notify`.

```
...
virtual_ipaddress {
    192.168.1.13
}
track_script {
    chk_myscript
}
notify /home/user/notify_script.sh
...
```

Keepalived calls the script with three parameters. In our case, only the third
parameter, STATE, is relevant. An example script is below.

```
#!/bin/bash

# Parameters passed by Keepalived: instance type, instance name and the
# new state.
TYPE=$1
NAME=$2
STATE=$3

OUTFILE=/home/user/state.txt

case $STATE in
    "MASTER") echo "Setting this MaxScale node to active mode" > $OUTFILE
              maxctrl alter maxscale passive false
              exit 0
              ;;
    "BACKUP") echo "Setting this MaxScale node to passive mode" > $OUTFILE
              maxctrl alter maxscale passive true
              exit 0
              ;;
    "FAULT")  echo "MaxScale failed the status check." > $OUTFILE
              maxctrl alter maxscale passive true
              exit 0
              ;;
    *)        echo "Unknown state" > $OUTFILE
              exit 1
              ;;
esac
```

The script logs the current state to a text file and sets the operating mode of
MaxScale. The FAULT case also attempts to set MaxScale to passive mode,
although the MaxCtrl command will likely fail.

If all MaxScale/Keepalived instances have a similar notify script, only one
MaxScale should ever be in active mode.
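A notify script of this shape can be dry-run without Keepalived or a running MaxScale by stubbing `maxctrl` on the PATH; everything below uses temporary paths and only sketches the BACKUP branch:

```shell
#!/bin/bash
# Dry-run of the notify logic with a stubbed maxctrl on PATH.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho "maxctrl $@"\n' > "$tmp/maxctrl"
chmod +x "$tmp/maxctrl"
export PATH="$tmp:$PATH"

OUTFILE="$tmp/state.txt"
STATE="BACKUP"   # what Keepalived would pass as the third argument

case $STATE in
    "MASTER") echo "Setting this MaxScale node to active mode" > "$OUTFILE"
              maxctrl alter maxscale passive false
              ;;
    "BACKUP"|"FAULT") echo "Setting this MaxScale node to passive mode" > "$OUTFILE"
              maxctrl alter maxscale passive true
              ;;
    *)        echo "Unknown state" > "$OUTFILE"
              ;;
esac

cat "$OUTFILE"
```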
@ -1,528 +0,0 @@

# How to make MariaDB MaxScale High Available

This document shows an example of a Pacemaker / Corosync setup with MariaDB MaxScale based on Linux CentOS 6.5, using three virtual servers and unicast heartbeat mode, with the following minimum requirements:

- The MariaDB MaxScale process is started/stopped and monitored via an /etc/init.d/maxscale script that is LSB compatible, in order to be managed by the Pacemaker resource manager
- A virtual IP is set providing access to the MariaDB MaxScale process; it can be assigned to any one of the cluster nodes
- Basic knowledge of Pacemaker/Corosync and the crmsh command line tool

Please note this solution is a quick setup example that may not be suited for all production environments.

## Clustering Software installation

On each node in the cluster, perform the following steps.

### Add clustering repos to yum

```
# vi /etc/yum.repos.d/ha-clustering.repo
```

Add the following to the file.

```
[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
```

### Install the software

```
# yum install pacemaker corosync crmsh
```

Package versions used:

```
Package pacemaker-1.1.10-14.el6_5.3.x86_64
Package corosync-1.4.5-2.4.x86_64
Package crmsh-2.0+git46-1.1.x86_64
```

### Assign hostname on each node

In this example the three names used for the nodes are: node1, node2, node3.

```
[root@server1 ~]# hostname node1
...
[root@server2 ~]# hostname node2
...
[root@server3 ~]# hostname node3
```

For each node, add all the server names into `/etc/hosts`.

```
[root@node3 ~]# vi /etc/hosts
10.74.14.39    node1
10.228.103.72  node2
10.35.15.26    node3 current-node
...
[root@node1 ~]# vi /etc/hosts
10.74.14.39    node1 current-node
10.228.103.72  node2
10.35.15.26    node3
```

**Note**: add _current-node_ as an alias for the current node in each of the /etc/hosts files.

### Prepare authkey for optional cryptographic use

On one of the nodes, say node2, run the corosync-keygen utility and follow the instructions.

```
[root@node2 ~]# corosync-keygen

Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
```

After completion the key will be found in /etc/corosync/authkey.

### Prepare the corosync configuration file

Using node2 as an example:

```
[root@node2 ~]# vi /etc/corosync/corosync.conf
```

Add the following to the file:

```
# Please read the corosync.conf.5 manual page

compatibility: whitetank

totem {
    version: 2
    secauth: off
    interface {
        member {
            memberaddr: node1
        }
        member {
            memberaddr: node2
        }
        member {
            memberaddr: node3
        }
        ringnumber: 0
        bindnetaddr: current-node
        mcastport: 5405
        ttl: 1
    }
    transport: udpu
}

logging {
    fileline: off
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

# this will start Pacemaker processes

service {
    ver: 0
    name: pacemaker
}
```

**Note**: in this example:

- unicast UDP is used
- bindnetaddr for the corosync process is current-node, which has the right value on each node due to the alias added in /etc/hosts above
- Pacemaker processes are started by the corosync daemon, so there is no need to launch them via /etc/init.d/pacemaker start

### Copy configuration files and auth key on each of the other nodes

```
[root@node2 ~]# scp /etc/corosync/* root@node1:/etc/corosync/
...
[root@node2 ~]# scp /etc/corosync/* root@nodeN:/etc/corosync/
```

Corosync needs port 5405 to be opened. Configure any firewall or iptables accordingly. For a quick start, just disable iptables on each node:

```
[root@node2 ~]# service iptables stop
...
[root@nodeN ~]# service iptables stop
```
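Alternatively, rather than disabling the firewall entirely, only the Corosync port can be opened (iptables syntax for CentOS 6 shown; adjust to your firewall tooling):

```shell
# Allow Corosync's UDP traffic on port 5405 and persist the rule:
iptables -I INPUT -p udp --dport 5405 -j ACCEPT
service iptables save
```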

### Start Corosync on each node

```
[root@node2 ~] #/etc/init.d/corosync start
...
[root@nodeN ~] #/etc/init.d/corosync start
```

Check that the corosync daemon is successfully bound to port 5405.

```
[root@node2 ~] #netstat -na | grep 5405
udp        0      0 10.228.103.72:5405      0.0.0.0:*
```

Check if the other nodes are reachable with the nc utility and its UDP option (-u).

```
[root@node2 ~] #echo "check ..." | nc -u node1 5405
[root@node2 ~] #echo "check ..." | nc -u node3 5405
...
[root@node1 ~] #echo "check ..." | nc -u node2 5405
[root@node1 ~] #echo "check ..." | nc -u node3 5405
```

If the following message is displayed, there is an issue with communication between the nodes.

```
nc: Write error: Connection refused
```

This is most likely to be an issue with the firewall configuration on your nodes. Check and resolve any issues with your firewall configuration.

### Check the cluster status from any node

```
[root@node3 ~]# crm status
```

The command should produce the following.

```
[root@node3 ~]# crm status
Last updated: Mon Jun 30 12:47:53 2014
Last change: Mon Jun 30 12:47:39 2014 via crmd on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
0 Resources configured

Online: [ node1 node2 node3 ]
```

For the basic setup, disable the following properties:

- stonith
- quorum policy

```
[root@node3 ~]# crm configure property 'stonith-enabled'='false'
[root@node3 ~]# crm configure property 'no-quorum-policy'='ignore'
```

For additional information see:

[http://www.clusterlabs.org/doc/crm_fencing.html](http://www.clusterlabs.org/doc/crm_fencing.html)

[http://clusterlabs.org/doc/](http://clusterlabs.org/doc/)

The configuration is automatically updated on every node.

Check it from another node, say node1:

```
[root@node1 ~]# crm configure show
node node1
node node2
node node3
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6_5.3-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=3 \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    placement-strategy=balanced \
    default-resource-stickiness=infinity
```

The Corosync / Pacemaker cluster is now ready to be configured to manage resources.

## MariaDB MaxScale init script

The MariaDB MaxScale init script in `/etc/init.d/maxscale` allows you to start, stop, restart and monitor the MariaDB MaxScale process running on the system.

```
[root@node1 ~]# /etc/init.d/maxscale
Usage: /etc/init.d/maxscale {start|stop|status|restart|condrestart|reload}
```

- Start

```
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale: maxscale (pid 25892) is running...     [  OK  ]
```

- Start again

```
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale: found maxscale (pid 25892) is running. [  OK  ]
```

- Stop

```
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale:                                        [  OK  ]
```

- Stop again

```
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale:                                        [FAILED]
```

- Status (MaxScale not running)

```
[root@node1 ~]# /etc/init.d/maxscale status
MaxScale is stopped                                       [FAILED]
```

The script exit code for "status" is 3.

- Status (MaxScale is running)

```
[root@node1 ~]# /etc/init.d/maxscale status
Checking MaxScale status: MaxScale (pid 25953) is running.[  OK  ]
```

The script exit code for "status" is 0.
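Pacemaker's LSB resource agent relies on exactly these exit codes to decide whether the resource is running. A tiny self-contained sketch of the convention (the function below merely stands in for the real init script):

```shell
# LSB 'status' convention: exit code 0 = running, 3 = stopped.
maxscale_status() { return 3; }   # stand-in for '/etc/init.d/maxscale status'

if maxscale_status; then
    echo "running"
else
    rc=$?
    echo "stopped (exit code $rc)"
fi
```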

Read the following for additional information about LSB init scripts:

[http://www.linux-ha.org/wiki/LSB_Resource_Agents](http://www.linux-ha.org/wiki/LSB_Resource_Agents)

After checking that the init scripts for MariaDB MaxScale work, it is possible to configure MariaDB MaxScale for HA via Pacemaker.

# Configure MariaDB MaxScale for HA with Pacemaker

```
[root@node2 ~]# crm configure primitive MaxScale lsb:maxscale \
op monitor interval="10s" timeout="15s" \
op start interval="0" timeout="15s" \
op stop interval="0" timeout="30s"
```

The MaxScale resource will now be started.

```
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:15:34 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

MaxScale (lsb:maxscale): Started node1
```

## Basic use cases

### Resource restarted after a failure

In this example the MariaDB MaxScale PID is 26114; kill the process immediately.

```
[root@node2 ~]# kill -9 26114
...
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:16:11 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

Failed actions:
    MaxScale_monitor_15000 on node1 'not running' (7): call=19, status=complete, last-rc-change='Mon Jun 30 13:16:14 2014', queued=0ms, exec=0ms
```

**Note**: the _MaxScale_monitor_ failed action.

After a few seconds it will be started again.

```
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:21:12 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

MaxScale (lsb:maxscale): Started node1
```

### The resource cannot be migrated to node1 due to a failure

First, migrate the resource to another node, say node3.

```
[root@node1 ~]# crm resource migrate MaxScale node3
...

Online: [ node1 node2 node3 ]

Failed actions:
    MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
```

**Note**: the _MaxScale_start_ failed action on node1. After a few seconds:

```
[root@node3 ~]# crm status
Last updated: Mon Jun 30 13:35:00 2014
Last change: Mon Jun 30 13:31:13 2014 via crm_resource on node3
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

MaxScale (lsb:maxscale): Started node2

Failed actions:
    MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
```

MaxScale has been successfully started on another node (node2).

**Note**: failed actions remain in the output of crm status. With `crm resource cleanup MaxScale` it is possible to clean up the messages:

```
[root@node1 ~]# crm resource cleanup MaxScale
Cleaning up MaxScale on node1
Cleaning up MaxScale on node2
Cleaning up MaxScale on node3
```

The cleaned status is visible from other nodes as well.

```
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:38:18 2014
Last change: Mon Jun 30 13:38:17 2014 via crmd on node3
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

MaxScale (lsb:maxscale): Started node2
```

## Add a Virtual IP (VIP) to the cluster

It's possible to add a virtual IP to the cluster: the MariaDB MaxScale process will then be contacted only via this IP, which can move across nodes in case one of them fails.

The setup is very easy. Assuming an additional IP address is available and can be added to one of the nodes, this is the new configuration to add.

```
[root@node2 ~]# crm configure primitive maxscale_vip ocf:heartbeat:IPaddr2 params ip=192.168.122.125 op monitor interval=10s
```

The MariaDB MaxScale process and the VIP must run on the same node, so it is mandatory to add them to the group 'maxscale_service'.

```
[root@node2 ~]# crm configure group maxscale_service maxscale_vip MaxScale
```

The following is the final configuration.

```
[root@node3 ~]# crm configure show
node node1
node node2
node node3
primitive MaxScale lsb:maxscale \
    op monitor interval=15s timeout=10s \
    op start interval=0 timeout=15s \
    op stop interval=0 timeout=30s
primitive maxscale_vip IPaddr2 \
    params ip=192.168.122.125 \
    op monitor interval=10s
group maxscale_service maxscale_vip MaxScale \
    meta target-role=Started
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6_5.3-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=3 \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    placement-strategy=balanced \
    last-lrm-refresh=1404125486
```

Check the resource status.

```
[root@node1 ~]# crm status
Last updated: Mon Jun 30 13:51:29 2014
Last change: Mon Jun 30 13:51:27 2014 via crmd on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726

3 Nodes configured, 3 expected votes
2 Resources configured

Online: [ node1 node2 node3 ]

Resource Group: maxscale_service
    maxscale_vip (ocf::heartbeat:IPaddr2): Started node2
    MaxScale (lsb:maxscale): Started node2
```

With both resources on node2, the MariaDB MaxScale service is now reachable via the configured VIP address 192.168.122.125.

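As a quick check, a client can point at the VIP instead of a node address; the listener port and user below are purely illustrative, since no listener configuration is shown in this tutorial:

```shell
# Connect through the floating VIP (port 4006 and the user are example values):
mysql -h 192.168.122.125 -P 4006 -u maxskysql -p
```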
@ -26,3 +26,9 @@ string in slashes, e.g. `match=/^select/` defines the pattern `^select`.

Binlog server automatically accepts GTID connections from MariaDB 10 slave servers
by saving all incoming GTIDs into a SQLite map database.

### MaxCtrl Included in Main Package

In the 2.2.1 beta version MaxCtrl was in its own package, whereas in 2.2.2
it is in the main `maxscale` package. If you have a previous installation
of MaxCtrl, please remove it before upgrading to MaxScale 2.2.2.
@ -16,6 +16,7 @@ var colors = require('colors/safe');

var Table = require('cli-table');
var consoleLib = require('console')
var os = require('os')
var fs = require('fs')

module.exports = function() {
@ -57,6 +58,13 @@ module.exports = function() {

      argv.reject(err)
    })
  }, function(err) {
    if (err.error.cert) {
      // TLS errors cause extremely verbose errors, truncate the certificate details
      // from the error output
      delete err.error.cert
    }

    // One of the HTTP request pings to the cluster failed, log the error
    argv.reject(JSON.stringify(err.error, null, 4))
  })
@ -181,6 +189,31 @@ module.exports = function() {

    return Promise.resolve(colors.green('OK'))
  }

  this.setTlsCerts = function(args) {
    args.agentOptions = {}
    if (this.argv['tls-key']) {
      args.agentOptions.key = fs.readFileSync(this.argv['tls-key'])
    }

    if (this.argv['tls-cert']) {
      args.agentOptions.cert = fs.readFileSync(this.argv['tls-cert'])
    }

    if (this.argv['tls-ca-cert']) {
      args.agentOptions.ca = fs.readFileSync(this.argv['tls-ca-cert'])
    }

    if (this.argv['tls-passphrase']) {
      args.agentOptions.passphrase = this.argv['tls-passphrase']
    }

    if (!this.argv['tls-verify-server-cert']) {
      // Disable server certificate verification with a no-op identity check
      args.agentOptions.checkServerIdentity = function() {
      }
    }
  }

  // Helper for executing requests and handling their responses, returns a
  // promise that is fulfilled when all requests successfully complete. The
  // promise is rejected if any of the requests fails.
@ -189,6 +222,7 @@ module.exports = function() {

    args.uri = getUri(host, this.argv.secure, resource)
    args.json = true
    args.timeout = this.argv.timeout
    setTlsCerts(args)

    return request(args)
      .then(function(res) {
@ -275,7 +309,11 @@ function pingCluster(hosts) {

  var promises = []

  hosts.forEach(function(i) {
    var args = {}
    args.uri = getUri(i, this.argv.secure, '')
    args.json = true
    setTlsCerts(args)
    promises.push(request(args))
  })

  return Promise.all(promises)

@ -21,7 +21,7 @@ program

    .strict()
    .exitProcess(false)
    .showHelpOnFail(false)
    .group(['u', 'p', 'h', 't', 'q', 'tsv'], 'Global Options:')
    .option('u', {
        alias:'user',
        global: true,
@ -42,27 +42,45 @@ program
|
||||
default: 'localhost:8989',
|
||||
type: 'string'
|
||||
})
|
||||
.option('s', {
|
||||
alias: 'secure',
|
||||
describe: 'Enable HTTPS requests',
|
||||
default: 'false',
|
||||
type: 'boolean'
|
||||
})
|
||||
.option('t', {
|
||||
alias: 'timeout',
|
||||
describe: 'Request timeout in milliseconds',
|
||||
default: '10000',
|
||||
default: 10000,
|
||||
type: 'number'
|
||||
})
|
||||
.option('q', {
|
||||
alias: 'quiet',
|
||||
describe: 'Silence all output',
|
||||
default: 'false',
|
||||
default: false,
|
||||
type: 'boolean'
|
||||
})
|
||||
.option('tsv', {
|
||||
describe: 'Print tab separated output',
|
||||
default: 'false',
|
||||
default: false,
|
||||
type: 'boolean'
|
||||
})
|
||||
.group(['s', 'tls-key', 'tls-cert', 'tls-ca-cert', 'tls-verify-server-cert'], 'HTTPS/TLS Options:')
|
||||
.option('s', {
|
||||
alias: 'secure',
|
||||
describe: 'Enable HTTPS requests',
|
||||
default: false,
|
||||
type: 'boolean'
|
||||
})
|
||||
.option('tls-key', {
|
||||
describe: 'Path to TLS private key',
|
||||
type: 'string'
|
||||
})
|
||||
.option('tls-cert', {
|
||||
describe: 'Path to TLS public certificate',
|
||||
type: 'string'
|
||||
})
|
||||
.option('tls-ca-cert', {
|
||||
describe: 'Path to TLS CA certificate',
|
||||
type: 'string'
|
||||
})
|
||||
.option('tls-verify-server-cert', {
|
||||
describe: 'Whether to verify server TLS certificates',
|
||||
default: true,
|
||||
type: 'boolean'
|
||||
})
|
||||
|
||||
|
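A plausible reading of the default-value changes above (`'false'` → `false`, `'10000'` → `10000`): a quoted default is a non-empty string and therefore truthy, so a boolean flag that defaults to the string `'false'` behaves as if it were always enabled. A minimal sketch of the pitfall, illustrated in Python since JavaScript strings behave the same way in an `if` test:

```python
# The string 'false' is truthy; only the boolean False is falsy.
def is_enabled(value) -> bool:
    # Mimics a bare `if (argv.secure)` style check.
    return bool(value)

print(is_enabled("false"))  # True  - the pitfall
print(is_enabled(False))    # False - the corrected default
```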
@@ -822,6 +822,9 @@ add_test_executable(setup_binlog_gtid.cpp setup_binlog_gtid setup_binlog_gtid LA
# TODO: make it working with zypper and apt, move part of KDC setup to MDBCI
add_test_executable(kerberos_setup.cpp kerberos_setup kerberos LABELS HEAVY gssapi REPL_BACKEND)

# Configures 'keepalived' on two Maxscale machines and tried failover
add_test_executable(keepalived.cpp keepalived keepalived LABELS REPL_BACKEND TWO_MAXSCALES)

# enable after fixing MXS-419
# add_test_executable(mxs419_lots_of_connections.cpp mxs419_lots_of_connections replication LABELS REPL_BACKEND)
93  maxscale-system-test/cnf/maxscale.cnf.template.keepalived.000  Executable file

@@ -0,0 +1,93 @@
[maxscale]
threads=###threads###

[MySQL Monitor]
type=monitor
module=mysqlmon
###repl51###
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
monitor_interval=1000
detect_stale_master=false
detect_standalone_master=false

[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
router_options=slave_selection_criteria=LEAST_GLOBAL_CONNECTIONS
max_slave_connections=1
version_string=10.2-server1

[Read Connection Router Slave]
type=service
router=readconnroute
router_options=slave
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
version_string=10.2-server1

[Read Connection Router Master]
type=service
router=readconnroute
router_options=master
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
version_string=10.2-server1

[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4006

[Read Connection Listener Slave]
type=listener
service=Read Connection Router Slave
protocol=MySQLClient
port=4009

[Read Connection Listener Master]
type=listener
service=Read Connection Router Master
protocol=MySQLClient
port=4008

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default

[server1]
type=server
address=###node_server_IP_1###
port=###node_server_port_1###
protocol=MySQLBackend

[server2]
type=server
address=###node_server_IP_2###
port=###node_server_port_2###
protocol=MySQLBackend

[server3]
type=server
address=###node_server_IP_3###
port=###node_server_port_3###
protocol=MySQLBackend

[server4]
type=server
address=###node_server_IP_4###
port=###node_server_port_4###
protocol=MySQLBackend
93  maxscale-system-test/cnf/maxscale.cnf.template.keepalived.001  Executable file

@@ -0,0 +1,93 @@
[maxscale]
threads=###threads###

[MySQL Monitor]
type=monitor
module=mysqlmon
###repl51###
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
monitor_interval=1000
detect_stale_master=false
detect_standalone_master=false

[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
router_options=slave_selection_criteria=LEAST_GLOBAL_CONNECTIONS
max_slave_connections=1
version_string=10.2-server2

[Read Connection Router Slave]
type=service
router=readconnroute
router_options=slave
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
version_string=10.2-server2

[Read Connection Router Master]
type=service
router=readconnroute
router_options=master
servers=server1,server2,server3,server4
user=maxskysql
passwd=skysql
version_string=10.2-server2

[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4006

[Read Connection Listener Slave]
type=listener
service=Read Connection Router Slave
protocol=MySQLClient
port=4009

[Read Connection Listener Master]
type=listener
service=Read Connection Router Master
protocol=MySQLClient
port=4008

[CLI]
type=service
router=cli

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default

[server1]
type=server
address=###node_server_IP_1###
port=###node_server_port_1###
protocol=MySQLBackend

[server2]
type=server
address=###node_server_IP_2###
port=###node_server_port_2###
protocol=MySQLBackend

[server3]
type=server
address=###node_server_IP_3###
port=###node_server_port_3###
protocol=MySQLBackend

[server4]
type=server
address=###node_server_IP_4###
port=###node_server_port_4###
protocol=MySQLBackend
@@ -147,13 +147,16 @@ bool generate_traffic_and_check(TestConnections& test, MYSQL* conn, int insert_c
    timespec short_sleep;
    short_sleep.tv_sec = 0;
    short_sleep.tv_nsec = 100000000;

    mysql_query(conn, "BEGIN");

    for (int i = 0; i < insert_count; i++)
    {
        test.try_query(conn, INSERT, inserts++);
        nanosleep(&short_sleep, NULL);
    }
    sleep(1);
    bool rval = false;

    mysql_query(conn, SELECT);
    MYSQL_RES *res = mysql_store_result(conn);
    test.assert(res != NULL, "Query did not return a result set");

@@ -184,6 +187,7 @@ bool generate_traffic_and_check(TestConnections& test, MYSQL* conn, int insert_c
        }
        mysql_free_result(res);
    }
    mysql_query(conn, "COMMIT");
    return rval;
}
146  maxscale-system-test/keepalived.cpp  Normal file

@@ -0,0 +1,146 @@
/**
 * @file keepalived.cpp keepalived Test of two Maxscale + keepalived failover
 *
 * - 'version_string' configured to be different for every Maxscale
 * - configure keepalived for two nodes (uses xxx.xxx.xxx.253 as a virtual IP
 *   where xxx.xxx.xxx. - first 3 numbers from client IP)
 * - suspend Maxscale 1
 * - wait and check version_string from Maxscale on virtual IP, expect 10.2-server2
 * - resume Maxscale 1, suspend Maxscale 2
 * - wait and check version_string from Maxscale on virtual IP, expect 10.2-server1
 * - resume Maxscale 2
 * TODO: replace 'yum' call with executing Chef recipe
 */

#include <iostream>
#include "testconnections.h"

#define FAILOVER_WAIT_TIME 5

char virtual_ip[16];

char * print_version_string(TestConnections * Test)
{
    MYSQL * keepalived_conn = open_conn(Test->maxscales->rwsplit_port[0], virtual_ip,
                                        Test->maxscales->user_name, Test->maxscales->password, Test->ssl);
    const char * version_string;
    mariadb_get_info(keepalived_conn, MARIADB_CONNECTION_SERVER_VERSION, (void *)&version_string);
    Test->tprintf("%s\n", version_string);
    mysql_close(keepalived_conn);
    return((char*) version_string);
}

int main(int argc, char *argv[])
{
    int i;
    char * version;

    TestConnections * Test = new TestConnections(argc, argv);
    Test->set_timeout(10);

    Test->tprintf("Maxscale_N %d\n", Test->maxscales->N);
    if (Test->maxscales->N < 2)
    {
        Test->tprintf("At least 2 Maxscales are needed for this test. Exiting\n");
        exit(0);
    }

    Test->check_maxscale_alive(0);
    Test->check_maxscale_alive(1);

    // Get test client IP, replace last number in it with 253 and use it as Virtual IP
    char client_ip[24];
    char * last_dot;
    Test->get_client_ip(0, client_ip);
    last_dot = client_ip;
    Test->tprintf("My IP is %s\n", client_ip);
    for (i = 0; i < 3; i++)
    {
        last_dot = strstr(last_dot, ".");
        last_dot = &last_dot[1];
    }
    last_dot[0] = '\0';
    Test->tprintf("First part of IP is %s\n", client_ip);

    sprintf(virtual_ip, "%s253", client_ip);

    for (i = 0; i < Test->maxscales->N; i++)
    {
        std::string src = std::string(test_dir) + "/keepalived_cnf/" + std::to_string(i + 1) + ".conf";
        std::string cp_cmd = "cp " + std::string(Test->maxscales->access_homedir[i]) + std::to_string(i + 1) + ".conf " +
                             " /etc/keepalived/keepalived.conf";
        Test->tprintf("%s\n", src.c_str());
        Test->tprintf("%s\n", cp_cmd.c_str());
        Test->maxscales->ssh_node(i, "yum install -y keepalived", true);
        Test->maxscales->copy_to_node(i, src.c_str(), Test->maxscales->access_homedir[i]);
        Test->maxscales->ssh_node(i, cp_cmd.c_str(), true);
        Test->maxscales->ssh_node_f(i, true, "sed -i \"s/###virtual_ip###/%s/\" /etc/keepalived/keepalived.conf", virtual_ip);
        std::string script_src = std::string(test_dir) + "/keepalived_cnf/is_maxscale_running.sh";
        std::string script_cp_cmd = "cp " + std::string(Test->maxscales->access_homedir[i]) + "is_maxscale_running.sh /usr/bin/";
        Test->maxscales->copy_to_node(i, script_src.c_str(), Test->maxscales->access_homedir[i]);
        Test->maxscales->ssh_node(i, script_cp_cmd.c_str(), true);
        Test->maxscales->ssh_node(i, "sudo service keepalived restart", true);
    }

    print_version_string(Test);

    Test->tprintf("Suspend Maxscale 000 machine and waiting\n");
    system(Test->maxscales->stop_vm_command[0]);
    sleep(FAILOVER_WAIT_TIME);

    version = print_version_string(Test);
    if (strcmp(version, "10.2-server2") != 0)
    {
        Test->add_result(false, "Failover did not happen");
    }

    Test->tprintf("Resume Maxscale 000 machine and waiting\n");
    system(Test->maxscales->start_vm_command[0]);
    sleep(FAILOVER_WAIT_TIME);
    print_version_string(Test);

    Test->tprintf("Suspend Maxscale 001 machine and waiting\n");
    system(Test->maxscales->stop_vm_command[1]);
    sleep(FAILOVER_WAIT_TIME);

    version = print_version_string(Test);
    if (strcmp(version, "10.2-server1") != 0)
    {
        Test->add_result(false, "Failover did not happen");
    }

    print_version_string(Test);
    Test->tprintf("Resume Maxscale 001 machine and waiting\n");
    system(Test->maxscales->start_vm_command[1]);
    sleep(FAILOVER_WAIT_TIME);
    print_version_string(Test);

    Test->tprintf("Stop Maxscale on 000 machine\n");
    Test->stop_maxscale(0);
    sleep(FAILOVER_WAIT_TIME);
    version = print_version_string(Test);
    if (strcmp(version, "10.2-server2") != 0)
    {
        Test->add_result(false, "Failover did not happen");
    }

    Test->tprintf("Start back Maxscale on 000 machine\n");
    Test->start_maxscale(0);
    sleep(FAILOVER_WAIT_TIME);

    Test->tprintf("Stop Maxscale on 001 machine\n");
    Test->stop_maxscale(1);
    sleep(FAILOVER_WAIT_TIME);
    version = print_version_string(Test);
    if (strcmp(version, "10.2-server1") != 0)
    {
        Test->add_result(false, "Failover did not happen");
    }

    int rval = Test->global_result;
    delete Test;
    return rval;
}
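keepalived.cpp above derives the virtual IP by keeping the first three octets of the client IP and appending 253. The same derivation can be sketched in a few lines (Python, hypothetical helper name, shown for illustration only):

```python
def virtual_ip_from(client_ip: str) -> str:
    # Replace the last octet of an IPv4 address with 253,
    # mirroring the strstr()/sprintf() logic in keepalived.cpp.
    first_three = client_ip.rsplit(".", 1)[0]
    return first_three + ".253"

print(virtual_ip_from("192.168.100.42"))  # 192.168.100.253
```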
24  maxscale-system-test/keepalived_cnf/is_maxscale_running.sh  Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
fileName="maxadmin_output.txt"
rm $fileName
timeout 2s maxadmin list servers > $fileName
to_result=$?
if [ $to_result -ge 1 ]
then
    echo Timed out or error, timeout returned $to_result
    exit 3
else
    echo MaxAdmin success, rval is $to_result
    echo Checking maxadmin output sanity
    grep1=$(grep server1 $fileName)
    grep2=$(grep server2 $fileName)

    if [ "$grep1" ] && [ "$grep2" ]
    then
        echo All is fine
        exit 0
    else
        echo Something is wrong
        exit 3
    fi
fi
@@ -278,6 +278,7 @@ void run_test(TestConnections& test, const vector<string>& ips)
    create_user_and_grants(test, zUser1, zPassword1, local_ip);
    create_user_and_grants(test, zUser2, zPassword2, ip2);
    create_user_and_grants(test, zUser2, zPassword2, local_ip);
    test.repl->sync_slaves();

    test.tprintf("\n");
    test.tprintf("Testing default; alice should be able to access, bob not.");
@@ -20,6 +20,7 @@ export maxscale_log_dir="/var/log/maxscale/"
# Number of nodes
export galera_N=`cat "$MDBCI_VM_PATH/$config_name"_network_config | grep galera | grep network | wc -l`
export node_N=`cat "$MDBCI_VM_PATH/$config_name"_network_config | grep node | grep network | wc -l`
export maxscale_N=`cat "$MDBCI_VM_PATH/$config_name"_network_config | grep maxscale | grep network | wc -l`
sed "s/^/export /g" "$MDBCI_VM_PATH/$config_name"_network_config > "$curr_dir"/"$config_name"_network_config_export
source "$curr_dir"/"$config_name"_network_config_export

@@ -40,7 +41,7 @@ export maxscale_password="skysql"

export maxadmin_password="mariadb"

for prefix in "node" "galera"
for prefix in "node" "galera" "maxscale"
do
    N_var="$prefix"_N
    Nx=${!N_var}

@@ -77,8 +78,8 @@ do
        eval 'export $stop_cmd_var="$mysql_exe stop "'
    fi

    eval 'export "$prefix"_"$num"_start_vm_command="cd $mdbci_dir/$config_name;vagrant up node_$num --provider=$provider; cd $curr_dir"'
    eval 'export "$prefix"_"$num"_kill_vm_command="cd $mdbci_dir/$config_name;vagrant halt node_$num --provider=$provider; cd $curr_dir"'
    eval 'export "$prefix"_"$num"_start_vm_command="cd ${MDBCI_VM_PATH}/$config_name;vagrant resume ${prefix}_$num ; cd $curr_dir"'
    eval 'export "$prefix"_"$num"_stop_vm_command="cd ${MDBCI_VM_PATH}/$config_name;vagrant suspend ${prefix}_$num ; cd $curr_dir"'
done
done
@@ -489,7 +489,15 @@ void check_server_statuses(TestConnections& test)
    masters += check_server_status(test, 3);
    masters += check_server_status(test, 4);

    test.assert(masters == 1, "Unpexpected number of masters: %d", masters);
    if (masters == 0)
    {
        test.tprintf("No master, checking that autofail has been turned off.");
        test.log_includes(0, "disabling automatic failover");
    }
    else if (masters != 1)
    {
        test.assert(!true, "Unexpected number of masters: %d", masters);
    }
}

void run(TestConnections& test)
@@ -129,9 +129,9 @@ int main(int argc, char** argv)
        return test.global_result;
    }

    // Manually set current master to replicate from the old master to quickly fix the cluster.
    cout << "Setting server " << master_id_new << " to replicate from server 1. Auto-rejoin should redirect "
            "servers so that in the end server 1 is master and all others are slaves." << endl;
    // Set current master to replicate from the old master. The old master should remain as the current master.
    cout << "Setting server " << master_id_new << " to replicate from server 1. Server " << master_id_new
         << " should remain as the master because server 1 doesn't have the latest event it has." << endl;
    const char CHANGE_CMD_FMT[] = "CHANGE MASTER TO MASTER_HOST = '%s', MASTER_PORT = %d, "
                                  "MASTER_USE_GTID = current_pos, MASTER_USER='repl', MASTER_PASSWORD = 'repl';";
    char cmd[256];

@@ -143,9 +143,10 @@ int main(int argc, char** argv)
    sleep(5);
    get_output(test);

    expect(test, "server1", "Master", "Running");
    expect(test, "server2", "Slave", "Running");
    expect(test, "server1", "Running");
    expect(test, "server2", "Master", "Running");
    expect(test, "server3", "Slave", "Running");
    expect(test, "server4", "Slave", "Running");
    test.repl->fix_replication();
    return test.global_result;
}
@@ -197,7 +197,6 @@ int Nodes::ssh_node_f(int node, bool sudo, const char* format, ...)
    va_start(valist, format);
    vsnprintf(sys, message_len + 1, format, valist);
    va_end(valist);

    int result = ssh_node(node, sys, sudo);
    free(sys);
    return (result);

@@ -428,6 +427,40 @@ int Nodes::read_basic_env()
        {
            sprintf(hostname[i], "%s", IP[i]);
        }

        sprintf(env_name, "%s_%03d_start_vm_command", prefix, i);
        env = getenv(env_name);
        if (env == NULL)
        {
            sprintf(env_name, "%s_start_vm_command", prefix);
            env = getenv(env_name);
        }

        if (env != NULL)
        {
            sprintf(start_vm_command[i], "%s", env);
        }
        else
        {
            sprintf(start_vm_command[i], "exit 0");
        }

        sprintf(env_name, "%s_%03d_stop_vm_command", prefix, i);
        env = getenv(env_name);
        if (env == NULL)
        {
            sprintf(env_name, "%s_stop_vm_command", prefix);
            env = getenv(env_name);
        }

        if (env != NULL)
        {
            sprintf(stop_vm_command[i], "%s", env);
        }
        else
        {
            sprintf(stop_vm_command[i], "exit 0");
        }
    }
}
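`Nodes::read_basic_env()` above looks for a per-node `<prefix>_<NNN>_start_vm_command` (or `_stop_vm_command`) variable, falls back to the prefix-wide variable, and finally to a no-op `exit 0`. The lookup order can be sketched as follows (Python, hypothetical function name, for illustration):

```python
import os

def vm_command(prefix: str, node: int, action: str) -> str:
    # Resolve a VM control command the way read_basic_env() does:
    # per-node variable first, then the prefix-wide one, then a no-op.
    per_node = os.environ.get(f"{prefix}_{node:03d}_{action}_vm_command")
    generic = os.environ.get(f"{prefix}_{action}_vm_command")
    return per_node or generic or "exit 0"

os.environ["maxscale_000_stop_vm_command"] = "vagrant suspend maxscale_000"
print(vm_command("maxscale", 0, "stop"))  # vagrant suspend maxscale_000
print(vm_command("maxscale", 1, "stop"))  # exit 0
```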
@@ -64,6 +64,16 @@ public:

    char hostname[256][1024];

    /**
     * @brief stop_vm_command Command to suspend VM
     */
    char stop_vm_command[256][1024];

    /**
     * @brief start_vm_command Command to resume VM
     */
    char start_vm_command[256][1024];

    /**
     * @brief User name to access backend nodes
     */
@@ -6,6 +6,7 @@
#include <time.h>
#include <signal.h>
#include <execinfo.h>
#include <sys/stat.h>
#include <sstream>

#include "mariadb_func.h"

@@ -281,7 +282,7 @@ TestConnections::TestConnections(int argc, char *argv[]):

    if (maxscale_init)
    {
        init_maxscale(0);
        init_maxscales();
    }

    if (backend_ssl)
@@ -529,11 +530,26 @@ const char * get_template_name(char * test_name)

void TestConnections::process_template(int m, const char *template_name, const char *dest)
{
    struct stat stb;
    char str[4096];
    char template_file[1024];

    char extended_template_file[1024];

    sprintf(template_file, "%s/cnf/maxscale.cnf.template.%s", test_dir, template_name);
    sprintf(extended_template_file, "%s.%03d", template_file, m);

    if (stat((char*)extended_template_file, &stb) == 0)
    {
        strcpy(template_file, extended_template_file);
    }
    tprintf("Template file is %s\n", template_file);

    sprintf(str, "cp %s maxscale.cnf", template_file);
    if (verbose)
    {
        tprintf("Executing '%s' command\n", str);
    }
    if (system(str) != 0)
    {
        tprintf("Error copying maxscale.cnf template\n");

@@ -595,12 +611,19 @@ void TestConnections::process_template(int m, const char *template_name, const c
    maxscales->copy_to_node_legacy((char *) "maxscale.cnf", (char *) dest, m);
}

int TestConnections::init_maxscales()
{
    for (int i = 0; i < maxscales->N; i++)
    {
        init_maxscale(i);
    }
}

int TestConnections::init_maxscale(int m)
{
    const char * template_name = get_template_name(test_name);
    tprintf("Template is %s\n", template_name);
    process_template(m, template_name, maxscales->access_homedir[m]);

    process_template(m, template_name, maxscales->access_homedir[m]);
    maxscales->ssh_node_f(m, true,
                          "cp maxscale.cnf %s;rm -rf %s/certs;mkdir -m a+wrx %s/certs;",
                          maxscales->maxscale_cnf[m],

@@ -614,7 +637,6 @@ int TestConnections::init_maxscale(int m)
    sprintf(str, "cp %s/ssl-cert/* .", test_dir);
    system(str);

    maxscales->ssh_node_f(m, true,
                          "chown maxscale:maxscale -R %s/certs;"
                          "chmod 664 %s/certs/*.pem;"
@@ -263,11 +263,16 @@ public:

    /**
     * @brief InitMaxscale Copies MaxSclae.cnf and start MaxScale
     * @param m Number of Maxscale node
     * @return 0 if case of success
     */
    int init_maxscale(int m = 0);

    /**
     * @brief InitMaxscale Copies MaxSclae.cnf and start MaxScale on all Maxscale nodes
     * @return 0 if case of success
     */
    int init_maxscales();

    /**
     * @brief start_binlog configure first node as Master, Second as slave connected to Master and others as slave connected to MaxScale binlog router
@@ -1,4 +1,4 @@
add_library(mariadbmon SHARED mysql_mon.cc)
add_library(mariadbmon SHARED mariadbmon.cc)
target_link_libraries(mariadbmon maxscale-common)
add_dependencies(mariadbmon pcre2)
set_target_properties(mariadbmon PROPERTIES VERSION "1.4.0")
@@ -12,7 +12,7 @@
 */

/**
 * @file mysql_mon.c - A MySQL replication cluster monitor
 * @file A MariaDB replication cluster monitor
 */

#define MXS_MODULE_NAME "mariadbmon"

@@ -811,8 +811,8 @@ extern "C"

    MXS_MODULE* MXS_CREATE_MODULE()
    {
        MXS_NOTICE("Initialise the MySQL Monitor module.");
        static const char ARG_MONITOR_DESC[] = "MySQL Monitor name (from configuration file)";
        MXS_NOTICE("Initialise the MariaDB Monitor module.");
        static const char ARG_MONITOR_DESC[] = "Monitor name (from configuration file)";
        static modulecmd_arg_type_t switchover_argv[] =
        {
            {

@@ -865,7 +865,7 @@ extern "C"
            MXS_MODULE_API_MONITOR,
            MXS_MODULE_GA,
            MXS_MONITOR_VERSION,
            "A MySQL Master/Slave replication monitor",
            "A MariaDB Master/Slave replication monitor",
            "V1.5.0",
            MXS_NO_MODULE_CAPABILITIES,
            &MyObject,

@@ -939,7 +939,7 @@ void info_free_func(void *val)
/**
 * @brief Helper function that initializes the server info hashtable
 *
 * @param handle MySQL monitor handle
 * @param handle MariaDB monitor handle
 * @param database List of monitored databases
 * @return True on success, false if initialization failed. At the moment
 * initialization can only fail if memory allocation fails.

@@ -1671,7 +1671,7 @@ static MXS_MONITORED_SERVER *build_mysql51_replication_tree(MXS_MONITOR *mon)
/**
 * Monitor an individual server
 *
 * @param handle The MySQL Monitor object
 * @param handle The Monitor object
 * @param database The database to probe
 */
static void

@@ -2534,7 +2534,7 @@ monitorMain(void *arg)
            }
        }

        /* Do now the heartbeat replication set/get for MySQL Replication Consistency */
        /* Generate the replication heartbeat event by performing an update */
        if (replication_heartbeat &&
            root_master &&
            (SERVER_IS_MASTER(root_master->server) ||

@@ -2563,7 +2563,8 @@ monitorMain(void *arg)

        // Do not auto-join servers on this monitor loop if a failover (or any other cluster modification)
        // has been performed, as server states have not been updated yet. It will happen next iteration.
        if (handle->auto_rejoin && !failover_performed && cluster_can_be_joined(handle))
        if (!config_get_global_options()->passive && handle->auto_rejoin &&
            !failover_performed && cluster_can_be_joined(handle))
        {
            // Check if any servers should be autojoined to the cluster
            ServerVector joinable_servers;

@@ -2598,10 +2599,11 @@ monitorMain(void *arg)
}

/**
 * Fetch a MySQL node by node_id
 * Fetch a node by node_id
 *
 * @param ptr The list of servers to monitor
 * @param node_id The MySQL server_id to fetch
 * @param node_id The server_id to fetch
 *
 * @return The server with the required server_id
 */
static MXS_MONITORED_SERVER *

@@ -2621,10 +2623,10 @@ getServerByNodeId(MXS_MONITORED_SERVER *ptr, long node_id)
}

/**
 * Fetch a MySQL slave node from a node_id
 * Fetch a slave node from a node_id
 *
 * @param ptr The list of servers to monitor
 * @param node_id The MySQL server_id to fetch
 * @param node_id The server_id to fetch
 * @param slave_down_setting Whether to accept or reject slaves which are down
 * @return The slave server of this node_id
 */

@@ -2899,9 +2901,8 @@ static void set_slave_heartbeat(MXS_MONITOR* mon, MXS_MONITORED_SERVER *database

/*******
 * This function computes the replication tree
 * from a set of MySQL Master/Slave monitored servers
 * and returns the root server with SERVER_MASTER bit.
 * The tree is computed even for servers in 'maintenance' mode.
 * from a set of monitored servers and returns the root server with
 * SERVER_MASTER bit. The tree is computed even for servers in 'maintenance' mode.
 *
 * @param handle The monitor handle
 * @param num_servers The number of servers monitored

@@ -3938,7 +3939,7 @@ static bool query_one_row(MXS_MONITORED_SERVER *database, const char* query, uns
    if (columns != expected_cols)
    {
        mysql_free_result(result);
        MXS_ERROR("Unexpected result for '%s'. Expected %d columns, got %d. MySQL Version: %s",
        MXS_ERROR("Unexpected result for '%s'. Expected %d columns, got %d. Server version: %s",
                  query, expected_cols, columns, database->server->version_string);
    }
    else

@@ -4248,7 +4249,7 @@ static bool switchover_start_slave(MYSQL_MONITOR* mon, MXS_MONITORED_SERVER* old
}

/**
 * Get MySQL connection error strings from all the given servers, form one string.
 * Get MariaDB connection error strings from all the given servers, form one string.
 *
 * @param slaves Servers with errors
 * @return Concatenated string.
@@ -485,18 +485,7 @@ static bool route_stored_query(RWSplitSession *rses)
        if (!routeQuery((MXS_ROUTER*)rses->router, (MXS_ROUTER_SESSION*)rses, query_queue))
        {
            rval = false;
            char* sql = modutil_get_SQL(query_queue);

            if (sql)
            {
                MXS_ERROR("Routing query \"%s\" failed.", sql);
                MXS_FREE(sql);
            }
            else
            {
                MXS_ERROR("Failed to route query.");
            }
            gwbuf_free(query_queue);
            MXS_ERROR("Failed to route queued query.");
        }

        if (rses->query_queue == NULL)