Complete addition of Markdown documents

This commit is contained in:
counterpoint 2015-01-28 12:12:23 +00:00
parent e2d2a5b7de
commit 2e0ecc2a86
17 changed files with 7520 additions and 0 deletions

View File

@ -0,0 +1,104 @@
# Limitations and Known Issues within MaxScale
The purpose of this documentation is to provide a central location that documents known issues and limitations within the MaxScale product and the plugins that form part of that product. Since limitations may relate to specific plugins or to MaxScale as a whole, this document is divided into a number of sections, the purpose of which is to isolate the limitations to the components that exhibit them.
## Limitations in the MaxScale core
This section describes the limitations that are common to all configurations of plugins with MaxScale.
## Limitations with MySQL Protocol support
* Compression
* SSL
Neither capability is included in the MySQL server handshake.
* LOAD DATA LOCAL INFILE is currently not supported
## Limitations with MySQL Master/Slave Replication monitoring
## Limitations with Galera Cluster Monitoring
Master selection is based only on MIN(wsrep_local_index); no other server parameter is taken into account.
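As a quick illustration of what the monitor compares, the index can be queried on any node with the standard Galera status variable (this sketch uses the plain MariaDB client; the value returned is specific to each node):
MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_local_index';
The node reporting the lowest value is the one the monitor selects as master.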
## Limitations in the connection router
If the master changes (i.e. a new master is promoted) during the lifetime of a connection, the router cannot detect the change.
## Limitations in the Read/Write Splitter
### Scale-out limitations
In a master-slave replication cluster, read-only queries are also routed to the master in the following situations:
* they are executed inside an open transaction
* they are executed as prepared statements
* the statement includes a stored procedure or a UDF call
### Limitations in client session handling
Some of the queries that the client sends are routed to all backends instead of being sent to just one server. These queries include "USE <db name>" and "SET autocommit=0", among many others. The Read/Write Splitter sends a copy of these queries to each backend server and forwards the first reply it receives to the client. Below is a list of MySQL commands which we call session commands:
COM_INIT_DB (USE <db name> creates this)
COM_CHANGE_USER
COM_STMT_CLOSE
COM_STMT_SEND_LONG_DATA
COM_STMT_RESET
COM_STMT_PREPARE
Also these are session commands:
COM_QUIT (no response)
COM_REFRESH
COM_DEBUG
COM_PING
In addition there are query types which belong to the same group:
SQLCOM_CHANGE_DB
SQLCOM_DEALLOCATE_PREPARE
SQLCOM_PREPARE
SQLCOM_SET_OPTION
SELECT ..INTO variable|OUTFILE|DUMPFILE
Then there are queries which modify session characteristics, listed as derived, internal RWSplit types:
QUERY_TYPE_ENABLE_AUTOCOMMIT
QUERY_TYPE_DISABLE_AUTOCOMMIT
There is a possibility for misbehavior: if "USE mytable" was executed on one of the slaves and it failed, the failure may be due to replication lag rather than the object not existing. Thus the same command may produce different results on different backend servers, and this disparity goes undetected.
The above-mentioned behavior can be partially controlled with the RWSplit configuration parameter
use_sql_variables_in=[master|all] (master)
Server-side session variables are referred to as SQL variables. If "master" is set, or no value is given, SQL variables are read and written on the master only. Autocommit values and prepared statements are always routed to all nodes.
NOTE: If a variable is written as part of a write query, the statement is treated as a write query and is not routed to all servers. For example, INSERT INTO test.t1 VALUES (@myvar:= 7).
Examples:
If a new database "db" has been created and the client executes "USE db", and the statement is routed to a slave before the CREATE DATABASE clause has been replicated to all slaves, there is a risk of executing the query in the wrong database. Similarly, if any response that RWSplit sends back to the client differs from that of the master, there is a risk of misbehavior.
The most likely reasons are related to replication lag, but it is also possible that a slave fails to execute a statement because of some non-fatal, temporary failure while execution of the same command succeeds on the other backends.
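As a sketch only, the parameter is placed in the service section of the configuration file; the service and server names below are illustrative and not taken from this document:
[RW Split Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=maxscalepwd
use_sql_variables_in=all
With all, SQL variables are read and written on all servers; with master (the behaviour described above when no value is set) they are handled on the master only.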
## Authentication Related Limitations
MySQL old-style passwords are not supported.

View File

@ -0,0 +1,140 @@
MaxScale Release Notes
1.0.4 GA
This document details the changes in version 1.0.4 since the release of the 1.0.2 Release Candidate of the MaxScale product.
# New Features
No new features have been introduced since the release candidate.
# Bug Fixes
A number of bug fixes have been applied between the 1.0.2 release candidate and this release. The table below lists the bugs that have been resolved. The details for each of these may be found at bugs.mariadb.com.
<table>
<tr>
<td>ID</td>
<td>Summary</td>
</tr>
<tr>
<td>644</td>
<td>Buffers that were cloned using the gwbuf_clone routine failed to initialise the buffer lock structure correctly.</td>
</tr>
<tr>
<td>643</td>
<td>Recursive filter definitions in the configuration file could cause MaxScale to loop</td>
</tr>
<tr>
<td>665</td>
<td>An access to memory that had already been freed could be made within the MaxScale core</td>
</tr>
<tr>
<td>664</td>
<td>MySQL Authentication code could access memory that had already been freed.</td>
</tr>
<tr>
<td>673</td>
<td>MaxScale could crash if it had an empty user table and the MaxAdmin show dbusers command was run</td>
</tr>
<tr>
<td>670</td>
<td>The tee filter could lose statements on the branch service if the branch service was significantly slower at executing statements than the main service.</td>
</tr>
<tr>
<td>653</td>
<td>Memory corruption could occur with extremely long hostnames in the mysql.user table.</td>
</tr>
<tr>
<td>657</td>
<td>If the branch service of a tee filter shut down unexpectedly, MaxScale could fail</td>
</tr>
<tr>
<td>654</td>
<td>Missing quotes in MaxAdmin show dbusers command could cause MaxAdmin to crash</td>
</tr>
<tr>
<td>677</td>
<td>A race condition existed in the tee filter client reply handling</td>
</tr>
<tr>
<td>658</td>
<td>The readconnroute router did not correctly close sessions when a backend database failed</td>
</tr>
<tr>
<td>662</td>
<td>MaxScale startup hangs if no backend servers respond</td>
</tr>
<tr>
<td>676</td>
<td>MaxScale writes a log entry, "Write to backend failed. Session closed." when changing default database via readwritesplit with max_slave_connections != 100%</td>
</tr>
<tr>
<td>650</td>
<td>Tee filter does not correctly detect missing branch service</td>
</tr>
<tr>
<td>645</td>
<td>Tee filter can hang MaxScale if the read/write splitter is used</td>
</tr>
<tr>
<td>678</td>
<td>Tee filter does not always send full query to branch service</td>
</tr>
<tr>
<td>679</td>
<td>A shared pointer in the service was leading to misleading service states</td>
</tr>
<tr>
<td>680</td>
<td>The Read/Write Splitter can not load users if there are no databases available at startup</td>
</tr>
<tr>
<td>681</td>
<td>The Read/Write Splitter could crash if the value of max_slave_connections was set to a low percentage and only a small number of backend servers were available</td>
</tr>
</table>
# Known Issues
There are a number of bugs and known limitations within this version of MaxScale; the most serious of these are listed below.
* The SQL construct "LOAD DATA LOCAL INFILE" is not fully supported.
* The Read/Write Splitter is a little too strict when it receives errors from slave servers during execution of session commands. This can result in sessions being terminated in situations in which MaxScale could recover without terminating the sessions.
* MaxScale can not manage authentication that uses wildcard matching in hostnames in the mysql.user table of the backend database. The only wildcards that can be used are in IP address entries.
* When users have different passwords based on the host from which they connect MaxScale is unable to determine which password it should use to connect to the backend database. This results in failed connections and unusable usernames in MaxScale.
# Packaging
Both RPM and Debian packages are available for MaxScale. In addition to the tar-based releases previously distributed, we now provide packages for:
* CentOS/RedHat 5
* CentOS/RedHat 6
* CentOS/RedHat 7
* Debian 6
* Debian 7
* Ubuntu 12.04 LTS
* Ubuntu 13.10
* Ubuntu 14.04 LTS
* Fedora 19
* Fedora 20
* OpenSuSE 13
# MaxScale Home Default Value
The installation assumes that the default value for the environment variable MAXSCALE_HOME is set to /usr/local/skysql/maxscale. This is hard coded in the service startup file that is placed in /etc/init.d/maxscale by the installation process.

View File

@ -1,8 +1,30 @@
# Contents
## About MaxScale
- [Release Notes 1.0.4](About/MaxScale-1.0.4-Release-Notes.md)
- [Limitations](About/Limitations.md)
## Getting Started
- [Getting Started with MaxScale](Getting-Started/Getting-Started-With-MaxScale.md)
- [Configuration Guide](Getting-Started/Configuration-Guide.md)
## Reference
- [MaxAdmin](Reference/MaxAdmin.md)
- [MaxScale HA with Corosync-Pacemaker](Reference/MaxScale-HA-with-Corosync-Pacemaker.md)
- [How Errors are Handled in MaxScale](Reference/How-errors-are-handled-in-MaxScale.md)
- [Debug and Diagnostic Support](Reference/Debug-And-Diagnostic-Support.md)
## Tutorials
- [Administration Tutorial](Tutorials/Administration-Tutorial.md)
- [Filter Tutorial](Tutorials/Filter-Tutorial.md)
- [Galera Cluster Connection Routing Tutorial](Tutorials/Galera-Cluster-Connection-Routing-Tutorial.md)
- [Galera Cluster Read-Write Splitting Tutorial](Tutorials/Galera-Cluster-Read-Write-Splitting-Tutorial.md)
- [MySQL Replication Connection Routing Tutorial](Tutorials/MySQL-Replication-Connection-Routing-Tutorial.md)
- [MySQL Replication Read-Write Splitting Tutorial](Tutorials/MySQL-Replication-Read-Write-Splitting-Tutorial.md)
## Filters

File diff suppressed because it is too large Load Diff

Binary file not shown.

After

Width:  |  Height:  |  Size: 13 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 23 KiB

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,56 @@
# How errors are handled in MaxScale
This document describes how errors are handled in MaxScale, its protocol modules and routers.
Assume a client, MaxScale, and a master/slave replication cluster.
An "error" can be due to failed authentication, routing error (unsupported query type etc.), or backend failure.
## Authentication error
Authentication is a relatively complex phase at the beginning of session creation. Roughly speaking, the client protocol module has loaded user information from the backend so that it can authenticate the client without consulting the backend. When the client sends authentication data to MaxScale, the data is compared against the backend's user data in the client protocol module. If authentication fails, the client protocol module refreshes the backend data in case it had become obsolete since the last refresh. If authentication still fails after the refresh, an authentication error occurs.
Close sequence starts from mysql_client.c:gw_read_client_event where
1. session state is set to SESSION_STATE_STOPPING
2. dcb_close is called for client DCB
1. client DCB is removed from epoll set and state is set to DCB_STATE_NOPOLLING
2. client protocol’s close is called (gw_client_close)
* protocol struct is done’d
* router’s closeSession is called (includes calling dcb_close for backends)
3. dcb_call_callback is called for client DCB with DCB_REASON_CLOSE
4. client DCB is set to zombies list
Each call to dcb_close in closeSession repeats sub-steps 1-4 of step 2.
## Routing errors
### Invalid capabilities returned by router
When the client protocol module receives a query from the client, the protocol state is (typically) MYSQL_IDLE. The protocol state is checked in mysql_client.c:gw_read_client_event. The first place where a hard error may occur is when the router capabilities are read. If the router response is invalid (anything other than RCAP_TYPE_PACKET_INPUT or RCAP_TYPE_STMT_INPUT), an error is logged and the session is closed.
### Backend failure
mysql_client.c:gw_read_client_event calls either route_by_statement or SESSION_ROUTE_QUERY directly, which calls the routeQuery function of the head session's router. routeQuery returns 1 on success and 0 on error. Success here means that the query was routed and a reply will be sent to the client, while an error means that routing failed because of a backend (server/servers/service) failure or because of a side effect of a backend failure.
In case of backend failure, an error is returned to the client and handleError is called to resolve the backend problem. handleError is called with the action ERRACT_NEW_CONNECTION, which tells the error handler that it should try to find a replacement for the failed backend. The handler returns true if there are enough backend servers for the session's needs. If the handler returns false, the session cannot continue processing further queries and will be closed. The client is sent an error message and dcb_close is called for the client DCB.
Close sequence is similar to that described above from phase #2 onward.
Reasons for "backend failure" in rwsplit:
* the router has rses_closed == true because another thread has detected a failure and started to close the session
* the master has disappeared; it was demoted to a slave, for example
### Router error
In cases where SESSION_ROUTE_QUERY has returned successfully (=1), the query may still not be successfully processed in the backend, or even sent to it. It is possible that the router fails to route the particular query but there is no error that would prevent the session from continuing. In this case the router handles the error silently by creating a MySQL error and adding it to the first available backend's (incoming) event queue, where it is found and sent to the client (clientReply).

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,658 @@
How to Make MaxScale Highly Available
Corosync/Pacemaker setup
& MaxScale init script
Massimiliano Pinto
Last Updated: 4th August 2014
# Contents
Overview
Clustering Software installation
MaxScale init script
Configure MaxScale for HA
Use case: failed resource is restarted
Use case: failed resource migration on a node is started in another one
Add a Virtual IP (VIP) to the cluster
# Overview
This document shows an example of a Pacemaker / Corosync setup with MaxScale based on Linux CentOS 6.5, using three virtual servers and unicast heartbeat mode, with the following minimum requirements:
- The MaxScale process is started/stopped and monitored via the /etc/init.d/maxscale script, which is LSB compatible so that it can be managed by the Pacemaker resource manager
- A virtual IP is set up to provide access to the MaxScale process; it may be assigned to any one of the cluster nodes
- Basic knowledge of Pacemaker/Corosync and the crmsh command line tool is assumed
Please note this solution is a quick setup example that may not be suitable for all production environments.
# Clustering Software installation
On each node in the cluster do the following steps:
(1) Add clustering repos to yum
# vi /etc/yum.repos.d/ha-clustering.repo
Add the following to the file
[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
(2) Install the software
# yum install pacemaker corosync crmsh
Package versions used:
Package **pacemaker**-1.1.10-14.el6_5.3.x86_64
Package **corosync**-1.4.5-2.4.x86_64
Package **crmsh**-2.0+git46-1.1.x86_64
(3) Assign hostname on each node
In this example the three names used for the nodes are:
**node1, node2, node3**
# hostname **node1**
...
# hostname nodeN
(4) For each node add server names in /etc/hosts
[root@node3 ~]# vi /etc/hosts
10.74.14.39 node1
10.228.103.72 node2
10.35.15.26 node3 current-node
[root@node1 ~]# vi /etc/hosts
10.74.14.39 node1 current-node
10.228.103.72 node2
10.35.15.26 node3
...
**Please note**: add **current-node** as an alias for the current node in each of the /etc/hosts files.
(5) Prepare authkey for optional cryptographic use
On one of the nodes, say node2, run the corosync-keygen utility and follow the prompts:
[root@node2 ~]# corosync-keygen
Corosync Cluster Engine Authentication key generator. Gathering 1024 bits for key from /dev/random. Press keys on your keyboard to generate entropy.
After completion the key will be found in /etc/corosync/authkey.
(6) Prepare the corosync configuration file
Using node2 as an example:
[root@node2 ~]# vi /etc/corosync/corosync.conf
Add the following to the file:
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off
interface {
member {
memberaddr: node1
}
member {
memberaddr: node2
}
member {
memberaddr: node3
}
ringnumber: 0
bindnetaddr: current-node
mcastport: 5405
ttl: 1
}
transport: udpu
}
logging {
fileline: off
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
# this will start Pacemaker processes
service {
ver: 0
name: pacemaker
}
**Please note** in this example:
- unicast UDP is used
- bindnetaddr for the corosync process is current-node, which resolves to the right value on each node due to the alias added in /etc/hosts above
- Pacemaker processes are started by the corosync daemon, so there is no need to launch it via /etc/init.d/pacemaker start
(7) copy configuration files and auth key on each of the other nodes
[root@node2 ~]# scp /etc/corosync/* root@node1:/etc/corosync/
[root@node2 ~]# scp /etc/corosync/* root@nodeN:/etc/corosync/
...
(8) Corosync needs port 5405 to be opened:
- configure any firewall or iptables accordingly
For a quick start just disable iptables on each node:
[root@node2 ~]# service iptables stop
[root@nodeN ~]# service iptables stop
(9) Start Corosync on each node:
[root@node2 ~] #/etc/init.d/corosync start
[root@nodeN ~] #/etc/init.d/corosync start
and check the corosync daemon is successfully bound to port 5405:
[root@node2 ~] #netstat -na | grep 5405
udp 0 0 10.228.103.72:5405 0.0.0.0:*
Check that the other nodes are reachable with the nc utility using UDP (-u):
[root@node2 ~] #echo "check ..." | nc -u node1 5405
[root@node2 ~] #echo "check ..." | nc -u node3 5405
...
[root@node1 ~] #echo "check ..." | nc -u node2 5405
[root@node1 ~] #echo "check ..." | nc -u node3 5405
If the following message is displayed:
**nc: Write error: Connection refused**
there is an issue with communication between the nodes. This is most likely an issue with the firewall configuration on your nodes; check and resolve any issues with your firewall configuration.
(10) Check the cluster status, from any node
[root@node3 ~]# crm status
After a while this will be the output:
[root@node3 ~]# crm status
Last updated: Mon Jun 30 12:47:53 2014
Last change: Mon Jun 30 12:47:39 2014 via crmd on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
0 Resources configured
Online: [ node1 node2 node3 ]
For the basic setup disable the following properties:
- stonith
- quorum policy
[root@node3 ~]# crm configure property 'stonith-enabled'='false'
[root@node3 ~]# crm configure property 'no-quorum-policy'='ignore'
For more information see:
[http://www.clusterlabs.org/doc/crm_fencing.html](http://www.clusterlabs.org/doc/crm_fencing.html)
[http://clusterlabs.org/doc/](http://clusterlabs.org/doc/)
The configuration is automatically updated on every node.
Check it from another node, say node1:
[root@node1 ~]# crm configure show
node node1
node node2
node node3
property cib-bootstrap-options: \
dc-version=1.1.10-14.el6_5.3-368c726 \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=3 \
stonith-enabled=false \
no-quorum-policy=ignore \
placement-strategy=balanced \
default-resource-stickiness=infinity
The Corosync / Pacemaker cluster is ready to be configured to manage resources.
# MaxScale init script /etc/init.d/maxscale
The MaxScale /etc/init.d/maxscale script allows you to start/stop/restart and monitor the MaxScale process running on the system.
Edit it and modify the **MAXSCALE_BASEDIR** to match the installation directory you choose when you installed MaxScale.
**Note**:
It may be necessary to modify other variables, such as
MAXSCALE_BIN, MAXSCALE_HOME, MAXSCALE_PIDFILE and LD_LIBRARY_PATH, for a non-standard setup.
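As an illustration only, the relevant assignments near the top of the script might look like the following; the paths are examples based on the default installation and may differ in the script shipped with your package:
MAXSCALE_BASEDIR=/usr/local/skysql/maxscale
MAXSCALE_BIN=$MAXSCALE_BASEDIR/bin/maxscale
MAXSCALE_HOME=$MAXSCALE_BASEDIR
MAXSCALE_PIDFILE=$MAXSCALE_HOME/log/maxscale.pid
LD_LIBRARY_PATH=$MAXSCALE_BASEDIR/lib:$LD_LIBRARY_PATH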
[root@node1 ~]# /etc/init.d/maxscale
Usage: /etc/init.d/maxscale {start|stop|status|restart|condrestart|reload}
- Start
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale: maxscale (pid 25892) is running... [ OK ]
- Start again
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale: found maxscale (pid 25892) is running.[ OK ]
- Stop
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale: [ OK ]
- Stop again
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale: [FAILED]
- Status (MaxScale not running)
[root@node1 ~]# /etc/init.d/maxscale status
MaxScale is stopped [FAILED]
The script exit code for "status" is 3
- Status (MaxScale is running)
[root@node1 ~]# /etc/init.d/maxscale status
Checking MaxScale status: MaxScale (pid 25953) is running.[ OK ]
The script exit code for "status" is 0
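A quick way to confirm the exit codes from the shell (not part of the script's own output, just a verification step):
[root@node1 ~]# /etc/init.d/maxscale status; echo "exit code: $?"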
Note: the MaxScale script is LSB compatible and returns the proper exit code for each action.
For more information see:
[http://www.linux-ha.org/wiki/LSB_Resource_Agents](http://www.linux-ha.org/wiki/LSB_Resource_Agents)
After checking that MaxScale is correctly managed by the /etc/init.d/maxscale script, it is possible to configure MaxScale HA via Pacemaker.
# Configure MaxScale for HA with Pacemaker
[root@node2 ~]# crm configure primitive MaxScale lsb:maxscale \
op monitor interval="10s" timeout="15s" \
op start interval="0" timeout="15s" \
op stop interval="0" timeout="30s"
MaxScale resource will be started:
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:15:34 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
Basic use cases:
# 1. Resource restarted after a failure:
The MaxScale PID is stored in $MAXSCALE_PIDFILE, which is $MAXSCALE_HOME/log/maxscale.pid.
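The current PID can be read directly from that file (assuming the default MAXSCALE_HOME):
[root@node2 ~]# cat $MAXSCALE_HOME/log/maxscale.pid
26114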
In this example the PID is 26114; kill the process immediately:
[root@node2 ~]# kill -9 26114
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:16:11 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured
Online: [ node1 node2 node3 ]
Failed actions:
MaxScale_monitor_15000 on node1 'not running' (7): call=19, status=complete, last-rc-change='Mon Jun 30 13:16:14 2014', queued=0ms, exec=0ms
**Note** the **MaxScale_monitor** failed action
After a few seconds it will be started again:
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:21:12 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node1
# 2. The resource cannot be migrated to node1 due to a failure:
First, migrate the resource to another node, say node3:
[root@node1 ~]# crm resource migrate MaxScale node3
...
Online: [ node1 node2 node3 ]
Failed actions:
MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
Note the **MaxScale_start** failed action on node1, and after a few seconds
[root@node3 ~]# crm status
Last updated: Mon Jun 30 13:35:00 2014
Last change: Mon Jun 30 13:31:13 2014 via crm_resource on node3
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node2
Failed actions:
MaxScale_start_0 on node1 'not running' (7): call=76, status=complete, last-rc-change='Mon Jun 30 13:31:17 2014', queued=2015ms, exec=0ms
MaxScale has been successfully started on a new node: node2.
**Note**: Failed actions remain in the output of crm status.
With "crm resource cleanup MaxScale" is possible to cleanup the messages:
[root@node1 ~]# crm resource cleanup MaxScale
Cleaning up MaxScale on node1
Cleaning up MaxScale on node2
Cleaning up MaxScale on node3
The cleaned status is visible from other nodes as well:
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:38:18 2014
Last change: Mon Jun 30 13:38:17 2014 via crmd on node3
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured
Online: [ node1 node2 node3 ]
MaxScale (lsb:maxscale): Started node2
# Add a Virtual IP (VIP) to the cluster
It is possible to add a virtual IP to the cluster:
the MaxScale process will then only be contacted via this IP, which may move across nodes along with the MaxScale process.
The setup is very easy:
assuming an additional IP address is available and can be added to one of the nodes, this is the new configuration to add:
[root@node2 ~]# crm configure primitive maxscale_vip ocf:heartbeat:IPaddr2 params ip=192.168.122.125 op monitor interval=10s
The MaxScale process and the VIP must run on the same node, so it is mandatory to add the group 'maxscale_service' to the configuration.
[root@node2 ~]# crm configure group maxscale_service maxscale_vip MaxScale
The final configuration, as seen from another node, is:
[root@node3 ~]# crm configure show
node node1
node node2
node node3
primitive MaxScale lsb:maxscale \
op monitor interval=15s timeout=10s \
op start interval=0 timeout=15s \
op stop interval=0 timeout=30s
primitive maxscale_vip IPaddr2 \
params ip=192.168.122.125 \
op monitor interval=10s
group maxscale_service maxscale_vip MaxScale \
meta target-role=Started
property cib-bootstrap-options: \
dc-version=1.1.10-14.el6_5.3-368c726 \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=3 \
stonith-enabled=false \
no-quorum-policy=ignore \
placement-strategy=balanced \
last-lrm-refresh=1404125486
Check the resource status:
[root@node1 ~]# crm status
Last updated: Mon Jun 30 13:51:29 2014
Last change: Mon Jun 30 13:51:27 2014 via crmd on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
2 Resources configured
Online: [ node1 node2 node3 ]
Resource Group: maxscale_service
maxscale_vip (ocf::heartbeat:IPaddr2): Started node2
MaxScale (lsb:maxscale): Started node2
With both resources on node2, the MaxScale service is now reachable via the configured VIP address 192.168.122.125.
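To confirm that clients can reach MaxScale through the VIP, a connection test such as the following can be used; the listener port 4306 and the username are placeholders that depend on your MaxScale configuration, which is not covered in this HA document:
$ mysql -h 192.168.122.125 -P 4306 -u myuser -p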

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

View File

@ -0,0 +1,204 @@
Getting Started With MariaDB MaxScale
Common Administration Tasks
The purpose of this tutorial is to introduce the MaxScale administrator to a few of the common administration tasks that need to be performed with MaxScale. It is not intended as a reference for all the tasks that may be performed; rather, it is aimed as an introduction for administrators who are new to MaxScale.
# Starting MaxScale
There are several ways to start MaxScale; the most convenient mechanism is probably the Linux service interface. When a MaxScale package is installed, the package manager will also install a script in /etc/init.d which may be used to start and stop MaxScale either directly or via the service interface.
$ service maxscale start
or
$ /etc/init.d/maxscale start
It is also possible to start MaxScale by executing the maxscale command itself; in this case you must ensure that the environment is correctly set up or that command line options are passed. The major elements to consider are the correct setting of the MAXSCALE_HOME directory and the LD_LIBRARY_PATH. The LD_LIBRARY_PATH should include the lib directory that was installed as part of the MaxScale installation, and MAXSCALE_HOME should point to /usr/local/skysql/maxscale if a default installation has been created, or to the directory it was relocated to. Running the executable $MAXSCALE_HOME/bin/maxscale will result in MaxScale running as a daemon process, detached from the terminal in which it was started and using the configuration files that it finds in the $MAXSCALE_HOME directory.
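A minimal sketch of preparing the environment and starting MaxScale manually, assuming the default installation location:
$ export MAXSCALE_HOME=/usr/local/skysql/maxscale
$ export LD_LIBRARY_PATH=$MAXSCALE_HOME/lib:$LD_LIBRARY_PATH
$ $MAXSCALE_HOME/bin/maxscale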
Options may be passed to the MaxScale binary that alter this default behaviour; these options are documented in the table below.
<table>
<tr>
<td>Switch</td>
<td>Long Option</td>
<td>Description</td>
</tr>
<tr>
<td>-d</td>
<td>--nodaemon</td>
<td>Run MaxScale attached to the terminal rather than as a daemon process. This is useful for debugging purposes.</td>
</tr>
<tr>
<td>-c</td>
<td>--homedir=</td>
<td>Ignore the environment variable MAXSCALE_HOME and use the supplied argument instead.</td>
</tr>
<tr>
<td>-f</td>
<td>--config=</td>
<td>Use the filename passed as an argument instead of looking in $MAXSCALE_HOME/etc/MaxScale.cnf</td>
</tr>
<tr>
<td>-l<file>|<shm></td>
<td>--log=</td>
<td>Control where logs are written for the debug and trace level log messages. The default is to write these to a shared memory device; however, using the -lfile or --log=file option will force these to be written to regular files.</td>
</tr>
<tr>
<td>-v</td>
<td>--version</td>
<td>Print version information for MaxScale</td>
</tr>
<tr>
<td>-?</td>
<td>--help</td>
<td>Print usage information for MaxScale</td>
</tr>
</table>
# Stopping MaxScale
There are numerous ways in which MaxScale can be stopped: using the service interface, killing the process, or using the maxadmin utility.
Stopping MaxScale with the service interface is simply a case of using the service stop command or calling the init.d script with the stop argument.
$ service maxscale stop
or
$ /etc/init.d/maxscale stop
MaxScale will also stop gracefully if it receives a hangup signal. To find the process id of the MaxScale server use the ps command or read the contents of the maxscale.pid file located in the same directory as the logs.
$ kill -HUP `cat $MAXSCALE_HOME/log/maxscale.pid`
In order to shutdown MaxScale using the maxadmin command you may either connect with maxadmin in interactive mode or pass the "shutdown maxscale" command you wish to execute as an argument to maxadmin.
$ maxadmin -pskysql shutdown maxscale
# Checking The Status Of The MaxScale Services
It is possible to use the maxadmin command to obtain statistics regarding the services that are configured within your MaxScale configuration file. The maxadmin command "list services" will give very basic information regarding the services that are defined. This command may either be run in interactive mode or passed on the maxadmin command line.
$ maxadmin -pskysql
MaxScale> list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
RWSplitter | readwritesplit | 2 | 4
Cassandra | readconnroute | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
MaxScale>
It should be noted that network listeners count as a user of the service, therefore there will always be one user per network port on which the service listens. More detail can be obtained by using the "show service" command, which is passed a service name.
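For example, to inspect one of the services listed above (the service name comes from the sample output and will differ in your configuration):
$ maxadmin -pskysql show service RWSplitter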
# What Clients Are Connected To MaxScale
To determine which clients are currently connected to MaxScale you can use the "list clients" command within maxadmin. This will give you the IP address and the IDs of the DCB and session for each connection. As with any maxadmin command this can be passed on the command line or typed interactively in maxadmin.
$ maxadmin -pskysql list clients
Client Connections
-----------------+------------------+----------------------+------------
Client | DCB | Service | Session
-----------------+------------------+----------------------+------------
127.0.0.1 | 0x7fe694013410 | CLI | 0x7fe69401ac10
-----------------+------------------+----------------------+------------
$
# Rotating Log Files
MaxScale writes log data into four log files with varying degrees of detail. With the exception of the error log, which cannot be disabled, these log files may be enabled and disabled via the maxadmin interface or in the configuration file. The default behaviour of MaxScale is to grow the log files indefinitely; the administrator must take action to prevent this.
It is possible to rotate either a single log file or all the log files with a single command. When the logfile is rotated, the current log file is closed and a new log file, with an increased sequence number in its name, is created. Log file rotation is achieved by use of the "flush log" or “flush logs” command in maxadmin.
$ maxadmin -pskysql flush logs
This flushes all of the logs, whereas an individual log may be flushed with the "flush log" command.
$ maxadmin -pskysql
MaxScale> flush log error
MaxScale> flush log trace
MaxScale>
This may be integrated into the Linux logrotate mechanism by adding a configuration file to the /etc/logrotate.d directory. If we assume we want to rotate the log files once per month and wish to keep 5 log files worth of history, the configuration file would look like the following.
<table>
<tr>
<td>/usr/local/skysql/maxscale/log/*.log {
monthly
rotate 5
missingok
nocompress
sharedscripts
postrotate
# run if maxscale is running
if test -n "`ps acx|grep maxscale`"; then
/usr/local/skysql/maxscale/bin/maxadmin -pskysql flush logs
fi
endscript
}</td>
</tr>
</table>
One disadvantage of this is that the password used for the maxadmin command has to be embedded in the logrotate configuration file. MaxScale will also rotate all of its log files if it receives the USR1 signal. Using this, the logrotate configuration script can be rewritten as
<table>
<tr>
<td>/usr/local/skysql/maxscale/log/*.log {
monthly
rotate 5
missingok
nocompress
sharedscripts
postrotate
kill -USR1 `cat /usr/local/skysql/maxscale/log/maxscale.pid`
endscript
}</td>
</tr>
</table>
# Taking A Database Server Out Of Use
MaxScale supports the concept of maintenance mode for servers within a cluster. This allows for planned, temporary removal of a database from the cluster without the need to change the MaxScale configuration.
To remove a database server you can use the set server command in the maxadmin utility to set the maintenance mode flag for the server. This may be done interactively within maxadmin or by passing the command on the command line.
MaxScale> set server dbserver3 maintenance
MaxScale>
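The same operation from the command line might look like this (using the example password used elsewhere in this tutorial):
$ maxadmin -pskysql set server dbserver3 maintenance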
This will cause MaxScale to stop routing any new requests to the server, however if there are currently requests executing on the server these will not be interrupted.
To bring the server back into service use the "clear server" command to clear the maintenance mode bit for that server.
MaxScale> clear server dbserver3 maintenance
MaxScale>
Note that maintenance mode is not persistent. If MaxScale restarts when a node is in maintenance mode, a new instance of MaxScale will not honour this mode. If multiple MaxScale instances are configured to use the node then maintenance mode must be set within each MaxScale instance. However, if multiple services within one MaxScale instance are using the server then you only need to set the maintenance mode once on the server for all services to take note of the mode change.

View File

@ -0,0 +1,232 @@
Getting Started With MariaDB MaxScale
Filters
# What Are Filters?
The filter mechanism in MaxScale is a means by which processing can be inserted into the flow of requests and responses between the client connection to MaxScale and the MaxScale connection to the backend database servers. The path from the client side of MaxScale out to the actual database servers can be considered a pipeline, filters can then be placed in that pipeline to monitor, modify, copy or block the content that flows through that pipeline.
# Types Of Filter
Filters can be divided into a number of categories
* Logging filters
Logging filters do not in any way alter the statement or results of the statements that are passed through MaxScale. They merely log some information about some or all of the statements and/or result sets.
Two examples of logging filters are contained within the MaxScale GA, a filter that will log all statements and another that will log only a number of statements, based on the duration of the execution of the query.
* Statement rewriting filters
Statement rewriting filters modify the statements that are passed through the filter. This allows a filter to be used as a mechanism to alter the statements that are seen by the database, an example of the use of this might be to allow an application to remain unchanged when the underlying database changes or to compensate for the migration from one database schema to another.
The MaxScale GA includes a filter that can modify statements by using regular expressions to match statements and replace the matched text.
* Result set manipulation filters
A result set manipulation filter is very similar to a statement rewriting filter but applies to the result set returned rather than the statement executed. An example of this may be obfuscating the values in a column.
The MaxScale 1.0 GA release does not contain any result set manipulation filters.
* Routing hint filters
Routing hint filters are filters that embed hints in the request that can be used by the router onto which the query is passed. These hints include suggested destinations as well as metrics that may be used by the routing process.
The MaxScale 1.0 GA release does not contain any hint filters.
* Firewall filters
A firewall filter is a mechanism that allows queries to be blocked within MaxScale before they are sent on to the database server for execution. They allow constructs or individual queries to be intercepted and give a level of access control that is more flexible than the traditional database grant mechanism.
The 1.0 GA release of MaxScale does not include any firewall filters.
* Pipeline control filters
A pipeline control filter is one that affects how requests are routed within the internal MaxScale components. The most obvious version of this is the ability to add a "tee" connector in the pipeline, duplicating the request and sending it to a second MaxScale service for processing.
The MaxScale 1.0 GA release contains an implementation of a tee filter that allows statements to be matched using a regular expression and passed to a second service within MaxScale.
# Filter Definition
Filters are defined in the configuration file, MaxScale.ini, using a section for each filter instance. The content of the filter sections in the configuration file varies from filter to filter; however, there are always two entries present for every filter: the type and the module.
[MyFilter]
type=filter
module=xxxfilter
The type is used by the configuration manager within MaxScale to determine what this section is defining and the module is the name of the plugin that implements the filter.
When a filter is used within a service in MaxScale the entry filters= is added to the service definition in the ini file section for the service. Multiple filters can be defined using a syntax akin to the Linux shell pipe syntax.
[Split Service]
type=service
router=readwritesplit
servers=dbserver1,dbserver2,dbserver3,dbserver4
user=massi
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=hints | top10
The names used in the filters= parameter are the names of the filter definition sections in the ini file. The same filter definition can be used in multiple services and the same filter module can have multiple instances, each with its own section in the ini file.
# Filter Examples
The filters that are bundled with the MaxScale 1.0 GA release are documented separately, in this section a short overview of how these might be used for some simple tasks will be discussed. These are just examples of how these filters might be used, other filters may also be easily added that will enhance the MaxScale functionality still further.
## Log The 30 Longest Running Queries
The top filter can be used to measure the execution time of every statement within a connection and log the details of the longest running statements.
The first thing to do is to define a filter entry in the ini file for the top filter. In this case we will call it "top30". The type is filter and the module that implements the filter is called topfilter.
[top30]
type=filter
module=topfilter
count=30
filebase=/var/log/DBSessions/top30
In the definition above we have defined two filter-specific parameters: the count of the number of statements to be logged and a filebase that is used to define where to log the information. This filename is a stem to which a session id is added for each new connection that uses the filter.
The filter keeps track of every statement that is executed, monitors the time it takes for a response to come back and uses this as the measure of execution time for the statement. If the time is longer than that of the other statements that have been recorded, then the statement is added to the ordered list within the filter. Once 30 statements have been recorded, the statements with the shortest recorded times are discarded from the list. The result is that at any time the filter has a list of the 30 longest running statements in each session.
It is possible to see what is in the current list by using the maxadmin tool to view the state of the filter by looking at the session data. First you need to find the session id for the session of interest, this can be done using commands such as list sessions. You can then use the show session command to see the details for a particular session.
MaxScale> show session 0x736680
Session 0x736680
State: Session ready for routing
Service: Split Service (0x719f60)
Client DCB: 0x7361a0
Client Address: 127.0.0.1
Connected: Thu Jun 26 10:10:44 2014
Filter: top30
Report size 30
Logging to file /var/log/DBSessions/top30.1.
Current Top 30:
1 place:
Execution time: 23.826 seconds
SQL: select sum(salary), year(from_date) from salaries s, (select distinct year(from_date) as y1 from salaries) y where (makedate(y.y1, 1) between s.from_date and s.to_date) group by y.y1 ("1988-08-01?
2 place:
Execution time: 5.251 seconds
SQL: select d.dept_name as "Department", y.y1 as "Year", count(*) as "Count" from departments d, dept_emp de, (select distinct year(from_date) as y1 from dept_emp order by 1) y where d.dept_no = de.dept_no and (makedate(y.y1, 1) between de.from_date and de.to_date) group by y.y1, d.dept_name order by 1, 2
3 place:
Execution time: 2.903 seconds
SQL: select year(now()) - year(birth_date) as age, gender, avg(salary) as "Average Salary" from employees e, salaries s where e.emp_no = s.emp_no and ("1988-08-01" between from_date AND to_date) group by year(now()) - year(birth_date), gender order by 1,2
...
When the session ends a report will be written for the session into the logfile defined. That report will include the top 30 longest running statements, plus summary data for the session;
* The time the connection was opened.
* The host the connection was from.
* The username used in the connection.
* The duration of the connection.
* The total number of statements executed in the connection.
* The average execution time for a statement in this connection.
## Duplicate Data From Your Application Into Cassandra
The scenario we are using in this example is one in which you have an online gaming application that is designed to work with a MariaDB/MySQL database. The database schema includes a high score table which you would like to have access to in a Cassandra cluster. The application is already using MaxScale to connect to a MariaDB Galera cluster, using a service named BubbleGame. The definition of that service is as follows:
[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
The table you wish to store in Cassandra is called HighScore and will contain the same columns in both the MariaDB table and the Cassandra table. The first step is to install a MariaDB instance with the Cassandra storage engine to act as a bridge server between the relational database and Cassandra. On this bridge server add a table definition for the HighScore table with the engine type set to cassandra. Add this server into the MaxScale configuration and create a service that will connect to this server.
[CassandraDB]
type=server
address=192.168.4.28
port=3306
protocol=MySQLBackend
[Cassandra]
type=service
router=readconnroute
router_options=running
servers=CassandraDB
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
Next add a filter definition for the tee filter that will duplicate insert statements that are destined for the HighScore table to this new service.
[HighScores]
type=filter
module=teefilter
match=insert.*HighScore.*values
service=Cassandra
The above filter definition will cause all statements that match the regular expression insert.*HighScore.*values to be duplicated and sent not just to the original destination, via the router, but also to the service named Cassandra.
The final step is to add the filter to the BubbleGame service to enable the use of the filter.
[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
passwd=6628C50E07CCE1F0392EDEEB9D1203F3
filters=HighScores

View File

@ -0,0 +1,292 @@
Getting Started With MariaDB MaxScale
Connection Routing with Galera Cluster
# Environment & Solution Space
This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a Galera Cluster and wish to balance connections across all the database nodes that are active members of the cluster.
The process of setting up and configuring MaxScale will be covered within this document. However, the installation and configuration of the Galera Cluster will not be covered.
This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere as is installing to non-default locations.
# Process
The steps involved in creating a system from the binary distribution of MaxScale are:
* Install the package relevant to your distribution
* Create the required users in your MariaDB or MySQL Galera cluster
* Create a MaxScale configuration file
## Installation
The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.
Upon successful completion of the installation command you will have MaxScale installed and ready to be run but without a configuration. You must create a configuration file before you first run MaxScale.
## Creating Database Users
MaxScale needs to connect to the backend databases and run queries for two reasons: one is to determine the current state of the database and the other is to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.
The first user required must be able to select data from the table mysql.user; to create this user follow the steps below.
1. Connect to one of the nodes in your Galera cluster as the root user
2. Create the user, substituting the username, password and host on which maxscale runs within your environment
MariaDB [(none)]> create user '*username*'@'*maxscalehost*' identified by '*password*';
**Query OK, 0 rows affected (0.00 sec)**
3. Grant select privileges on the mysql.user table
MariaDB [(none)]> grant SELECT on mysql.user to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.03 sec)**
Additionally, SELECT privileges on the mysql.db table and the SHOW DATABASES privilege are required in order to load database names and grants suitable for database name authorization.
MariaDB [(none)]> GRANT SELECT ON mysql.db TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> GRANT SHOW DATABASES ON *.* TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
The second user is used to monitor the state of the cluster. This user, which may have the same username as the first, requires permission to access the various sources of monitoring data within the information schema. No special permissions need to be granted to the user in order to query the information schema.
If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.
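As a sketch, creating a separate monitoring user would again follow the create user step; the username, password and host below are placeholders:
MariaDB [(none)]> create user 'monitoruser'@'maxscalehost' identified by 'monitorpassword';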
## Creating Your MaxScale Configuration
The MaxScale configuration is held in an ini file named MaxScale.cnf located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.
A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters. Perhaps the most important of these is the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.
[maxscale]
threads=4
Since we are using Galera Cluster and connection routing we want a single port to which the client application can connect; MaxScale will then route connections to this port onwards to the various nodes within the Galera Cluster. To achieve this within MaxScale we need to define a service in the ini file. Create a section for the service in your MaxScale.cnf file and set the type to service; the section name is the name of the service and should be meaningful to the administrator. Names may contain whitespace.
[Galera Service]
type=service
The router for this service is the readconnroute module; the service should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.
[Galera Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3
In order to instruct the router to which servers it should route we must add router options to the service. The router options are compared to the status that the monitor collects from the servers and used to restrict the eligible set of servers to which that service may route. In our case we use the option that restricts us to servers that are fully functional members of the Galera cluster which are able to support SQL operations on the cluster. To achieve this we use the router option synced.
[Galera Service]
type=service
router=readconnroute
router_options=synced
servers=dbserv1, dbserv2, dbserv3
The final step in the service section is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password, either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.
% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%
Once the keys have been created the maxpasswd command can be used to generate the encrypted password.
% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%
The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.
[Galera Service]
type=service
router=readconnroute
router_options=synced
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
This completes the definitions required by the service; however, listening ports must be associated with a service in order to allow network connections. This is done by creating a series of listener sections. These sections again are named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. Each service may have multiple listeners.
[Galera Listener]
type=listener
service=Galera Service
A listener must also define the protocol module it will use for the incoming network protocol, currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.
[Galera Listener]
type=listener
service=Galera Service
protocol=MySQLClient
port=4306
socket=/tmp/DB.Cluster
An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.
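As an illustrative sketch, binding the listener to a single interface would add an address entry; the IP address below is an example only:
[Galera Listener]
type=listener
service=Galera Service
protocol=MySQLClient
address=192.168.3.33
port=4306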
The next stage of the configuration is to define the server information. This defines how to connect to each of the servers within the cluster. Again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol for all database connections is MySQLBackend.
[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend
[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend
[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend
In order for MaxScale to monitor the servers using the correct monitoring mechanism a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.
[Galera Monitor]
type=monitor
module=galeramon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
As with the password definition in the server sections, either plain text or encrypted passwords may be used.
The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This consists of a service section and a listener section.
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603
In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.
# Starting MaxScale
Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.
% maxscale
or
% service maxscale start
Check the error log in /usr/local/skysql/maxscale/log to see if any errors were detected in the configuration file and to confirm that MaxScale has been started. The maxadmin command may also be used to confirm that MaxScale is running and that the services, listeners etc. have been correctly configured.
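A quick way to inspect the error log before running the maxadmin checks shown below (the exact log file names vary between MaxScale versions, so the wildcard here is an assumption):
% ls /usr/local/skysql/maxscale/log
% tail -n 50 /usr/local/skysql/maxscale/log/*err*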
% maxadmin -pskysql list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Galera Service | readconnroute | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
% maxadmin -pskysql list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1 | 192.168.2.1 | 3306 | 0 | Running, Synced, Master
dbserv2 | 192.168.2.2 | 3306 | 0 | Running, Synced, Slave
dbserv3 | 192.168.2.3 | 3306 | 0 | Running, Synced, Slave
-------------------+-----------------+-------+-------------+--------------------
A Galera Cluster is a multi-master clustering technology; however, the monitor is able to impose false notions of master and slave roles within a Galera Cluster in order to facilitate the use of Galera as if it were a standard MySQL Replication setup. This is merely an internal MaxScale convenience and has no impact on the behaviour of the cluster.
% maxadmin -pskysql list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Galera Service | MySQLClient | * | 4306 | Running
CLI | maxscaled | localhost | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
%
MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing, such as using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document "MaxAdmin - The MaxScale Administration & Monitoring Client Application".
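As a sketch of the weighting option only, assuming the weightby mechanism described in the MaxScale Configuration Guide (the parameter name serversize below is hypothetical), the service and server sections might be extended as follows:
# added to the [Galera Service] section (illustrative parameter name)
weightby=serversize
# added to each server section, e.g. [dbserv1] (illustrative value)
serversize=1000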

View File

@ -0,0 +1,298 @@
Getting Started With MariaDB MaxScale
Read/Write Splitting with Galera Cluster
# Environment & Solution Space
This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a Galera Cluster that you wish to use with a single node for updates and one or more read-only nodes. The object of this tutorial is to have a system that appears to the clients of MaxScale as if there is a single database behind MaxScale. MaxScale will split the statements such that write statements will be sent to only one server in the cluster and read statements will be balanced across the remainder of the servers.
The reason for a configuration like this, with all the updates being directed to a single node within what is a multi-master cluster, is to prevent any possible conflict between updates that may run on multiple nodes. Galera is built to provide the mechanism for this situation; however, issues have been known to occur when conflicting transactions are committed on multiple nodes. Some applications are unable to deal with the resulting errors that may be created in this situation.
The process of setting up and configuring MaxScale will be covered within this document. However, the installation and configuration of the Galera Cluster will not be covered in this tutorial.
This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere as is installing to non-default locations.
# Process
The steps involved in creating a system from the binary distribution of MaxScale are:
* Install the package relevant to your distribution
* Create the required users in your Galera Cluster
* Create a MaxScale configuration file
## Installation
The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.
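Purely as an illustration, and assuming the package is named maxscale in the MariaDB repositories, the installation command might look similar to one of the following; the exact command is given on the download site for your distribution.
% yum install maxscale
or
% apt-get install maxscale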
Upon successful completion of the installation command you will have MaxScale installed and ready to be run but without a configuration. You must create a configuration file before you first run MaxScale.
## Creating Database Users
MaxScale needs to connect to the backend databases and run queries for two reasons: one to determine the current state of the database and the other to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.
The first user required must be able to select data from the table mysql.user. To create this user, follow the steps below.
1. Connect to Galera Cluster as the root user
2. Create the user, substituting the username, password and host on which maxscale runs within your environment
MariaDB [(none)]> create user '*username*'@'*maxscalehost*' identified by '*password*';
**Query OK, 0 rows affected (0.00 sec)**
3. Grant select privileges on the mysql.user table.
MariaDB [(none)]> grant SELECT on mysql.user to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.03 sec)**
Additionally, SELECT on the mysql.db table and the SHOW DATABASES privilege are required in order to load database names and the grants used for database-name authorization.
MariaDB [(none)]> GRANT SELECT ON mysql.db TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> GRANT SHOW DATABASES ON *.* TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
The second user is used to monitor the state of the cluster. This user, which may be the same username as the first, requires permissions to access the various sources of monitoring data within the information schema. No special permissions need to be granted to the user in order to query the information schema.
If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.
## Creating Your MaxScale Configuration
MaxScale configuration is held in an ini file, MaxScale.cnf, located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.
A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters. Perhaps the most important of these is the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.
[maxscale]
threads=4
The first step is to create a service for our Read/Write Splitter. Create a section in your MaxScale.cnf file and set the type to service; the section names are the names of the services themselves and should be meaningful to the administrator. Names may contain whitespace.
[Splitter Service]
type=service
The router we need to use for this configuration is the readwritesplit module; the service should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.
[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
The final step in the service sections is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password: either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.
% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%
Once the keys have been created the maxpasswd command can be used to generate the encrypted password.
% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%
The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.
[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
This completes the definitions required by the service; however, listening ports must be associated with the service in order to allow network connections. This is done by creating a series of listener sections. This section again is named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. A service may have multiple listeners.
[Splitter Listener]
type=listener
service=Splitter Service
A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.
[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306
socket=/tmp/ClusterMaster
An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.
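For example, to bind the Splitter listener to a single interface of a multi-homed host an address entry could be added; the address shown here is purely illustrative.
[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
address=192.168.2.10
port=3306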
The next stage of the configuration is to define the server information. This defines how to connect to each of the servers within the cluster; again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol module for all database connections is MySQLBackend.
[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend
[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend
[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend
In order for MaxScale to monitor the servers using the correct monitoring mechanisms a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.
[Galera Monitor]
type=monitor
module=galeramon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
As with the password definition in the server sections, either plain text or encrypted passwords may be used.
This monitor module will assign one node within the Galera Cluster as the current master and the other nodes as slaves. Only those nodes that are active members of the cluster are considered when making the choice of master node. Normally the master node will be the node with the lowest value of the status variable WSREP_LOCAL_INDEX. When cluster membership changes a new master may be elected. In order to prevent changes of the node that is currently master, a parameter can be added to the monitor that will result in the current master remaining as master even if a node with a lower value of WSREP_LOCAL_INDEX joins the cluster. This parameter is called disable_master_failback.
[Galera Monitor]
type=monitor
module=galeramon
disable_master_failback=1
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
Using this option the master node will only change if there is a problem with the current master and never because other nodes have joined the cluster.
The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This creates a service section and a listener section.
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603
In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.
# Starting MaxScale
Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.
% maxscale
or
% service maxscale start
Check the error log in /usr/local/skysql/maxscale/log to see if any errors are detected in the configuration file and to confirm MaxScale has been started. Also the maxadmin command may be used to confirm that MaxScale is running and the services, listeners etc have been correctly configured.
% maxadmin -pskysql list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Splitter Service | readwritesplit | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
% maxadmin -pskysql list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1 | 192.168.2.1 | 3306 | 0 | Running, Synced, Master
dbserv2 | 192.168.2.2 | 3306 | 0 | Running, Synced, Slave
dbserv3 | 192.168.2.3 | 3306 | 0 | Running, Synced, Slave
-------------------+-----------------+-------+-------------+--------------------
A Galera Cluster is a multi-master clustering technology; however, the monitor is able to impose false notions of master and slave roles within a Galera Cluster in order to facilitate the use of Galera as if it were a standard MySQL Replication setup. This is merely an internal MaxScale convenience and has no impact on the behaviour of the cluster, but it does allow the monitor to create these pseudo roles which are utilised by the Read/Write Splitter.
% maxadmin -pskysql list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Splitter Service | MySQLClient | * | 3306 | Running
CLI | maxscaled | localhost | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
%
MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay from the current master, or using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document "MaxAdmin - The MaxScale Administration & Monitoring Client Application".
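As a sketch of the weighting option only, assuming the weightby mechanism described in the MaxScale Configuration Guide (the parameter name serversize below is hypothetical), the service and server sections might be extended as follows:
# added to the [Splitter Service] section (illustrative parameter name)
weightby=serversize
# added to each server section, e.g. [dbserv1] (illustrative value)
serversize=1000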

View File

@ -0,0 +1,354 @@
Getting Started With MariaDB MaxScale
Connection Routing with MySQL Replication
# Environment & Solution Space
This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a MySQL Replication Cluster with one master and multiple slave servers. The object of this tutorial is to have a system that has two ports available, one for write connections to the database cluster and the other for read connections to the database.
The process of setting up and configuring MaxScale will be covered within this document. However, the installation and configuration of the MySQL Replication subsystem will not be covered, nor will any discussion of management tools to handle automated or semi-automated failover of the replication cluster.
This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere as is installing to non-default locations.
# Process
The steps involved in creating a system from the binary distribution of MaxScale are:
* Install the package relevant to your distribution
* Create the required users in your MariaDB or MySQL Replication cluster
* Create a MaxScale configuration file
## Installation
The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.
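Purely as an illustration, and assuming the package is named maxscale in the MariaDB repositories, the installation command might look similar to one of the following; the exact command is given on the download site for your distribution.
% yum install maxscale
or
% apt-get install maxscale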
Upon successful completion of the installation command you will have MaxScale installed and ready to be run but without a configuration. You must create a configuration file before you first run MaxScale.
## Creating Database Users
MaxScale needs to connect to the backend databases and run queries for two reasons: one to determine the current state of the database and the other to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.
The first user required must be able to select data from the table mysql.user. To create this user, follow the steps below.
1. Connect to the current master server in your replication tree as the root user
2. Create the user, substituting the username, password and host on which maxscale runs within your environment
MariaDB [(none)]> create user '*username*'@'*maxscalehost*' identified by '*password*';
**Query OK, 0 rows affected (0.00 sec)**
3. Grant select privileges on the mysql.user table
MariaDB [(none)]> grant SELECT on mysql.user to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.03 sec)**
Additionally, SELECT on the mysql.db table and the SHOW DATABASES privilege are required in order to load database names and the grants used for database-name authorization.
MariaDB [(none)]> GRANT SELECT ON mysql.db TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> GRANT SHOW DATABASES ON *.* TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
The second user is used to monitor the state of the cluster. This user, which may be the same username as the first, requires permissions to access the various sources of monitoring data. In order to monitor a replication cluster this user must be granted the REPLICATION SLAVE and REPLICATION CLIENT privileges.
MariaDB [(none)]> grant REPLICATION SLAVE on *.* to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> grant REPLICATION CLIENT on *.* to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.00 sec)**
If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.
## Creating Your MaxScale Configuration
MaxScale configuration is held in an ini file, MaxScale.cnf, located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.
A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters. Perhaps the most important of these is the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.
[maxscale]
threads=4
Since we are using MySQL Replication and connection routing we want two different ports to which the client application can connect; one that will be directed to the current master within the replication cluster and another that will load balance between the slaves. To achieve this within MaxScale we need to define two services in the ini file; one for the read/write operations that should be executed on the master server and another for connections to one of the slaves. Create a section for each in your MaxScale.cnf file and set the type to service; the section names are the names of the services themselves and should be meaningful to the administrator. Names may contain whitespace.
[Write Service]
type=service
[Read Service]
type=service
The router for these two sections is identical, the readconnroute module; the services should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.
[Write Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3
[Read Service]
type=service
router=readconnroute
servers=dbserv1, dbserv2, dbserv3
In order to instruct the router to which servers it should route we must add router options to the service. The router options are compared to the status that the monitor collects from the servers and used to restrict the eligible set of servers to which that service may route. In our case we use the two options master and slave for our two services.
[Write Service]
type=service
router=readconnroute
router_options=master
servers=dbserv1, dbserv2, dbserv3
[Read Service]
type=service
router=readconnroute
router_options=slave
servers=dbserv1, dbserv2, dbserv3
The final step in the service sections is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password: either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.
% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%
Once the keys have been created the maxpasswd command can be used to generate the encrypted password.
% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%
The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.
[Write Service]
type=service
router=readconnroute
router_options=master
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
[Read Service]
type=service
router=readconnroute
router_options=slave
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
This completes the definitions required by the services; however, listening ports must be associated with the services in order to allow network connections. This is done by creating a series of listener sections. These sections again are named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. Each service may have multiple listeners.
[Write Listener]
type=listener
service=Write Service
[Read Listener]
type=listener
service=Read Service
A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.
[Write Listener]
type=listener
service=Write Service
protocol=MySQLClient
port=4306
socket=/tmp/ClusterMaster
[Read Listener]
type=listener
service=Read Service
protocol=MySQLClient
port=4307
An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.
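For example, the Read listener could be bound to a single interface of a multi-homed host by adding an address entry; the address shown here is purely illustrative.
[Read Listener]
type=listener
service=Read Service
protocol=MySQLClient
address=192.168.2.10
port=4307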
The next stage of the configuration is to define the server information. This defines how to connect to each of the servers within the cluster; again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol for all database connections is MySQLBackend.
[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend
[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend
[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend
In order for MaxScale to monitor the servers using the correct monitoring mechanisms a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.
[Replication Monitor]
type=monitor
module=mysqlmon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
As with the password definition in the server sections, either plain text or encrypted passwords may be used.
The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This creates a service section and a listener section.
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603
In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.
# Starting MaxScale
Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.
% maxscale
or
% service maxscale start
Check the error log in /usr/local/skysql/maxscale/log to see if any errors are detected in the configuration file and to confirm MaxScale has been started. Also the maxadmin command may be used to confirm that MaxScale is running and the services, listeners etc have been correctly configured.
% maxadmin -pskysql list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Read Service | readconnroute | 1 | 1
Write Service | readconnroute | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
% maxadmin -pskysql list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1 | 192.168.2.1 | 3306 | 0 | Running, Slave
dbserv2 | 192.168.2.2 | 3306 | 0 | Running, Master
dbserv3 | 192.168.2.3 | 3306 | 0 | Running, Slave
-------------------+-----------------+-------+-------------+--------------------
% maxadmin -pskysql list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Read Service | MySQLClient | * | 4307 | Running
Write Service | MySQLClient | * | 4306 | Running
CLI | maxscaled | localhost | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
%
MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay from the current master, or using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document "MaxAdmin - The MaxScale Administration & Monitoring Client Application".
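As a sketch of the weighting option only, assuming the weightby mechanism described in the MaxScale Configuration Guide (the parameter name serversize below is hypothetical), the Read Service and the server sections might be extended as follows:
# added to the [Read Service] section (illustrative parameter name)
weightby=serversize
# added to each server section, e.g. [dbserv1] (illustrative value)
serversize=1000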

View File

@ -0,0 +1,284 @@
Getting Started With MariaDB MaxScale
Read/Write Splitting with MySQL Replication
# Environment & Solution Space
This document is designed as a quick introduction to setting up MaxScale in an environment in which you have a MySQL Replication Cluster with one master and multiple slave servers. The object of this tutorial is to have a system that appears to the clients of MaxScale as if there is a single database behind MaxScale. MaxScale will split the statements such that write statements will be sent to the current master server in the replication cluster and read statements will be balanced across a number of the slave servers.
The process of setting up and configuring MaxScale will be covered within this document. However, the installation and configuration of the MySQL Replication subsystem will not be covered, nor will any discussion of management tools to handle automated or semi-automated failover of the replication cluster.
This tutorial will assume the user is running from one of the binary distributions available and has installed this in the default location. Building from source code in GitHub is covered in guides elsewhere as is installing to non-default locations.
# Process
The steps involved in creating a system from the binary distribution of MaxScale are:
* Install the package relevant to your distribution
* Create the required users in your MariaDB or MySQL Replication cluster
* Create a MaxScale configuration file
## Installation
The precise installation process will vary from one distribution to another; details of what to do with the RPM and DEB packages can be found on the download site when you select the distribution you are downloading from. The process involves setting up your package manager to include the MariaDB repositories and then running the package manager for your distribution, RPM or apt-get.
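Purely as an illustration, and assuming the package is named maxscale in the MariaDB repositories, the installation command might look similar to one of the following; the exact command is given on the download site for your distribution.
% yum install maxscale
or
% apt-get install maxscale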
Upon successful completion of the installation command you will have MaxScale installed and ready to be run but without a configuration. You must create a configuration file before you first run MaxScale.
## Creating Database Users
MaxScale needs to connect to the backend databases and run queries for two reasons: one to determine the current state of the database and the other to retrieve the user information for the database cluster. This may be done either using two separate usernames or with a single user.
The first user required must be able to select data from the table mysql.user. To create this user, follow the steps below.
1. Connect to the current master server in your replication tree as the root user
2. Create the user, substituting the username, password and host on which maxscale runs within your environment
MariaDB [(none)]> create user '*username*'@'*maxscalehost*' identified by '*password*';
**Query OK, 0 rows affected (0.00 sec)**
3. Grant select privileges on the mysql.user table.
MariaDB [(none)]> grant SELECT on mysql.user to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.03 sec)**
Additionally, SELECT on the mysql.db table and the SHOW DATABASES privilege are required in order to load database names and the grants used for database-name authorization.
MariaDB [(none)]> GRANT SELECT ON mysql.db TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> GRANT SHOW DATABASES ON *.* TO 'username'@'maxscalehost';
**Query OK, 0 rows affected (0.00 sec)**
The second user is used to monitor the state of the cluster. This user, which may be the same username as the first, requires permissions to access the various sources of monitoring data. In order to monitor a replication cluster this user must be granted the REPLICATION SLAVE and REPLICATION CLIENT privileges.
MariaDB [(none)]> grant REPLICATION SLAVE on *.* to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.00 sec)**
MariaDB [(none)]> grant REPLICATION CLIENT on *.* to '*username*'@'*maxscalehost*';
**Query OK, 0 rows affected (0.00 sec)**
If you wish to use two different usernames for the two different roles of monitoring and collecting user information then create a different username using the first two steps from above.
## Creating Your MaxScale Configuration
MaxScale configuration is held in an ini file, MaxScale.cnf, located in the directory $MAXSCALE_HOME/etc; if you have installed in the default location then this file is available as /usr/local/skysql/maxscale/etc/MaxScale.cnf. It is not created as part of the installation process and must be created manually. A template file does exist within this directory that may be used as a basis for your configuration.
A global, maxscale, section is included within every MaxScale configuration file; this is used to set the values of various MaxScale-wide parameters. Perhaps the most important of these is the number of threads that MaxScale will use to execute the code that forwards requests and handles responses for clients.
[maxscale]
threads=4
The first step is to create a service for our Read/Write Splitter. Create a section in your MaxScale.cnf file and set the type to service; the section names are the names of the services themselves and should be meaningful to the administrator. Names may contain whitespace.
[Splitter Service]
type=service
The router we need to use for this configuration is the readwritesplit module; the service should also be provided with the list of servers that will be part of the cluster. The server names given here are actually the names of server sections in the configuration file and not the physical hostnames or addresses of the servers.
[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
The final step in the service sections is to add the username and password that will be used to populate the user data from the database cluster. There are two options for representing the password: either plain text or encrypted passwords may be used. In order to use encrypted passwords a set of keys must be generated that will be used by the encryption and decryption process. To generate the keys use the maxkeys command and pass the name of the secrets file in which the keys are stored.
% maxkeys /usr/local/skysql/maxscale/etc/.secrets
%
Once the keys have been created the maxpasswd command can be used to generate the encrypted password.
% maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
%
The username and password, either encrypted or plain text, are stored in the service section using the user and passwd parameters.
[Splitter Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
This completes the definitions required by the service; however, listening ports must be associated with the service in order to allow network connections. This is done by creating a series of listener sections. This section again is named for the convenience of the administrator and should be of type listener, with an entry labelled service which contains the name of the service to associate the listener with. A service may have multiple listeners.
[Splitter Listener]
type=listener
service=Splitter Service
A listener must also define the protocol module it will use for the incoming network protocol; currently this should be the MySQLClient protocol for all database listeners. The listener may then supply a network port to listen on and/or a socket within the file system.
[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306
socket=/tmp/ClusterMaster
An address parameter may be given if the listener is required to bind to a particular network address when using hosts with multiple network addresses. The default behaviour is to listen on all network interfaces.
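For example, to bind the Splitter listener to a single interface of a multi-homed host an address entry could be added; the address shown here is purely illustrative.
[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
address=192.168.2.10
port=3306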
The next stage of the configuration is to define the server information. This defines how to connect to each of the servers within the cluster; again a section is created for each server, with the type set to server, the network address and port to connect to, and the protocol to use to connect to the server. Currently the protocol module for all database connections is MySQLBackend.
[dbserv1]
type=server
address=192.168.2.1
port=3306
protocol=MySQLBackend
[dbserv2]
type=server
address=192.168.2.2
port=3306
protocol=MySQLBackend
[dbserv3]
type=server
address=192.168.2.3
port=3306
protocol=MySQLBackend
In order for MaxScale to monitor the servers using the correct monitoring mechanisms a section should be provided that defines the monitor to use and the servers to monitor. Once again a section is created with a symbolic name for the monitor, with the type set to monitor. Parameters are added for the module to use, the list of servers to monitor and the username and password to use when connecting to the servers with the monitor.
[Replication Monitor]
type=monitor
module=mysqlmon
servers=dbserv1, dbserv2, dbserv3
user=maxscale
passwd=96F99AA1315BDC3604B006F427DD9484
As with the password definition in the server sections, either plain text or encrypted passwords may be used.
The final stage in the configuration is to add the service which is used by the maxadmin command to connect to MaxScale for monitoring and administration purposes. This creates a service section and a listener section.
[CLI]
type=service
router=cli
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=localhost
port=6603
In the case of the example above it should be noted that an address parameter has been given to the listener; this limits connections to maxadmin commands that are executed on the same machine that hosts MaxScale.
# Starting MaxScale
Upon completion of the configuration process MaxScale is ready to be started for the first time. This may either be done manually by running the maxscale command or via the service interface.
% maxscale
or
% service maxscale start
Check the error log in /usr/local/skysql/maxscale/log to see if any errors are detected in the configuration file and to confirm MaxScale has been started. Also the maxadmin command may be used to confirm that MaxScale is running and the services, listeners etc have been correctly configured.
% maxadmin -pskysql list services
Services.
--------------------------+----------------------+--------+---------------
Service Name | Router Module | #Users | Total Sessions
--------------------------+----------------------+--------+---------------
Splitter Service | readwritesplit | 1 | 1
CLI | cli | 2 | 2
--------------------------+----------------------+--------+---------------
% maxadmin -pskysql list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
dbserv1 | 192.168.2.1 | 3306 | 0 | Running, Slave
dbserv2 | 192.168.2.2 | 3306 | 0 | Running, Master
dbserv3 | 192.168.2.3 | 3306 | 0 | Running, Slave
-------------------+-----------------+-------+-------------+--------------------
% maxadmin -pskysql list listeners
Listeners.
---------------------+--------------------+-----------------+-------+--------
Service Name | Protocol Module | Address | Port | State
---------------------+--------------------+-----------------+-------+--------
Splitter Service | MySQLClient | * | 3306 | Running
CLI | maxscaled | localhost | 6603 | Running
---------------------+--------------------+-----------------+-------+--------
%
MaxScale is now ready to start accepting client connections and routing them to the master or slaves within your cluster. Other configuration options are available that can alter the criteria used for routing; these include monitoring the replication lag within the cluster and routing only to slaves that are within a predetermined delay from the current master, or using weights to obtain unequal balancing operations. These options may be found in the MaxScale Configuration Guide. More detail on the use of maxadmin can be found in the document "MaxAdmin - The MaxScale Administration & Monitoring Client Application".
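As a sketch of the replication lag option only, assuming the max_slave_replication_lag router parameter and the detect_replication_lag monitor option described in the MaxScale Configuration Guide, the configuration might be extended as follows; verify the exact parameter names for your MaxScale version.
# added to the [Splitter Service] section: route reads only to slaves no more than 30 seconds behind (illustrative value)
max_slave_replication_lag=30
# added to the [Replication Monitor] section: enable measurement of replication lag
detect_replication_lag=1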